Key takeaways (TL;DR)
2025 marks the real shift from the “honor system” to effective age-verification controls on social media.
Australia is preparing a ban for under-16s, the UK requires “highly effective” methods, and the EU is pushing an interoperable, privacy-first approach.
The best balance between compliance and UX combines AI-based age estimation with liveness as the first line, and documentary fallback only when confidence is low.
Penalties can reach £18 million or 10% of global revenue in the UK and up to 6% of worldwide turnover in the EU; Australia is planning multimillion-dollar fines under its new under-16 regime.
Social platforms are entering a new stage where age verification is mandatory. For fast-growing services with a high share of minors, the question is no longer whether to implement it, but how to do it with low friction, global coverage, and strong privacy.
Regulators have moved. In 2025, Australia is advancing toward a ban for users under 16, deciding inclusion service by service; the United Kingdom demands “highly effective” checks; and the European Union is launching an interoperable pilot with digital identity and age proofs. The result is a landscape where assurance and verification coexist, and where AI-based age estimation is emerging as the preferred pattern for balancing compliance and conversion.
Why age verification is no longer optional
Regulatory pressure has accelerated via three vectors:
- Child health and safety
- Platform accountability for harmful content and addictive design
- International alignment of standards
In this context, regulators want social networks to move from simple self-attestation (“Yes, I’m over XX”) to verifiable methods and age-appropriate design. In Europe, eleven countries asked the Commission to “abandon the status quo” and require effective age verification, looking toward an interoperable solution such as identity wallets within a common framework.
Regulatory map in 2025–2026: Australia, the United Kingdom, and the European Union
- Australia. The government is notifying social platforms as it determines which services must prohibit accounts for under-16s and how enforcement will work. Services such as WhatsApp, Roblox, Reddit, and Discord are on the regulator’s radar. The start date is slated for December 2025, with a self-assessment process and potential scope disputes by service type.
- United Kingdom. The Online Safety Act (official explainer) requires user-to-user services to prevent minors from accessing harmful content and enables significant sanctions for non-compliance.
- European Union. Spain and ten other countries want age verification mandatory for access to social networks. The goal: an interoperable network and privacy-preserving age proofs aligned with the EUDI Wallet and the Commission’s age-assurance blueprint.
Obligations by region (2025)
Will age verification become mandatory across the EU?
All signs suggest age verification will become mandatory for social networks in the European Union sooner rather than later. What remains open is how platforms must verify users. The trend favors modular integrations supporting verifiable credentials and reusable age proofs across services, with a privacy-preserving approach detailed by the Commission (see the age-assurance blueprint).
What are the practical implications?
To operate safely and comply, social platforms should:
- Design region-aware workflows (a minimal configuration sketch follows this list).
- Calibrate thresholds (13+, 16+, 18+) per product.
- Measure verification flow performance (abandonment, verification time, etc.) and produce audit-ready reports for regulators.
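To make “region-aware” concrete, here is a minimal TypeScript sketch of a per-market policy lookup. The country codes, thresholds, and field names are illustrative assumptions, not legal guidance or a Didit API; real values must track each market’s rules.

```typescript
// Illustrative per-region age policy lookup. All thresholds are assumptions
// for the sketch; confirm the legally required age per market.
interface RegionPolicy {
  minimumAge: number;                           // gate applied at sign-up
  method: "estimation_first" | "document_only"; // preferred verification path
}

const policies: Record<string, RegionPolicy> = {
  AU: { minimumAge: 16, method: "estimation_first" }, // under-16 ban slated for Dec 2025
  GB: { minimumAge: 18, method: "estimation_first" }, // "highly effective" checks for adult content
  DEFAULT: { minimumAge: 13, method: "estimation_first" },
};

function policyFor(countryCode: string): RegionPolicy {
  return policies[countryCode] ?? policies.DEFAULT;
}
```

A lookup like this keeps regulatory changes localized to configuration rather than scattered through sign-up code.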
Fines, risks, and reputation: the cost of non-compliance
Doing nothing brings penalties. In the UK, fines can reach up to £18 million or 10% of global revenue (whichever is higher), plus service blocking in severe cases, per the Online Safety Act explainer. In the EU, under the DSA, penalties can be up to 6% of worldwide turnover along with corrective orders, while Australia is preparing a multimillion-dollar fine regime as the under-16 ban takes effect (ABC coverage).
Beyond the financial hit, reputational exposure and operational drag (audits, remediation plans, restrictions) can harm growth and trust with users, brands, and authorities.
How can social networks verify age?
Before picking technology, distinguish verification (conclusive proof) from assurance (high probability). In practice, many platforms combine AI-based age estimation as the first line, with documentary fallback in edge cases.
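One way to keep that distinction explicit in code is to model the two outcomes as separate cases. The TypeScript type below is an illustrative sketch, not a prescribed schema; the field names are assumptions.

```typescript
// Sketch of the two kinds of age signal a platform might record:
// "verification" is conclusive proof (e.g. a document check), while
// "assurance" is a probabilistic estimate carrying a confidence score.
type AgeSignal =
  | { kind: "verification"; over: number; source: "document" }
  | { kind: "assurance"; over: number; confidence: number; source: "estimation" };
```

Modeling them separately makes it hard to accidentally treat a probabilistic estimate as conclusive proof downstream.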
Social platforms can use various methods to verify user age. Compare them in the table below.
Why is there opposition to age verification on social platforms?
The main arguments involve privacy, accuracy, and exclusion:
- Privacy & surveillance: risk of mass data collection if the architecture is centralized.
- Bias & accuracy: some models perform unevenly across demographics.
- Exclusion: users without documents or in poor capture conditions may be left out if only one method is allowed.
These tensions coexist with the need to reduce harm to minors, which is why regulators promote privacy-preserving architectures and interoperable frameworks.
How to verify age online—safely
To address common concerns, age verification should follow three operational principles:
- Data minimization: request only what’s needed to prove the threshold (13+, 16+, or 18+).
- Deletion: short, clear retention policies that are actually enforced, with audit logs (see the record sketch below).
- Transparency: explain why verification is required, the method, and available alternatives.
A low-friction pattern—AI age estimation with calibrated thresholds and liveness—helps validate age without hurting UX or conversion.
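As an illustration of data minimization and enforceable retention, the TypeScript sketch below stores only the threshold outcome, never the estimated age, date of birth, or selfie. The record shape and the 30-day window are assumptions, not a prescribed schema.

```typescript
// Minimized verification record: the pass/fail outcome is the only age fact
// retained, and every record carries an explicit deletion deadline.
interface AgeCheckRecord {
  userId: string;
  threshold: 13 | 16 | 18;           // which gate was checked
  passed: boolean;                   // the only age-related fact stored
  method: "estimation" | "document";
  checkedAt: Date;
  purgeAfter: Date;                  // enforced by a scheduled deletion job
}

const RETENTION_DAYS = 30; // assumed window; align with your legal basis

function minimize(
  userId: string,
  threshold: 13 | 16 | 18,
  passed: boolean,
  method: "estimation" | "document"
): AgeCheckRecord {
  const now = new Date();
  return {
    userId,
    threshold,
    passed,
    method,
    checkedAt: now,
    purgeAfter: new Date(now.getTime() + RETENTION_DAYS * 24 * 60 * 60 * 1000),
  };
}
```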

Didit Age Estimation for social media
Social platforms must curb underage access without turning sign-up into an obstacle. Didit’s Age Estimation strikes that balance: it estimates age with AI from a selfie (with liveness checks to prevent spoofing) and, only when confidence is low, triggers a documentary fallback. The result is a smooth onboarding for most users and strong controls for sensitive cases.
You can use Didit’s age verification at three key moments:
- During sign-up, as the first line of defense.
- Before access to higher-risk features such as DMs or groups (see the gating sketch after this list).
- Whenever regulation requires specific thresholds.
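Gating higher-risk features can be as simple as a per-feature check against the highest age threshold the user has already proven. The sketch below is illustrative; the feature names, session shape, and thresholds are assumptions rather than Didit’s API.

```typescript
// Illustrative feature gate: each feature declares the minimum verified age
// threshold it requires. Thresholds here are assumptions; tune per market.
type Feature = "signup" | "direct_messages" | "groups";

const featureThresholds: Record<Feature, number> = {
  signup: 13,
  direct_messages: 16,
  groups: 16,
};

interface Session {
  verifiedThreshold?: number; // highest age threshold this user has passed
}

function canAccess(session: Session, feature: Feature): boolean {
  return (session.verifiedThreshold ?? 0) >= featureThresholds[feature];
}
```

When canAccess returns false, route the user into the verification flow for the missing threshold rather than hard-blocking, so legitimate users can still proceed.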
How does age estimation work?
- Real-time selfie capture.
- AI estimates age and returns a confidence score.
- Configured thresholds determine whether to auto-approve, route to documentary fallback, or block when the result is clearly negative (a routing sketch follows).
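Putting those steps together, the routing logic might look like the TypeScript sketch below. The result shape (estimatedAge, confidence, livenessPassed) and the cutoffs are assumptions for illustration; see the Age Estimation docs for the actual response contract.

```typescript
// Illustrative threshold routing on an age-estimation result. The 0.9
// confidence cutoff and the 3-year buffer are assumptions; calibrate both
// against your tolerance for false accepts.
interface EstimationResult {
  estimatedAge: number;    // model's point estimate
  confidence: number;      // score in [0, 1]
  livenessPassed: boolean; // selfie passed anti-spoofing checks
}

type Decision = "approve" | "document_fallback" | "block";

function route(result: EstimationResult, requiredAge: number): Decision {
  if (!result.livenessPassed) {
    return "document_fallback"; // failed capture or possible spoofing: escalate
  }
  const buffer = 3; // borderline estimates near the threshold go to fallback
  if (result.confidence >= 0.9 && result.estimatedAge >= requiredAge + buffer) {
    return "approve";
  }
  if (result.confidence >= 0.9 && result.estimatedAge < requiredAge - buffer) {
    return "block"; // clearly under the required age
  }
  return "document_fallback"; // gray area: ask for an ID document
}
```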
Want to learn more? Check out our technical docs on Age Estimation.
5 benefits of Didit Age Estimation for a social network
- Frictionless UX: most users proceed with a selfie in seconds.
- Faster acceptance: fewer steps to validate age reduce drop-off.
- Secure fallback: in gray areas, automatic fallback helps you comply without penalizing legitimate users.
- Market flexibility: configure thresholds and policies by country and age, aligned with regulatory changes.
- Privacy by design: use the minimum data necessary to prove the age threshold.
Conclusion: Age verification on social media is a must-have
2025 cements the consensus: age verification on social media is essential. The winning approach combines AI age estimation + liveness as the first line with smart documentary fallback. Platforms that adopt adaptive workflows and privacy by design are best positioned to comply without sacrificing user experience.
