Key takeaways (TL;DR)
The Online Safety Act 2023 requires “highly effective” age checks that restrict harmful content to adults; Ofcom oversees compliance.
Non-compliance penalties: up to £18m or 10% of global revenue, whichever is greater, plus potential UK service blocking.
Self-attestation (“I’m 18+”) is not enough; expected techniques include facial age estimation, document verification with liveness, and identity integrations that return a binary yes/no.
Recommended pattern: low-friction primary method (age estimation) + documentary fallback for uncertain cases; privacy by design and continuous measurement.
Age verification in the United Kingdom is no longer optional or cosmetic: starting in 2025, platforms operating in the UK must stop minors from accessing harmful content and prove it using “highly effective” methods, under the supervision of Ofcom, the national communications regulator.
Failure to comply can lead to fines of up to £18 million or 10% of global revenue, whichever is greater, plus service blocking and reputational damage. As a result, companies operating in the UK will face growing scrutiny over how they handle personal data and how smooth their age checks are.
This guide helps your product, legal, and compliance teams understand the new rules and the trade-offs to consider when choosing and implementing an age-verification solution that reduces friction and speeds up acceptance.
The Online Safety Act 2023 sets a clear duty for platforms: prevent minors from accessing harmful content and demonstrate it using measures that are “highly effective” at restricting access to adults. Roll-out is staged: for Part 5 (services that publish their own pornography), obligations started on January 17, 2025; for Part 3 (user-to-user and search), child access assessments were due by April 16, 2025 and, from July 25, 2025 —Age Verification Day— all services that allow pornography must run robust age checks in production. On the same day, Ofcom began supervisory checks and the first investigations.
This framework is backed by children’s protection codes and Ofcom guidance (January–April 2025), which clarify expectations on effectiveness, proportionality, and privacy. Self-declaration (“Yes, I’m 18”) is out; technical evidence is in: AI-based biometric age estimation, document verification with facial matching (1:1) and liveness, or identity integrations that return a binary yes/no with minimal data transfer, such as identity wallets. These techniques are acceptable when they are reliable, robust, and continuously monitored.
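To make the binary, minimal-data idea concrete, here is a minimal sketch of the result shape a platform might accept from any of these methods; the type and field names are illustrative assumptions, not a standard or a vendor API.

```typescript
// Illustrative shape of a minimal age-assurance result: the platform learns
// only whether the user cleared the threshold, never who they are.
type AgeCheckMethod = "facial_estimation" | "photo_id_match" | "digital_wallet";

interface AgeCheckResult {
  over18: boolean;        // the only attribute the platform actually needs
  method: AgeCheckMethod; // which "highly effective" technique produced it
  confidence: number;     // 0..1, used to decide whether to escalate
  checkedAt: string;      // ISO 8601 timestamp, kept as audit evidence
  // Deliberately absent: name, date of birth, document images, biometrics.
}

function isAccessGranted(result: AgeCheckResult, minConfidence = 0.9): boolean {
  // Grant access only on a confident positive result; anything else should
  // fall through to a higher-assurance fallback method.
  return result.over18 && result.confidence >= minConfidence;
}
```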
If you fail to comply, Ofcom can impose fines of up to £18 million or 10% of global revenue, whichever is greater, and can also seek service blocking in the UK. The regulator expects risk-based decisions, documentation, effectiveness metrics, and a privacy-by-design implementation that doesn’t turn verification into a bottleneck.
The duty isn’t limited to “adult sites.” Since July 25, 2025, any service that publishes or allows pornography (whether owned content or user-generated) must deploy “highly effective” age controls to stop minors. The framework distinguishes two cases: Part 5 services that publish their own pornographic content, and Part 3 user-to-user and search services where such content can be shared or surfaced by users.
In practice, the scope includes UGC platforms with 18+ sub-forums or channels, streaming services with community spaces where such content might surface, search engines that index and present pornographic results to UK users, and even generative AI tools that publish sexually explicit material within the service. Wherever there’s a reasonable risk of exposure, Ofcom expects systems and processes that prevent that exposure—warnings alone aren’t enough.
Beyond pornography, Part 3 children’s codes require managing risks around self-harm, suicide, eating disorders, and other harms to minors. Here, age assurance sits alongside design and moderation measures (e.g., limiting DMs from strangers, tuning recommendations, enabling safe search by default, and parental controls), proportionate to risk.
Official guidance recognizes several “highly effective” methods that can be combined as defense-in-depth: facial age estimation, photo-ID matching with liveness, digital identity services and wallets, open banking, credit card checks, mobile-network operator checks, and email-based age estimation.
Selection should balance risk level, content context, jurisdiction, privacy, user acceptance, and total cost.
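As a concrete illustration of combining these methods, here is a hedged sketch of a fallback chain that tries the lowest-friction method first and escalates only on uncertainty; the function names and thresholds are assumptions, not any vendor’s API.

```typescript
// Defense-in-depth: try the lowest-friction method first and escalate only
// when it cannot produce a confident result. Verifier implementations are
// hypothetical placeholders.
type Outcome = { over18: boolean; confidence: number };
type Verifier = (sessionId: string) => Promise<Outcome>;

async function verifyAge(
  sessionId: string,
  chain: Verifier[],   // ordered from least to most friction
  minConfidence = 0.9, // below this, escalate to the next method
): Promise<boolean> {
  for (const verify of chain) {
    const { over18, confidence } = await verify(sessionId);
    if (confidence >= minConfidence) return over18; // confident answer: stop
    // Low confidence (e.g. estimated age near the 18 threshold): continue
    // to the next, higher-assurance method in the chain.
  }
  return false; // no confident result from any method: deny by default
}

// Usage: facial age estimation first, document + liveness as the fallback.
// verifyAge(sessionId, [estimateAgeFromSelfie, checkDocumentWithLiveness]);
```

Denying by default when the chain is exhausted keeps the failure mode on the safe side of the “highly effective” bar.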
The UK government and Ofcom emphasize proportionality and data minimization. Practically: verify the attribute (18+) rather than the full identity, keep only the binary outcome, avoid retaining documents or biometric data beyond what the check itself requires, and document these choices so they can be evidenced to the regulator.
The goal is to demonstrate compliance and build trust without adding friction.
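To make “demonstrate compliance” tangible, here is a minimal sketch of the aggregate evidence a team might keep: per-check records with no identity data, rolled up into the effectiveness and friction figures an audit would ask for. The record shape is an assumption.

```typescript
// Per-check audit record: enough to evidence effectiveness and
// proportionality without retaining any identity data.
interface AuditRecord {
  method: "facial_estimation" | "photo_id_match" | "wallet";
  outcome: "pass" | "fail" | "escalated";
  latencyMs: number; // friction metric for UX monitoring
}

// Aggregate figures a compliance report might cite: pass rate,
// escalation (fallback) rate, and median check latency.
function summarize(records: AuditRecord[]) {
  const n = records.length || 1;
  const count = (o: AuditRecord["outcome"]) =>
    records.filter(r => r.outcome === o).length;
  const latencies = records.map(r => r.latencyMs).sort((a, b) => a - b);
  return {
    passRate: count("pass") / n,
    escalationRate: count("escalated") / n,
    medianLatencyMs: latencies[Math.floor(latencies.length / 2)] ?? 0,
  };
}
```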
Post-go-live, government messaging points to a meaningful shift in how minors interact with the internet, with age checks spreading to more surfaces and algorithms tuned to reduce exposure to harmful content. In parallel, VPN use to bypass controls has increased; the regulatory mandate stresses that platforms must counter foreseeable circumvention by minors and must not promote workarounds. Debate continues between NGOs welcoming stronger protections and privacy/free-speech advocates demanding proportionality and transparency. For businesses, the takeaway is clear: practical, auditable, privacy-respectful compliance.
Didit Age Estimation uses biometrics to estimate a user’s age with very low friction and, when uncertainty is detected, triggers a fallback (document + biometrics) to strengthen assurance.
Didit’s approach prioritizes: low friction for the vast majority of users, automatic escalation to a document + biometrics fallback when confidence is low, privacy by design with minimal data retention, and fast, flexible integration.
For integration, Didit lets you start verifying your users in minutes via no-code verification links or APIs, giving you flexible building blocks. This is especially effective where conversion is mission-critical and every extra step hurts the business.
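For orientation only, here is a hedged sketch of what a server-side, session-based integration could look like. The base URL, endpoint path, payload fields, and response shape below are hypothetical placeholders, not Didit’s documented API; the documentation linked below has the real contract.

```typescript
// Hypothetical session flow: create a verification session server-side,
// then redirect the user to complete the check. Every endpoint and field
// name here is a placeholder, not Didit's actual API.
const API_BASE = "https://verifier.example.com"; // placeholder base URL

async function createAgeCheckSession(userRef: string): Promise<string> {
  const res = await fetch(`${API_BASE}/v1/sessions`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VERIFIER_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ reference: userRef, check: "age_estimation" }),
  });
  if (!res.ok) throw new Error(`session creation failed: ${res.status}`);
  const { sessionUrl } = await res.json();
  return sessionUrl; // redirect the user here to complete the age check
}
```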
Explore the technical details of Age Estimation in our documentation.
The new framework demands measurable outcomes: minors kept away from harmful content without sacrificing privacy or hurting UX. The winning formula combines low-friction methods with high-assurance fallbacks, observability, and evidence. Didit’s Age Estimation aligns with this approach: it reduces friction, speeds up acceptance, and adds guarantees when needed—privacy by design from the ground up.