UK age verification: what the law requires, “highly effective” methods, and how to implement them with minimal friction
September 22, 2025


#network
#Identity

Key takeaways (TL;DR)

The Online Safety Act 2023 requires “highly effective” adult-only access to protect minors; Ofcom oversees compliance.

Non-compliance penalties: up to 10% of global revenue or £18m, plus potential UK service blocking.

Self-attestation (“I’m 18+”) is not enough; Ofcom expects techniques such as facial age estimation, document verification with face match and liveness, or identity integrations that return a binary yes/no.

Recommended pattern: low-friction primary method (age estimation) + documentary fallback for uncertain cases; privacy by design and continuous measurement.

Age verification in the United Kingdom is no longer optional or cosmetic: starting in 2025, platforms operating in the UK must stop minors from accessing harmful content and prove it using “highly effective” methods, under the supervision of Ofcom, the national communications regulator.

Failure to comply can lead to fines of up to 10% of global revenue or £18 million, whichever is greater, plus service blocking and reputational damage. As a result, companies operating in the UK will face growing scrutiny over how they handle personal data and how smooth their age checks are.

This guide helps your product, legal, and compliance teams understand the new rules and the trade-offs to consider when choosing and implementing an age-verification solution that reduces friction and speeds up acceptance.

What changed with the Online Safety Act and when it applies

The Online Safety Act 2023 sets a clear duty for platforms: prevent minors from accessing harmful content and demonstrate it using measures that are “highly effective” at restricting access to adults. Roll-out is staged: for Part 5 (services that publish their own pornography), obligations started on January 17, 2025; for Part 3 (user-to-user and search), child access assessments were due by April 16, 2025; and from July 25, 2025 (“Age Verification Day”), all services that allow pornography must run robust age checks in production. On the same day, Ofcom began supervisory checks and the first investigations.

This framework is backed by children’s protection codes and Ofcom guidance (January–April 2025), which clarify expectations on effectiveness, proportionality, and privacy. Self-declaration (“Yes, I’m 18”) is out; technical evidence is in: AI-based biometric age estimation, document verification with facial matching (1:1) and liveness, or identity integrations that return a binary yes/no with minimal data transfer, such as identity wallets. These techniques are acceptable when they are reliable, robust, and continuously monitored.

If you fail to comply, Ofcom can impose fines of up to 10% of global revenue or £18 million and can also seek service blocking in the UK. The regulator expects risk-based decisions, documentation, effectiveness metrics, and a privacy-by-design implementation that doesn’t turn verification into a bottleneck.
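The effectiveness metrics Ofcom expects can be aggregated from verification-event logs. A minimal sketch, assuming a hypothetical event schema (`allowed`, `actually_adult`, `seconds` field names are illustrative, not from any official reporting format):

```python
def effectiveness_metrics(outcomes: list[dict]) -> dict:
    """Aggregate headline effectiveness figures from verification events.

    Each outcome dict: {"allowed": bool, "actually_adult": bool, "seconds": float}.
    "actually_adult" would come from ground-truth samples (e.g. audited cases).
    Illustrative only; adapt field names to your own event schema.
    """
    total = len(outcomes)
    adults = [o for o in outcomes if o["actually_adult"]]
    minors = [o for o in outcomes if not o["actually_adult"]]
    return {
        "pass_rate": sum(o["allowed"] for o in outcomes) / total,
        # False positive here means a minor who was let through.
        "false_positive_rate": sum(o["allowed"] for o in minors) / len(minors) if minors else 0.0,
        # False negative means an adult who was blocked.
        "false_negative_rate": sum(not o["allowed"] for o in adults) / len(adults) if adults else 0.0,
        "avg_verification_seconds": sum(o["seconds"] for o in outcomes) / total,
    }
```

Tracking these continuously (rather than at audit time) also surfaces UX regressions early, since verification time and false-negative rate are the numbers that drive drop-off.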

Who must check age? The real scope (beyond porn)

The duty isn’t limited to “adult sites.” Since July 25, 2025, any service that publishes or allows pornography—whether owned content or user-generated—must deploy “highly effective” age controls to stop minors. The framework distinguishes two cases:

  • Part 5 (pornography providers). Services that publish their own pornographic content: in scope since January 17, 2025.
  • Part 3 (user-to-user and search). Social networks, communities, forums, messaging, and search engines: required to complete child access assessments by April 16, 2025 and, where there’s a risk of exposure to harmful content (including pornography), apply proportionate measures, including age assurance.

In practice, the scope includes UGC platforms with 18+ sub-forums or channels, streaming services with community spaces where such content might surface, search engines that index and present pornographic results to UK users, and even generative AI tools that publish sexually explicit material within the service. Wherever there’s a reasonable risk of exposure, Ofcom expects systems and processes that prevent that exposure—warnings alone aren’t enough.

Beyond pornography, Part 3 children’s codes require managing risks around self-harm, suicide, eating disorders, and other harms to minors. Here, age assurance sits alongside design and moderation measures (e.g., limiting DMs from strangers, tuning recommendations, enabling safe search by default, and parental controls), proportionate to risk.

Methods to verify age: how to choose by risk, privacy, and UX

Official guidance recognizes several “highly effective” methods that can be combined as defense-in-depth:

  • Facial age estimation. Uses biometrics and AI to predict a user’s age range without collecting more information or identifying the person. Low friction.
  • Document + biometrics. Confirms adulthood through document validation and biometrics (1:1 Face Match and liveness). Higher friction and sensitive-data handling.
  • Credit card. A proxy for adulthood, since UK credit cards are issued only to adults. Limited coverage.
  • Digital identity (via identity wallet). Enables a binary adult/not-adult response with minimal data sharing. Strong privacy by design posture.
  • Telco or email signals. Useful as complementary signals, not sufficient on their own.

Selection should balance risk level, content context, jurisdiction, privacy, user acceptance, and total cost.
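The primary + fallback pattern behind this selection can be sketched as a simple routing function. The confidence threshold and age buffer below are illustrative assumptions to tune against your own measured error rates, not values from any guidance:

```python
from dataclasses import dataclass

BUFFER_YEARS = 5      # "challenge 23"-style buffer above 18 (assumed value)
MIN_CONFIDENCE = 0.90  # model confidence floor (assumed value)

@dataclass
class EstimationResult:
    estimated_age: float
    confidence: float  # model's self-reported confidence, 0..1

def decide(result: EstimationResult) -> str:
    """Route a user based on a facial age-estimation result.

    Returns "allow" (clearly adult), "deny" (clearly under 18),
    or "fallback" (uncertain: escalate to document + liveness).
    """
    if result.confidence < MIN_CONFIDENCE:
        return "fallback"  # poor capture quality: re-check with documents
    if result.estimated_age >= 18 + BUFFER_YEARS:
        return "allow"     # comfortably above the threshold
    if result.estimated_age < 18:
        return "deny"
    return "fallback"      # 18-22 grey zone: document check
```

The buffer matters: estimation models have an error margin of a few years, so allowing only users estimated well above 18 keeps the false-positive rate (minors let through) low while routing the grey zone to the higher-assurance fallback.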

Privacy first: comply without storing sensitive data

The UK government and Ofcom emphasize proportionality and data minimization. Practically:

  • Don’t store biometrics or documents unless necessary.
  • Define minimal retention with encryption, segregation, and access controls.
  • Run DPIAs (Data Protection Impact Assessments), maintain records of processing, and assess vendors.
  • Clear user messaging: why verification is needed, what’s processed, and for how long.
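The minimization practices above translate into what the audit record actually stores. A minimal sketch of an illustrative schema (not Didit's): a binary outcome, a timestamp, and a one-way hash of the session id, with no image, document data, or date of birth persisted:

```python
import hashlib
import time

def minimal_verification_record(session_id: str, is_adult: bool) -> dict:
    """Build the only record retained after a check completes.

    Biometric captures and document images are processed transiently and
    discarded; the stored record cannot be reversed into personal data.
    Field names and the retention period are illustrative assumptions.
    """
    return {
        "session": hashlib.sha256(session_id.encode()).hexdigest(),
        "adult": is_adult,             # binary yes/no, not an estimated age
        "verified_at": int(time.time()),
        "retention_days": 90,          # example policy: purge after 90 days
    }
```

A record shaped like this satisfies the audit trail (who was checked, when, with what outcome) without creating a store of sensitive data that would raise the DPIA risk profile.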

The goal is to demonstrate compliance and build trust without adding friction.

Impact and controversy: what to expect in the market

Post-go-live, government messaging points to a meaningful shift in how minors interact with the internet, with age checks spreading to more surfaces and algorithms tuned to reduce exposure to harmful content. In parallel, there’s been increased VPN use to bypass controls; the regulatory mandate stresses platforms must prevent foreseeable bypasses aimed at minors and avoid promoting workarounds. Debate continues between NGOs welcoming stronger protections and privacy/free-speech advocates demanding proportionality and transparency. For businesses, the takeaway is clear: practical, auditable, privacy-respectful compliance.

Didit Age Estimation: low-friction verification with a safe fallback

Didit Age Estimation uses biometrics to estimate a user’s age with very low friction and, when uncertainty is detected, triggers a fallback (document + biometrics) to strengthen assurance.

Didit’s approach prioritizes:

  • Frictionless UX. Resolve age checks in seconds, minimizing drop-off.
  • Faster acceptance. Fewer steps and lower friction increase verification completion.
  • Secure fallback. Handles gray areas with stronger checks, balancing privacy and risk.
  • Privacy by design. Architecture that minimizes captured data and returns binary answers when appropriate.

For integration, Didit lets you start verifying your users in minutes via no-code verification links or APIs, giving you flexible building blocks. This is especially effective where conversion is mission-critical and every extra step hurts the business.
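An API integration of this kind typically means creating a verification session server-side and redirecting the user to it. A hedged sketch: the endpoint URL, field names, and auth scheme below are placeholders, not the real Didit API; consult the official documentation for the actual contract:

```python
import json
import urllib.request

def build_session_payload(callback_url: str) -> dict:
    """Describe the desired check: low-friction primary + document fallback.

    All field names here are hypothetical, for illustration only.
    """
    return {
        "workflow": "age-estimation",      # primary, low-friction method
        "fallback": "document-liveness",   # escalation for uncertain cases
        "callback_url": callback_url,      # where the binary result is posted
    }

def create_session(api_key: str, callback_url: str) -> str:
    """POST the payload and return the URL to redirect the user to."""
    req = urllib.request.Request(
        "https://api.example.com/v1/sessions",  # placeholder host
        data=json.dumps(build_session_payload(callback_url)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["verification_url"]
```

The design point is that the session is created server-side, so the API key never reaches the browser, and the result arrives on a server callback rather than being trusted from the client.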

Explore the technical details of Age Estimation in our documentation.

Conclusion: age verification in the UK that changes the game

The new framework demands measurable outcomes: minors kept away from harmful content without sacrificing privacy or hurting UX. The winning formula combines low-friction methods with high-assurance fallbacks, observability, and evidence. Didit’s Age Estimation aligns with this approach: it reduces friction, speeds up acceptance, and adds guarantees when needed—privacy by design from the ground up.

UK age verification: stay compliant without hurting conversion

Meet the UK age verification requirements of the Online Safety Act with Didit’s Age Estimation technology. Low-friction by design, with a document-based fallback whenever extra assurance is needed. Launch today and start verifying user age across your flows.


Frequently Asked Questions

UK age verification — Key questions for compliance and founders

When do the obligations apply?
Part 5 (providers publishing their own pornography): January 17, 2025. Part 3 (user-to-user and search): child access assessment by April 16, 2025 and measures, including age assurance where appropriate, from July 25, 2025.

What counts as a “highly effective” method?
Solutions that can reliably determine whether a user is a minor, with liveness, anti-spoofing controls, continuous measurement, and bias mitigation. This includes both age verification and age estimation.

Is self-attestation (“I’m 18+”) acceptable?
No. Self-attestation is no longer acceptable: technical evidence is required.

What evidence should we keep for Ofcom?
DPIAs, design decisions, metrics (pass rate, false positives/negatives, verification time), incident logs, and effectiveness reports.

Do VPNs remove our obligations?
They do not remove your duty. Regulators expect prevention of reasonably foreseeable bypasses and action on evasion patterns.

Will age checks hurt conversion?
It depends on the method. Low-friction flows like age estimation tend to reduce drop-off, which is why the primary + fallback pattern works.

Which method fits which scenario?
For UGC or unpredictable content: age estimation as the primary method with documentary fallback for doubtful cases. For one-off purchases: card/open banking as an additional signal. For inherently high-risk scenarios (pornography published by the service), documentary verification may be predominant.

How is user privacy protected?
Through minimization, binary responses where possible, limited retention, encryption, and user transparency, plus DPIAs and rigorous vendor selection.
