The introduction of the Online Safety Act 2023 has been hailed as the most advanced piece of content moderation legislation passed by the UK Parliament to date, and one of the most ambitious worldwide. Despite this initial praise, the Online Safety Act has quietly backslid from its liberal aspirations: over-censoring harmless online content, ‘age-gating’ the internet and de-pluralising the digital public sphere.
July 2025 saw the implementation of the second phase of the Online Safety Act (OSA), the ‘Protection of Children Codes of Practice’. Legal responsibility for content moderation now falls upon ‘user-to-user’ sites: platforms where content produced by one user (photos, videos, music, data or any written text) can be encountered by another, whether on social media sites, discussion boards or online marketplaces. The scope of ‘legal but harmful’ child content is notably broad, ranging from self-harm, suicide and pornographic material to content that is ‘hate-driven’ or ‘related to’ violence. The Act has faced vehement opposition from civil society groups such as Open Rights Group and Big Brother Watch, which cite restrictions on free speech, privacy and access to information. Over 540,000 Britons have signed a petition to overturn the Online Safety Act. Most surprisingly, it has managed to unite the most unlikely political bedfellows: Reform UK leader Nigel Farage and left-wing commentator Owen Jones. The Government quickly ruled out repeal, with the then Secretary of State for Science, Innovation and Technology, Peter Kyle, stating that anyone against the Act was “on the side of predators”.
Age verification and third-party data storage
The initial outrage at the second phase’s implementation came with the ‘age-gating’ of ‘user-to-user’ sites, where individuals are now required to verify their age before accessing sites displaying ‘legal but harmful’ content such as pornography, self-harm and suicide material. The Act’s age verification mechanisms, which include facial age estimation, credit card checks, photo-ID matching and open banking, require the sharing of sensitive personal data with third-party age verification platforms. The new measures have disproportionately affected small discussion sites, many of which have reluctantly shut down owing to the operational difficulty of implementing the new systems and fear of non-compliance. The public has questioned how secure personal data will be from third-party sales or security breaches in light of recent high-profile cyberattacks against the Ministry of Defence and national retail giants. Within three months of Phase 2’s implementation, the scheme endured further criticism when 70,000 personal ID images uploaded to Discord’s third-party age verification service were leaked publicly, with Discord citing a ransomware attack: a breach many deemed inevitable. This raises a pressing question: if technological safeguards are not enough to protect our information, should sites simply not collect sensitive data in the first place?
The introduction of ‘age-gating’ the internet raises broader civil liberties concerns, as adults are compelled to disclose their biometric data or personal identification to engage in everyday life online. This is an arguably illiberal move which inhibits one’s rights to anonymity and autonomy. In practice, this choice has been further curtailed, as users of sites such as X and Telegram report being denied a choice of age verification methods and forced to comply with a single, unregulated service. Wikipedia recently lost a legal challenge against the government in which it argued the OSA’s verification rules were ‘too broad and logically flawed’, as they could undermine users’ rights to privacy and safety. Normalising age verification in an open society risks turning Britain into a ‘papers, please’ state, countering Britain’s historically liberal approach to formal identification measures. The British public has historically met such schemes with hostility and scepticism: previous attempts at identification schemes faltered because they lacked a sustained, practical purpose and evidenced government overreach through ‘function creep’.
Following the outrage at Phase 2’s implementation, the success of ‘age-gating’ pornographic material away from children has quickly been undermined by a nationwide rush to download VPNs to bypass the content restrictions. The BBC reported an 1,800% surge in downloads of Proton VPN after the introduction of the new controls on 25 July. The effectiveness of the scheme remains questionable when new regulations and enforcement controls can be so easily evaded by children.
‘Speech-policing’ online expression
Phase 2’s implementation has proved further controversial due to Ofcom’s requirement for sites to moderate ambiguous ‘legal but harmful’ content under the new child safety duties. Netizens have encountered ‘age-gating’ when trying to access a range of innocuous content online, from gaming forums to political speech, with critics of the scheme citing the suppression of free speech and the right to privacy. An investigation by Big Brother Watch found users blocked from trivial Reddit pages, including one for fantasy football team names and another dedicated to cider lovers. Ironically, a 16-year-old can go to the pub with their parents and drink a cider, but cannot access a cider-lovers’ forum unless they are over 18. Politically charged posts on ‘X’, such as an MP’s Commons speech on grooming gangs, anti-migrant protests and satirical commentary on multicultural Britain, as well as journalistic reporting on the conflicts in Ukraine and Gaza on Reddit, have all been flagged under ‘age-gating’ restrictions. Such over-censoring can be attributed to Ofcom’s vague categorisation of ‘legal but harmful’ content, as online sites have chosen to over-apply the law out of fear of prosecution for non-compliance. This censorship threatens not only our freedom of expression, privacy and anonymity, but also the public’s right to access distressing information in the public interest. Our ability to engage in open debate online has been tainted, as the line between content that explicitly glorifies violence and reporting, satire or disagreement about violence has been blurred.
US websites ‘Kiwi Farms’ and ‘4chan’ were among the first foreign firms to launch a legal complaint against Ofcom, arguing that its demands for risk assessments and compliance procedures violated their First, Fourth and Fifth Amendment rights. As ‘4chan’ has previously refused to pay Ofcom’s fines for non-compliance, questions remain over the effectiveness of Ofcom’s extraterritorial enforcement against non-British websites, and what it means if non-compliance is simply tolerated. Will this simply drive users into unregulated corners of the internet and the dark web, risking the security of British users’ data and pushing them towards potentially more illegal and harmful content?
While the scheme initially showed promise for the protection of children online, embodying a liberal spirit, in practice the OSA has broadly backslid from its liberal objectives, exhibiting over-censorship in which individual rights to free expression, privacy and safety have been curtailed. This shift has been driven by firms’ interpretation of ambiguously defined legislation and their consequent fear of prosecution, while the burden of implementation has fallen unequally on foreign companies and smaller firms. With public trust in the government polling at an all-time low of 12%, it is unsurprising that many are sceptical about the future of content moderation, given how controversial the scheme’s current application has proved.
What’s next for online content moderation?
As the government’s gradual digitisation of everyday life encroaches further through online content moderation, facial recognition policing and, potentially, Digital IDs, many question whether the public can trust the government to deliver Phase 3. With Ofcom’s final guidance for Phase 3 due in late 2025 or early 2026, all eyes will be on whether the implementation of categorised services accurately reflects public recommendations and learns from the failures of the previous phase, where censorship proved over-inclusive.
To ensure all Britons can safely and freely access the internet, four key measures are required moving forward. First, Ofcom must further refine sites’ obligations by defining ‘legal but harmful’ content more precisely. Second, it should issue further recommendations on bringing nuance to automated moderation services, so that they can distinguish content glorifying violent harm from harm-related journalism, open discussion and artwork. Third, improved privacy safeguards for age-verification services should be introduced, alongside enhanced regulation of third-party data storage and usage, to restore trust between users and websites. Fourth, additional support for small firms is needed to make compliance financially attainable for all, ensuring the digital civic space remains pluralised.