Spain announced last week that it will ban social media access for anyone under 16, joining France and Australia in what’s fast become a highly fashionable bit of digital paternalism. Britain mulls similar restrictions. Pedro Sánchez, Spain’s prime minister, delivered the news with appropriate gravitas, decrying the “digital wild west” where children face “addiction, abuse, pornography, manipulation, violence”. The platforms, he noted, are “wealthier and more powerful than many nations, including mine”.
All true. Also beside the point.
The fixation on protecting children from social media’s worst excesses misses the rather more pressing problem that adults aren’t faring much better. The same manipulation, the same disinformation, the same algorithmic amplification of rage and nonsense that allegedly warps young minds works just as effectively on their parents. Possibly more so.
Everyone’s vulnerable
Romania’s 2024 presidential election was cancelled after TikTok bots allegedly inflated the reach of Călin Georgescu, a hitherto obscure candidate who surged from nowhere to lead the first round. Romanian voters aren’t 15. They’re adults, supposedly equipped with the critical thinking skills and life experience to see through manufactured hype. They voted anyway, in sufficient numbers to force the hand of the country’s constitutional court.
That wasn’t an isolated incident. Deepfakes circulated across WhatsApp in India ahead of elections. Facebook’s algorithms in Myanmar reportedly amplified hate speech that contributed to ethnic cleansing. Though no teenager would these days be seen dead on Facebook, it remains the most-used social network in the world, with more than three billion active users. Russian disinformation floods European feeds during any crisis. The 2024 American presidential campaign demonstrated how easily coordinated networks could shape political discourse. None of these involved children. All involved platforms optimised for engagement over accuracy, profit over public good.
The threat social media poses to democracy doesn’t stem from teenagers sharing Andrew Tate posts, worrying though that is. It stems from platform business models that reward outrage, from algorithms that mistake virality for truth, from the sheer difficulty of distinguishing authentic speech from coordinated manipulation at scale. Age verification won’t fix any of this.
Bans don’t work anyway
Even accepting the premise that children need special protection, bans are the wrong instrument. Determined teenagers will circumvent age restrictions the same way they’ve always obtained alcohol and cigarettes: through older siblings, fake IDs, VPNs, or simple parental indifference. Australia’s ban includes criminal penalties for platforms that fail to prevent access. Enforcing this will prove either impossibly expensive or trivially easy to evade, probably both.
France’s approach demands age verification, which introduces privacy concerns possibly worse than the original problem. Requiring platforms to collect identification documents creates honeypots for data breaches and normalises surveillance infrastructure that governments will inevitably repurpose. Trading teenage mental health for mass digital ID systems seems an odd bargain.
The theatrical nature of these bans suggests governments know they won’t work but need to be seen doing something. Arguably, that’s fair enough. Voters demand action when platforms appear out of control. But symbolic gestures distract from real reform.
What’s actually needed
Serious regulation targets the platforms themselves, not their users. The EU’s Digital Services Act is a step in the right direction, putting in place transparency requirements for algorithmic recommendations, swift takedown mechanisms for illegal content, limits on targeted advertising, and hefty fines for non-compliance. It’s certainly not perfect, but it’s at least aimed at the entities with actual power to change behaviour.
Platforms should face liability for amplifying demonstrably false information, especially when it’s coordinated and scaled through bot networks. They should be required to label AI-generated content clearly. Their recommendation algorithms should be transparent and auditable. Political advertising should carry the same requirements as traditional media. None of this requires checking the date on someone’s birth certificate.
The comparison Sánchez drew, that social media platforms are wealthier than nations, cuts deeper than he perhaps intended. If social media companies wield power comparable to states, they ought to face comparable accountability. Democracies don’t let unelected entities manipulate public discourse without oversight simply because they’re profitable. Or they shouldn’t.
Infantile priorities
Protecting children from online harm matters. But so does protecting elections, public health information, and basic civic discourse from systematic manipulation. The current rush to ban teenagers from platforms while leaving the underlying problems untouched suggests governments prefer easy wins to hard fights.
Romania’s cancelled election should concentrate minds. When bot-amplified candidates can hijack democratic processes, the problem isn’t that some voters are too young. It’s that the entire system is too vulnerable. Age-gating social media is rather like installing child locks on car doors whilst ignoring the drunk driver at the wheel.
The digital wild west that Sánchez decried won’t be tamed by keeping children off the range. It requires actual sheriffs, actual laws, actual consequences for the platforms that profit from chaos. Until governments summon the will to impose those, banning teenagers is little more than political theatre, and leaves everyone else exposed.
Time for reinvention
Nearly 30 years have passed since Andrew Weinreich launched SixDegrees.com in 1997, widely considered the first true social network. Three decades later, the fundamental architecture remains remarkably similar: attention-harvesting algorithms, advertising-funded business models, engagement metrics that reward provocation over substance. For an industry that prides itself on disruption, social media has proved strikingly resistant to meaningful evolution.
Reinvantage exists precisely because established industries sometimes need forcing past obsolete models. The social media sector is ripe for such reinvention. What might that look like? Platforms where users, not advertisers, are the customers. Algorithms optimised for accuracy and constructive discourse rather than viral rage. Transparent content moderation with genuine accountability. Federated networks where no single entity is in control.
The question is whether anyone with sufficient capital and ambition will bother to build them, or whether we’ll spend another 30 years applying regulatory sticking plasters to a model that was broken from the start. Banning teenagers won’t answer that question. Building something better might.
Photo: Dreamstime.