Is It Time to Ban Kids From Social Media?

With the debate over social media’s impact on children reaching a fever pitch, we sat down with Desiree Sainthrope, a legal expert specializing in global compliance and technology regulation. Drawing on her extensive experience with complex international agreements, she offers a sharp analysis of the push for a U.S. youth social media ban, similar to Australia’s recent landmark law. We’ll explore the immense technical challenges of enforcing age restrictions, the chaotic legal landscape created by state-by-state rules, the deep generational divide in public opinion, and how we might actually measure the success of such a monumental policy shift.

Australia’s law requires firms to take “reasonable steps” to block users under 16. What specific, step-by-step technological and operational hurdles would a company like TikTok or YouTube face in implementing such a system in the U.S., and what metrics would they use to prove compliance?

The term “reasonable steps” is a lawyer’s playground and a technologist’s nightmare. It’s intentionally vague. The first hurdle is verification. How do you reliably determine a user’s age without creating a privacy catastrophe? You could require government ID uploads, but that’s a honeypot for data thieves. Facial age estimation technology is another option, but it’s notoriously unreliable, especially for minors and across different demographics. Operationally, you’d need a massive, multilingual moderation and support team to handle the inevitable appeals from users who are wrongly blocked. To prove compliance, a company would have to produce extensive audit trails showing their verification processes, the number of underage accounts identified and removed, and their false positive rate. They’d likely be reporting these figures quarterly to a federal agency, all while fighting off lawsuits from every direction. It’s an incredibly complex and expensive undertaking.
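To make the compliance reporting concrete, the false positive rate mentioned above can be sketched as a simple calculation. This is a minimal illustration; all figures are hypothetical, not drawn from any real platform's disclosures.

```python
# Hypothetical quarterly figures for an age-verification system.
# All numbers are illustrative only.

flagged_as_underage = 120_000   # accounts the system blocked as under 16
confirmed_underage = 105_000    # blocks upheld after appeal and review

# Appeals that succeeded are false positives: users wrongly blocked.
wrongly_blocked = flagged_as_underage - confirmed_underage

# Share of blocks that turned out to be wrong.
false_positive_rate = wrongly_blocked / flagged_as_underage

print(f"Accounts blocked:    {flagged_as_underage:,}")
print(f"Confirmed underage:  {confirmed_underage:,}")
print(f"False positive rate: {false_positive_rate:.1%}")
```

Even a rate in the low double digits, as in this sketch, would mean tens of thousands of wrongly blocked users per quarter, which is why the appeals operation matters as much as the detection technology.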

We see varied state actions, like California’s warning labels and Maryland’s stalled data privacy “Kids Code.” How do these piecemeal approaches compare to a sweeping federal ban, and what specific legal conflicts or practical gaps could emerge from this patchwork of regulations?

This patchwork approach is, from a compliance standpoint, arguably worse than a straightforward federal ban. It creates a messy and contradictory legal minefield for companies to navigate. Imagine a platform having to implement a health warning label for a user in California, while applying different data collection limits for a user in Maryland, and also ensuring a child influencer in Illinois is properly compensated. It’s untenable. The most significant legal conflict will be federal preemption. If Congress passes a federal law, such as one barring children under 13, does it preempt California’s or Maryland’s tougher state-level rules? This will undoubtedly end up in court. The biggest practical gap is the creation of digital haves and have-nots, where a child in one state receives more robust online protections than a child just across the border.

A Quinnipiac poll found nearly 60% support for a ban, but that number dropped among younger adults. What specific cultural or technological factors drive this generational divide, and what kind of messaging would be needed to build a broader consensus for such a significant policy shift?

The generational divide is completely understandable. For many adults over 40, social media is an add-on, something that came into their lives later. They see it through the lens of public health threats they grew up with. For anyone under 35, and especially for today’s teens, these platforms aren’t just apps; they are the social infrastructure. It’s where friendships are formed, cultural trends are born, and identities are explored. To them, a ban isn’t about safety; it’s about social amputation. To build a broader consensus, the messaging has to move away from prohibition and toward empowerment. Instead of saying, “We’re banning this to protect you,” a more effective message would be, “We’re redesigning these platforms to give you control over your data, your time, and your mental well-being.” It has to be framed as an upgrade to their digital world, not an eviction from it.

Tech companies criticized Australia’s ban as “short-sighted.” Can you break down their most compelling arguments against these age restrictions, and based on past regulatory fights, what kind of legal or lobbying strategy do you expect them to deploy if a federal U.S. ban gains momentum?

Their most compelling argument, and the one they’ll lean on heavily, is that a ban is a blunt instrument that won’t solve the underlying problem. They’ll argue it pushes kids to less-moderated corners of the internet where real dangers lurk, and that it infringes on First Amendment rights to access information. They’ll also point to the immense technical difficulty of foolproof age verification without creating a surveillance state. If a federal ban gains momentum, expect a two-front war. Publicly, they’ll launch a massive PR campaign about free speech, innovation, and connecting the world. Privately, their lobbyists will be all over Capitol Hill, armed with data about economic impact and arguing that collaboration on safety tools, not a ban, is the answer. They will absolutely fund and orchestrate legal challenges, likely teaming up with civil liberties groups to fight the law in court the moment it’s signed.

What is your forecast for the future of youth social media regulation in the United States?

I foresee a messy, drawn-out battle over the next five years. We won’t see a clean, sweeping federal ban like Australia’s anytime soon; the political and legal hurdles are just too high in the U.S. Instead, we’ll see a continued proliferation of state-level laws, creating more of the compliance chaos we’ve discussed. This will put immense pressure on Congress to act, not necessarily with an outright ban, but with a national privacy and safety standard, perhaps raising the minimum age to 13 or 14 and setting strict rules on data collection and algorithmic amplification for minors. The tech companies will fight it tooth and nail, but ultimately, the bipartisan political pressure, fueled by parental concern, will force them to the negotiating table. The final result will likely be a federal compromise that satisfies no one completely but establishes a national floor for children’s online safety.
