The digital landscape for young people is being redrawn, state by state, in a high-stakes battle pitting child protection against free speech. At the heart of this conflict are new age-verification laws, with states like Texas and California championing starkly different models for regulating how minors access the online world. To navigate this complex terrain, we spoke with Desiree Sainthrope, a legal expert renowned for her work at the intersection of technology, law, and policy. She offers a sharp analysis of the competing legal philosophies, the profound privacy implications of these new rules, and the inevitable constitutional showdown that looms over Big Tech, children, and the future of online expression.
The legislative approaches in Texas and California seem to be at odds, with one targeting app stores and the other device makers. Could you break down the practical difficulties and privacy concerns each model creates, and shed light on why the tech industry appears to favor California’s path?
Of course. The two models represent fundamentally different philosophies of regulation, and the operational consequences are vast. The Texas model places the legal liability squarely on the shoulders of app stores like those run by Google and Apple. Imagine the sheer logistical challenge: for every single app download, they would be responsible for verifying the user’s age and securing parental consent. This creates a massive, constant friction point in their business model and, more critically, a privacy nightmare. As Apple warned in its letter to Congress, it would force millions of adults to surrender private information just to perform a simple, everyday task. It’s a clunky, invasive, and continuous process.
California’s approach, which puts the onus on device manufacturers, is seen by the tech lobby as the lesser of two evils. While it’s not without its own complexities, it centralizes the verification process at the device level, often during initial setup. This is a one-time event rather than a per-transaction hurdle. It integrates more smoothly with existing parental control frameworks that manufacturers already have in place. For tech companies, this is a far more manageable compliance burden. It contains the problem at the hardware level, rather than turning their global, fluid marketplaces—the app stores—into heavily guarded checkpoints.
This debate is often framed as a clash between two competing legal principles: one side, represented by figures like Casey Stefanski, emphasizes that minors can’t legally enter contracts, while the other raises alarms about First Amendment rights, particularly for vulnerable youth. From your legal perspective, which arguments and precedents do you see as most decisive as these cases move through the courts?
This is the absolute core of the legal fight, and it’s a classic balancing act between established contract law and fundamental constitutional rights. The argument that minors cannot be bound by lengthy, complex terms-of-service agreements is incredibly strong; the rule that a minor’s contract is voidable is a bedrock principle of contract law designed to protect them, and courts have affirmed it for generations. Proponents of these laws will argue that an app download is, at its core, a contractual event, and that the state has a compelling interest in ensuring parents are involved.
However, the First Amendment argument is equally, if not more, potent in the digital age. The key will be how the courts define the act of downloading an app. Is it a commercial transaction, or is it an act of accessing information and speech? Lawyers like Adam Sieff are correctly arguing that children, as full-fledged Americans, have constitutional rights that are nearly as extensive as those of adults. For an LGBTQ teen in a hostile environment, an app might be their only lifeline to a supportive community. Courts will have to weigh whether these laws are a reasonable safeguard or an unconstitutional “prior restraint” on speech that blocks access to a world of information. The precedents that will be most critical are those dealing with students’ rights to access books in a library or information online, which have historically leaned toward protecting access to speech, even for minors.
Joel Thayer has drawn a direct line from this issue to the rise of AI, expressing fear that companies will scrape children’s data to train their models. Can you walk us through how an age-verification law would technically function as a barrier to that data pipeline, and what we might look for to see if it’s actually working?
That’s a very insightful and increasingly urgent concern. The data pipeline right now is like an open fire hydrant. When a child uses an app, their clicks, their time spent, their social interactions, and sometimes even their location data are collected and funneled into massive datasets. Those datasets are the lifeblood of AI training, teaching models to understand and predict human behavior. An age-verification and parental-consent law acts as a crucial valve on that hydrant.
First and foremost, it creates an explicit barrier to entry. Before a child’s account can even be created, a verifiable parent must grant permission. This immediately stops the data collection for any child who doesn’t get that consent. Second, for those who do, these laws typically come with stringent data minimization requirements. This means companies are legally obligated to collect only the data that is strictly necessary for the app to function, and they are often barred from using it for other purposes, like training broad AI models. To measure effectiveness, you could conduct audits to see if data from accounts flagged as belonging to minors is properly segregated and excluded from AI training sets. A sharp drop in the number of active user profiles identified as belonging to children within large-scale analytics would be another clear metric. The goal is to choke off that unrestricted flow of data at its source.
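To make that audit idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: the field names (`is_minor`, `parental_consent`), the idea of a "training manifest" listing which accounts' data fed a model, and the structure of the records are assumptions standing in for whatever flags a real verification system would actually store, not any statute's requirements.

```python
# Hypothetical compliance-audit sketch. Field names and data shapes are
# illustrative assumptions, not any law's or platform's actual schema.
from dataclasses import dataclass


@dataclass(frozen=True)
class Account:
    user_id: str
    is_minor: bool          # age flag recorded at verification time
    parental_consent: bool  # whether a verified parent granted permission


def audit_training_manifest(accounts, training_user_ids):
    """Return user_ids of minor accounts whose data appears in the
    AI-training manifest despite lacking parental consent (should be empty)."""
    barred = {a.user_id for a in accounts
              if a.is_minor and not a.parental_consent}
    return sorted(barred & set(training_user_ids))


def minor_share(accounts):
    """Fraction of active profiles flagged as minors: the 'sharp drop'
    metric described above."""
    if not accounts:
        return 0.0
    return sum(a.is_minor for a in accounts) / len(accounts)


# Example audit run over three hypothetical accounts.
accounts = [
    Account("u1", is_minor=True,  parental_consent=False),
    Account("u2", is_minor=True,  parental_consent=True),
    Account("u3", is_minor=False, parental_consent=False),
]
leaks = audit_training_manifest(accounts, training_user_ids=["u1", "u3"])
print(leaks)  # a non-empty list signals a compliance violation
print(minor_share(accounts))
```

A regulator or internal compliance team would run something like the first check against each model's training manifest, and track the second metric over time: under an effective law, the leak list stays empty and the minor share of unrestricted analytics populations falls.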
The Supreme Court’s decision in Free Speech Coalition v. Paxton, which upheld age verification for pornographic websites, is frequently mentioned. How might the legal logic from that case be applied to the Texas app store law, and what are the key distinctions that lawyers challenging these laws will need to highlight?
The Paxton decision is a critical piece of the puzzle, but it’s not a simple copy-and-paste precedent. Supporters of the Texas law will argue that the core principle is the same: the state has a compelling interest in protecting minors from harmful content. They will say that just as the state can regulate access to pornography, it can regulate access to other potentially harmful digital content or exploitative business practices found in apps. That’s the line of reasoning they’ll push.
However, the distinction is enormous, and this is what the tech lobby’s lawyers will hammer home. The Paxton case dealt with a very narrow, specific category of speech—pornography—that has long been subject to a lower level of First Amendment protection, especially concerning children. An app store, by contrast, is not a monolith of harmful content; it’s a modern public square. It’s a gateway to everything from the New York Times app to educational games to mental health resources. Applying the same legal reasoning used for a pornographic website to an entire ecosystem of protected speech is a breathtakingly overbroad application of that precedent. The key legal argument will be that while the state can regulate access to specific, narrowly defined categories of harmful content, it cannot put a gatekeeper in front of the entire library of digital information. That transforms a targeted regulation into a sweeping restriction on all speech.
What is your forecast for the future of state-level age-verification laws, considering the competing legislative models and the inevitable Supreme Court showdown?
My forecast is for a period of intensified legal and legislative chaos before we see any clarity. We’re going to see more states jumping into the fray, with some following the Texas model and others adopting California’s approach. This will create a fractured, unworkable patchwork of regulations across the country—a compliance nightmare for any company operating nationally. This kind of state-level fragmentation is ultimately unsustainable for a global industry. This is precisely the type of situation that compels the Supreme Court to intervene. I expect that within the next few years, the Court will take up one of these cases to resolve the circuit splits and, as Joel Thayer colorfully put it, “triangulate exactly what the eff is going on with the First Amendment in the tech world.” I predict they will likely strike down the broader, more restrictive laws like the one in Texas as unconstitutional, but they may leave the door open for more narrowly tailored regulations. Ultimately, this messy state-by-state fight will build immense pressure on Congress to finally pass a comprehensive federal privacy and safety law for children, creating a single, clear standard for the entire country.