OpenAI Backs California Measure to Protect Children From AI

California is a battleground for the future of artificial intelligence, where a clash between legislative action and direct democracy is shaping how the next generation will interact with emerging technologies. To navigate this complex landscape, we turn to Desiree Sainthrope, a legal expert whose work sits at the nexus of global compliance, intellectual property, and cutting-edge tech policy. With a ballot measure backed by OpenAI now on the table, the state is debating not just what the rules for AI should be, but who should write them—and how permanent they should become. Our discussion will explore the strategic choices behind pursuing a ballot initiative, the specific child safety protections at stake, the critical debate over entrenching tech law in the state constitution, and the conspicuous silence from the rest of the tech industry.

With efforts to regulate AI happening in both the Legislature and through a proposed ballot measure, what are the distinct advantages of taking this issue directly to voters? Could you walk us through the potential risks or trade-offs of this public-facing strategy?

Taking this fight to the ballot is a power move, plain and simple. It’s an attempt to bypass the deliberative, and often slow, legislative process, which involves countless hearings and stakeholder negotiations. When the proponents say they’ll “pursue every measure,” they’re signaling an urgency that they feel the Legislature can’t match. The key advantage is creating a clear, direct mandate from the people that can’t be easily ignored or watered down by industry lobbyists in Sacramento. However, the risk is immense. You lose the nuance of the legislative process. Figures like Senator Padilla have rightly pointed out that this isn’t a collaborative, public process with all stakeholders at the table. It becomes a high-stakes, expensive campaign, and the final text can be a blunt instrument rather than a carefully sculpted piece of policy.

This proposal builds on a recent law targeting “companion” chatbots and suicidal ideation. What specific new protections does it introduce beyond current requirements, and how might it address issues like age-verification or bans on AI-powered toys being discussed by lawmakers?

The law Governor Newsom signed last year was a critical first step, forcing companies to address the most severe harms, like suicidal ideation, within “companion” chatbots. This new proposal aims to build a much higher wall of protection around young users. While the ballot measure’s exact text is still being finalized, it exists within a larger policy conversation that gives us a clear sense of the destination. Lawmakers are already working on very concrete measures, such as imposing strict age-check requirements on chatbot platforms and even considering a temporary ban on AI-powered toys for children under 13. This ballot initiative is a vehicle to potentially enact these kinds of robust protections much faster than the typical legislative timeline would allow, effectively taking these ideas from committee hearings and putting them directly into law.

Some critics worry that writing AI regulations into the state Constitution could make them hard to update as technology evolves. How does this proposal balance establishing strong protections with the need for future flexibility? Please share specific examples of how it could be revised.

This is the central, most fraught issue with the initiative, and frankly, the balance is precarious. The proponents want to lock in protections, making them difficult to repeal. But as Senator Padilla warned, putting this into the Constitution creates an “unnecessarily high bar to revise.” The technology is moving at a dizzying pace; a rule that seems sensible today for a chatbot could be completely irrelevant or even counterproductive for the AI tools of tomorrow. To revise a constitutional amendment, you can’t just pass a new bill. You would almost certainly have to go back to the voters with another ballot measure, which is an arduous and costly undertaking. This isn’t like a normal law that the Legislature can tweak in the next session as the technology evolves. It’s a trade-off between permanence and agility, and in a field this dynamic, sacrificing agility is a profound risk.

While OpenAI is backing this initiative, other major AI companies haven’t yet taken a public stance. What might be causing this hesitation from the broader tech industry, and what specific outreach or compromises could help build a wider coalition of support?

The silence from the other major AI players speaks volumes. They are likely in “wait and see” mode, carefully calculating the political winds. Committing to a specific ballot measure, especially one that could be seen as being shaped by a competitor like OpenAI, is a risky bet. Many companies prefer the more controlled environment of legislative negotiation, where their lobbyists can work behind the scenes to shape the details. We know from Mr. Steyer that discussions with other tech companies are happening, but their lack of public support suggests they are not yet convinced. To build a broader coalition, the initiative’s backers need to prove this isn’t just a bespoke rulebook for one company. They need to demonstrate that this framework is a stable, predictable, and reasonable path forward for the entire industry, and perhaps more importantly, that it’s a better alternative than the patchwork of more aggressive bills that could emerge from the Legislature.

What is your forecast for the future of AI regulation for children in California and beyond?

My forecast is that robust regulation is no longer a question of if, but how and how soon. The dual-track approach we’re seeing in California—a legislative push running parallel to a direct-to-voters initiative—signals an undeniable momentum that will not be stopped. California often sets the regulatory tempo for the rest of the nation, and this issue is no different. Whether it’s through this specific ballot measure or a suite of bills from lawmakers like Assemblymember Bauer-Kahan, the state will establish a comprehensive new set of rules for AI and child safety. The ultimate outcome here, be it a rigid constitutional mandate or a more flexible legislative framework, will create a powerful blueprint. Expect to see other states, and eventually the federal government, looking very closely at California’s playbook as they begin to grapple with these same urgent questions.
