With states like New York and California now stepping into the void left by the absence of federal AI oversight, we are witnessing the birth of a new regulatory landscape in real time. To unpack what this means for the tech industry and public safety, we sat down with Desiree Sainthrope, a legal expert whose work at the intersection of global compliance and emerging technology gives her a unique vantage point on these developments. We discussed the nuanced differences in this emerging state-led regulatory framework, the intense political battles that shaped New York’s new law, and the practical impact these rules will have on the world’s most powerful AI models.
The RAISE Act is being called stricter than California’s SB 53. Could you help us understand what that really means for a company on the ground, say for a developer like OpenAI? What are the key operational differences they’ll face beyond just the penalty caps?
Absolutely. While California’s law gives developers a 15-day window to report a critical safety incident after it happens, New York’s framework is fundamentally about prevention. The emphasis on avoiding harms like the creation of bioweapons implies a proactive, pre-deployment burden. For a company like OpenAI, this isn’t just about having a response plan; it’s about building a rigorous, documented safety testing protocol before a new version of ChatGPT ever sees the light of day. They’ll have to prove they’ve thought through the worst-case scenarios and implemented safeguards, which is a much higher and more complex bar than California’s more reactive reporting requirement. It’s a shift from ‘what do you do when something breaks?’ to ‘prove to us it won’t break in a catastrophic way.’
Governor Hochul has framed this as New York and California creating a ‘unified benchmark’ in the absence of federal action. From your perspective, what are the real-world consequences of this state-by-state approach for a company trying to operate nationwide? Are we heading towards a confusing and conflicting web of regulations?
That’s the core challenge. While calling it a “unified benchmark” is politically savvy, the reality for companies like Google or Microsoft is that they now face a patchwork of compliance demands. Even small differences in reporting timelines, safety testing definitions, or penalty structures create immense operational and legal friction. Imagine their compliance teams trying to create a single internal process that satisfies both New York’s preventative mandate and California’s reporting rules, while also anticipating what a third or fourth state might do. This state-led approach, born of federal inaction, forces companies to navigate a fragmented landscape. That fragmentation increases costs, slows deployment, and can ultimately push companies toward a more conservative, risk-averse approach to innovation as they try to comply with the strictest elements of every state’s law.
The bill’s sponsors mentioned defeating ‘last-ditch attempts from AI oligarchs.’ Based on your experience with tech policy, could you shed some light on the kinds of arguments the tech industry likely used to oppose the RAISE Act, and why the counter-arguments for robust safety measures ultimately won the day in Albany?
The industry playbook typically revolves around a few key arguments: that state-by-state regulation stifles innovation, that a national standard is preferable to a confusing patchwork, and that overly prescriptive rules will harm their ability to compete globally. You can almost hear the lobbying in the halls of Albany: better a so-called “Wild West for AI” than burdensome red tape. However, lawmakers like Bores and Gounardes effectively countered this by shifting the emotional center of the debate. They framed the issue not as one of economic competition, but of fundamental public safety, using powerful, visceral examples like bioweapons. The argument that “big tech oligarchs think it’s fine to put their profits ahead of our safety” proved incredibly persuasive because it tapped into a growing public anxiety about the unchecked power of these technologies.
New York’s law is designed to prevent significant harms. Could you walk us through what the new safety testing and reporting protocols might look like in practice for a developer like Anthropic before it can release a new version of its Claude model in New York?
For a company like Anthropic, which prides itself on safety, this codifies and likely expands its internal processes into a legal obligation. Before a public release, their teams would have to conduct extensive, documented “red-teaming” exercises, actively trying to force the model to generate dangerous outputs. They would need to create a detailed report outlining the model’s capabilities, particularly any that could be weaponized, and the specific safeguards they’ve engineered to prevent misuse. This isn’t a simple checklist; it’s a comprehensive safety case that must be submitted to a state authority for review. The feeling inside that development team must be one of immense pressure, knowing that their work isn’t being judged just by the market, but also by regulators tasked with preventing worst-case scenarios.
The final version of the RAISE Act seems to have been softened from the draft passed in June, which had staggering penalties of up to $30 million. What can we infer about the compromises made during those final negotiations? What kind of industry concerns or practical realities likely led lawmakers to pull back from those initial, much harsher figures?
That change signals a classic legislative compromise. The initial figures of $10 million for a first offense and $30 million for subsequent ones were likely an opening negotiating position to signal seriousness. The tech industry almost certainly pushed back hard, arguing that such astronomical fines were disproportionate and could pose an existential threat, especially to smaller developers of still-powerful models. Lawmakers probably realized that while they needed penalties with real teeth, a more moderate figure would make the bill more durable against legal challenges and appear more “sensible,” as Governor Hochul put it. The final law is still stricter than California’s $1 million cap, but pulling back from the $30 million figure shows a pragmatic negotiation that balanced the goal of accountability with the economic realities of the industry.
What is your forecast for the future of AI regulation in the United States?
My forecast is for a period of escalating complexity before we see any federal clarity. We will see a handful of other major states follow the New York and California model, each with its own unique twist, creating an increasingly difficult regulatory maze for the AI industry. This growing state-level friction will be the primary catalyst that eventually forces a reluctant Congress to act. The pressure from tech giants, who will find it unsustainable to operate under a dozen different rulebooks, will become immense. They will pivot from fighting regulation to begging for a single, predictable federal standard. The question won’t be if we get federal AI legislation, but rather what it will look like when it finally arrives after this period of state-led experimentation.
