As technology continues to reshape our lives, the intersection of state and federal policy on AI safety and kids’ online protection has become a battleground between innovation and regulation. Today, we’re thrilled to speak with Desiree Sainthrope, a legal expert with extensive experience drafting trade agreements and a recognized authority on global compliance. With her deep knowledge of intellectual property and the evolving implications of technologies like AI, Desiree offers unparalleled insight into how California’s recent tech laws are influencing national policy debates and what this means for the future of the industry.
Can you walk us through why tech giants are now seeing California’s approach to AI and kids’ safety laws as a potential national model?
Absolutely. For years, California was viewed as a thorn in the side of Silicon Valley with its progressive, often stringent regulations. But recently, the state has crafted laws on AI safety and age verification that strike a balance: imposing necessary protections while avoiding overly punitive measures that could stifle innovation. Tech giants appreciate this pragmatic approach because it offers predictability and shields them from harsher rules or lawsuits that could emerge elsewhere. It’s a shift from antagonism to a kind of cautious partnership, where California’s framework is seen as something that could be replicated across the country without derailing business models.
What specific elements of California’s new laws are most appealing to the tech industry?
The appeal lies in the details. For instance, the AI safety law sets clear guidelines for managing risks without imposing draconian restrictions that could halt development. Similarly, the age-verification rules protect children online but include provisions that limit platforms’ liability if something goes wrong. These laws are strict enough to satisfy public demand for safety, but crafted in a way that doesn’t expose companies to endless legal battles or unworkable compliance costs. It’s a middle ground that tech leaders feel they can work with.
How does Governor Gavin Newsom’s relationship with the tech sector play into the shaping of these laws?
Newsom’s ties to the tech industry, dating back to his years as mayor of San Francisco, have been pivotal. He’s positioned himself as a mediator, listening to industry concerns while still pushing for regulation. His willingness to veto overly restrictive bills, like last year’s AI safety proposal, has paved the way for more industry-friendly versions. It’s a strategic move, balancing public interest with Silicon Valley’s needs, and it has created a dialogue that wasn’t there before. His influence has helped temper some of the more aggressive legislative impulses in Sacramento.
What are tech lobbyists doing to promote California’s model in other states?
Tech lobbyists are actively pitching California’s framework as a ready-made solution to state legislators who might not want to spend time or political capital crafting their own rules. They’re framing it as a proven, balanced approach that avoids messy conflicts over AI or age-verification policies. Their strategy is to target states where tech regulation is already on the agenda, suggesting that adopting California’s model can save time and prevent drawn-out battles with the industry. It’s a pragmatic sell: less about ideology and more about efficiency.
How is the tech industry approaching the AI safety bill currently on Governor Kathy Hochul’s desk in New York?
In New York, lobbyists are working hard to influence Governor Hochul, who can negotiate amendments to a bill as a condition of signing it, an unusual power among governors. They’re pushing her to revise the stricter AI safety bill on her desk to align more closely with California’s softer approach. The hope is that she’ll see the value in harmonizing with a state that has already navigated these waters, making New York’s law more palatable to tech companies while still addressing safety concerns. It’s a delicate lobbying effort, focused on collaboration rather than confrontation.
What challenges are tech companies facing from Republican-led states like Utah and Texas in contrast to California’s approach?
In states like Utah and Texas, there’s a growing wave of tech-skeptical sentiment among Republican lawmakers. These states have passed or are considering age-verification laws and AI safety rules that are far stricter than California’s. For example, Texas has implemented requirements that put large platforms at significant legal and financial risk, which tech companies see as overreach. Unlike California’s balanced framework, these laws often prioritize populist concerns over industry input, creating a patchwork of tough regulations that could disrupt operations and innovation.
How is the tech industry attempting to shape federal policy on AI and kids’ safety regulations amidst this state-level activity?
At the federal level, the industry is advocating for national standards that could preempt the conflicting state laws emerging across the country. Leaders from major tech firms are emphasizing the need for a unified approach, arguing that a patchwork of state rules creates uncertainty and hampers progress. They’re pointing to California’s laws as a potential blueprint for federal policy, hoping to avoid the harsher measures coming from both red and blue states. However, with Congress gridlocked and other priorities dominating Washington, the likelihood of federal action anytime soon seems slim.
Can you shed light on the debate over age-verification laws in Ohio and how it mirrors the larger national conversation?
Ohio is a fascinating microcosm of the national debate. Legislators there are torn between adopting California’s more tech-friendly age-verification model, which adds protections for kids without heavily burdening platforms, and leaning toward stricter rules like those in Texas or Utah. Tech lobbyists are pushing hard for the California approach, warning that harsher laws could expose companies to significant risks. This split reflects the broader tension across the U.S.—balancing child safety with innovation, and deciding whether to prioritize industry input or public skepticism of Big Tech.
Looking ahead, what is your forecast for the future of tech regulation in the U.S., especially as political dynamics continue to shift?
I think we’re heading toward a fragmented landscape in the short term, with states continuing to experiment with their own approaches to AI and kids’ safety. California’s model might gain traction in some areas, especially among states looking for a middle path, but resistance from both progressive and conservative lawmakers elsewhere will keep the patchwork alive. Federally, I don’t see significant movement until after the next election cycle, when tech regulation might become a more urgent priority. The bigger question is whether a national standard can emerge before state-level rules become too entrenched, and whether the industry can maintain this newfound rapport with regulators like those in California. It’s going to be a bumpy ride, but the stakes for innovation and safety are incredibly high.
