Lawmaker Reveals Strategy Behind Failed AI Moratorium

Desiree Sainthrope’s work places her directly at the heart of one of the most complex challenges facing lawmakers today: how to regulate artificial intelligence. As a legal expert with deep experience in crafting intricate agreements, she offers a unique perspective on the delicate balance between federal authority and state innovation. Following the contentious debate over a proposed AI moratorium in 2025, the conversation has shifted toward creating a national framework. We sat down with Desiree to discuss the misunderstood purpose of the moratorium, the strategic necessity of a sector-specific approach, and the political maneuvering required to build a bipartisan consensus on the future of AI governance in the United States.

The 2025 AI moratorium proposal was characterized as more of a “messaging amendment” than a permanent policy. Could you elaborate on the specific message it was intended to send and why you feel that core purpose was widely misunderstood during the debate?

Absolutely. The moratorium was fundamentally a strategic maneuver, and its purpose was almost entirely lost in the public discourse. It was never intended to be a long-term solution. Honestly, we never expected it to even get out of the Energy and Commerce Committee. The entire point was to force a national conversation that we felt was urgently needed. When it actually made it off the House floor, I was flabbergasted. The message was a clear signal to our colleagues: the federal government must go first. The real danger is a chaotic patchwork of fifty different state laws, and the moratorium was our way of hitting the emergency brake to say, “Let’s define the national rules of the road before we go any further.”

Establishing a federal AI framework first seems central to your approach. Could you walk through the practical steps of defining these regulatory “lanes” for states and explain how you would prevent federal preemption from stifling important, state-level innovation?

The key is that federal preemption and the new regulatory framework have to be established at the exact same time, within the same piece of legislation. You cannot do one without the other. The practical first step is for Congress to pass a foundational law that clearly defines what constitutes interstate commerce in the context of AI—that’s the federal lane. This would provide uniform guardrails for things that are national in scope. Simultaneously, that same law must explicitly carve out the areas where states can then go and innovate. We’re not talking about a blanket federal takeover. We’re talking about creating a stable, predictable environment where everyone knows their role, preventing a tangle of conflicting regulations while still allowing states to be laboratories of democracy in designated areas.

The initial moratorium debate fractured party lines, with conservatives split over the issue of states’ rights. Based on that experience, what is your strategy for building a bipartisan coalition now, and what specific compromises will be essential to pass a unified national law?

That fracture was a critical lesson. It showed that this isn’t a simple partisan issue; it cuts right to core philosophies about governance, especially concerning states’ rights. Building a coalition now requires moving past the all-or-nothing approach of the moratorium. The strategy is to focus on the broad agreement that I believe already exists among both Democrats and my fellow GOP lawmakers: the need for a basic national framework. The essential compromise will come down to negotiating the exemptions. The real work is in the details of deciding which specific issues are carved out from federal preemption and left to the states. This is where we’ll have to find a middle ground that honors states’ rights while ensuring a cohesive national technology policy.

You’ve advocated for a sector-specific, risk-based approach to AI regulation. Can you provide a tangible example? Please describe how this model would apply differently to AI in a high-risk area like healthcare versus a lower-risk area like retail advertising.

This approach is all about nuance and avoiding a one-size-fits-all law that would be completely unworkable. Let’s take healthcare. An AI tool used to diagnose cancer or guide a surgical robot is incredibly high-risk. For that, you’d need stringent federal regulations mandating transparency in algorithms, rigorous testing for bias, and ironclad data privacy protections. The stakes are literally life and death. Now, contrast that with an AI used by a retail company to recommend a new pair of shoes. The risk there is minimal—the worst outcome is a bad fashion choice. For that sector, the regulations would be much lighter, perhaps focused on basic consumer transparency. It’s about tailoring the level of oversight to the potential for harm.

A recent executive order identified specific areas, like child safety and government AI procurement, as being appropriate for state regulation. How does this action support your legislative goals, and what other policy areas do you believe should be explicitly left for states to manage?

That executive order was incredibly helpful because it put a presidential seal of approval on the very concept we’re pushing: there are clear lanes for states. It provided a concrete starting point for the conversation. When the president explicitly says that areas like child safety protections and how state governments procure their own AI systems should be regulated at the state level, it gives comfort to those who are worried about federal overreach. This action directly supports our legislative goals by demonstrating what a balanced approach looks like. As for other areas, I think local infrastructure decisions, like the zoning and placement of data centers, and certain aspects of public education are perfect examples of policy areas that should remain firmly in the hands of the states.

What is your forecast for the federal government successfully passing a comprehensive AI regulatory framework that balances national standards with states’ rights within the next two years?

I am cautiously optimistic. The failed moratorium, while messy, successfully elevated the urgency of this issue. There is now a widespread recognition in Congress that inaction is creating a legal vacuum that will only lead to more confusion and litigation down the road. The president’s executive order also provided a valuable template and a show of good faith from the administration. The path won’t be easy, as the debates over specific preemption exemptions will be intense. However, the fundamental consensus for a national framework exists. I believe the momentum is there to get it done, and we have a realistic chance of passing meaningful legislation within that two-year window.
