Desiree Sainthrope, a leading expert in technology law and policy, has closely tracked the evolving tensions between state and federal governance of artificial intelligence. With the recent signing of an executive order aimed at centralizing AI regulation, we sat down with her to unravel its complex legal mechanics, the political strategy behind this decisive action, and the profound implications for the tech industry and state sovereignty. Our discussion delves into the constitutional showdown brewing over the Commerce Clause, the fractures this move has created within the tech sector and advocacy groups, and the influential philosophies of the key advisors who shaped this policy.
The executive order establishes a DOJ task force and gives the Commerce Secretary significant power over federal broadband funds. Could you walk us through how a state’s AI law might actually be challenged under this new framework and what could make it “onerous” enough to put that funding at risk?
Certainly. It’s essentially a two-pronged attack. Imagine a state like California or Colorado passes a comprehensive AI safety law with strict auditing and reporting requirements. First, the new litigation task force at the Justice Department, acting as a watchdog, would flag this law as a potential target. They’d analyze whether it creates what they see as an undue burden on companies that operate nationally. Simultaneously, Commerce Secretary Lutnick holds the financial hammer. The term “onerous” is deliberately vague, which gives the administration immense leverage. It could mean anything from a law requiring data localization, which is a logistical nightmare for cloud-based services, to a statute imposing unique design standards that conflict with those in other states. Lutnick could then signal to that state’s governor that their upcoming federal broadband grants are in jeopardy unless they repeal or significantly amend the law. It’s a powerful pincer movement designed to force compliance without ever having to win a final verdict in court.
This order’s legal foundation rests heavily on the Commerce Clause and federal preemption, a strategy some experts have already questioned. From a constitutional standpoint, what are the strongest arguments states can be expected to mount in defense of their own regulations?
States will argue that they are acting squarely within their traditional “police powers” – the inherent authority to regulate for the health, safety, and welfare of their citizens. They will contend that their AI laws are not intended to regulate interstate commerce, but to protect their residents from specific harms, like algorithmic bias in hiring or loan applications. A state’s legal team would likely cite precedents where courts have upheld state regulations with only an incidental effect on commerce, so long as the burden on interstate commerce is not clearly excessive relative to the local benefits. Their core argument will be that until Congress passes a specific federal AI law that explicitly preempts state action, there is no direct conflict. They’ll say the White House is overreaching, using a broad interpretation of the Commerce Clause to invade an area of regulation historically left to the states. The skepticism from legal scholars is well-founded; it’s a high bar to prove a state law unconstitutionally burdens interstate commerce without a federal statute on the books.
After legislative efforts, like the push in the NDAA and Senator Cruz’s bill, failed to pass, the administration pivoted to this executive order. What does this shift in strategy tell us about the current political dynamics surrounding tech regulation, and what are the potential long-term consequences of this approach?
This pivot from legislation to executive action speaks volumes about the gridlock and lack of consensus in Congress. The attempts to attach AI preemption to the defense bill or pass a standalone moratorium failed because there’s no broad agreement on what a federal standard should even look like. This executive order is, in essence, an admission that they couldn’t build a legislative coalition. It’s a much faster, more direct route, but it’s also far less stable. The long-term consequence is the potential for regulatory whiplash. A future administration could reverse this order with the stroke of a pen, throwing the entire legal landscape back into uncertainty. It also sets a contentious precedent of using executive power to preempt state lawmaking in a burgeoning field, which could chill state-level innovation in policy and further polarize the debate, making an eventual legislative compromise even harder to achieve.
The article highlights a fascinating split, not just within the tech industry but also among kids’ safety advocates, over this trade-off between state AI laws and federal online child protections. Could you elaborate on the competing interests that are causing these fractures?
It’s a classic “devil you know” versus “devil you don’t” scenario. For the tech industry, one faction sees a patchwork of 50 different state AI laws as an existential compliance threat. For them, a single, predictable federal law, even one with child safety rules, is the lesser of two evils. However, another faction fears that the proposed child safety carve-outs, like versions of the Kids Online Safety Act, could impose even more burdensome content moderation and design requirements than the state AI laws they’re trying to escape. They see it as trading one problem for a potentially bigger one. On the other side, some kids’ safety advocates are willing to make a pragmatic deal, sacrificing state AI rules to finally get federal online protections they’ve sought for years. But others are horrified by this, arguing that state AI laws are a critical tool for protecting children from algorithmic harms and that giving them up would be a tragic, short-sighted mistake. The fracture really comes down to whether you prioritize uniformity and federal action above all else, or if you believe in a multi-layered, state-led approach to regulation.
White House AI czar David Sacks and advisor Sriram Krishnan were instrumental in this order. Considering their backgrounds, what core policy philosophies do you believe they’ve embedded in this document, and how might that influence the new litigation task force’s priorities?
Given their deep roots in the tech and venture capital world, Sacks and Krishnan almost certainly bring a philosophy that prioritizes permissionless innovation and economic growth, viewing regulatory friction as a direct impediment. You can see this perspective woven throughout the order—the goal isn’t to create better AI regulation, but to eliminate what they perceive as burdensome regulation altogether. Their influence will likely shape the task force’s agenda to be highly strategic and business-friendly. I would expect the task force not to go after every single state law, but to target the ones that cause the most significant operational headaches for large, multi-state technology platforms. They’ll prioritize challenging laws that restrict data flows, impose costly design mandates, or create novel forms of legal liability. Their goal will be to make an example out of the most ambitious state laws to create a chilling effect that discourages other states from following suit.
What is your forecast for the legal battles ahead and the future of the state-federal relationship in regulating emerging technology?
