AI Civil War Erupts in New York Congressional Race

Today we’re speaking with a leading analyst in campaign finance, who has been tracking the unprecedented surge of money from the artificial intelligence industry into politics. This isn’t just about lobbying; it’s a full-blown “civil war,” with employees from the world’s biggest AI labs and activists from safety-focused nonprofits pouring millions into key political races, often on the same side. This battle of ideologies and dollars is setting the stage for how one of the most powerful technologies in human history will be regulated.

We’ll explore the unusual alliances forming between corporate AI developers and safety advocates, and what this reveals about the deep divisions within the industry. We will also delve into the financial arms race between powerful new political action committees, examining the strategies behind their massive fundraising efforts. Our conversation will also touch upon the personal motivations driving rank-and-file tech employees to make substantial individual donations, and how a network of safety organizations is coordinating its efforts to achieve specific legislative goals on a national scale.

We’re seeing employees from major labs like Alphabet and OpenAI, alongside those from AI safety groups like Coefficient Giving, heavily funding a candidate like Bores. What does this unusual alliance reveal about the internal divisions and key priorities shaping the AI community’s political engagement?

This alliance is incredibly telling. It signals a profound ideological rift that has been simmering inside these labs for years and is now boiling over into the public, political arena. You have employees from giants like Alphabet and OpenAI, who contributed a combined $115,000, standing shoulder-to-shoulder financially with dedicated safety groups like Coefficient Giving, which dropped a staggering $95,000 on one candidate. This isn’t a typical corporate lobbying effort. It suggests a significant portion of the very people building this technology feel a deep, urgent sense of responsibility—or perhaps fear—that their corporate leadership isn’t adequately addressing. Their priority is clearly to elect officials who will impose guardrails, creating a fascinating schism where the creators are actively funding the regulators.

One industry PAC has $70 million on hand to oppose Bores, while another, Public First, has raised tens of millions from insiders at the same AI labs. How do you see this “battle of the PACs” playing out, and what specific strategies might each side employ to win influence?

This is shaping up to be a classic David versus Goliath fight, but in this case, David also has access to tens of millions of dollars. The PAC with $70 million on hand, Leading the Future, will likely unleash a traditional, overwhelming campaign of negative advertising, trying to define Bores as anti-innovation or anti-business. Their strategy is brute financial force. On the other side, a PAC like Public First, which is raising its funds from lab insiders, has a more surgical and compelling narrative. They can leverage the credibility of their donors—the actual engineers and researchers—to argue that the call for regulation is coming from inside the house. Their strategy will be to use these authentic voices to cut through the noise, framing their campaign not as an attack on AI, but as a necessary step to ensure its safe development for humanity.

Donations from employees at a single company like OpenAI have exceeded $57,000 for one candidate, with another in California receiving over $475,000 in a single quarter. What specific policy outcomes are motivating this level of individual political spending from rank-and-file tech employees?

When you see individual employees, not just executives, contributing thousands of dollars—like the $57,000 from OpenAI workers for one candidate or the nearly half-million-dollar haul for a California politician—it’s driven by a powerful conviction that the stakes are incredibly high. These aren’t just abstract political donations; they are personal investments in averting what these individuals see as a potential catastrophe. They are likely motivated by a desire for very specific policies: mandatory third-party audits of frontier AI models, liability laws that hold companies accountable for their creations, and government oversight to prevent a reckless race to more powerful, and potentially uncontrollable, systems. For them, this spending is a direct line to self-preservation and ensuring the technology they are building doesn’t backfire on a global scale.

A wide range of AI safety organizations, from Redwood Research to the Centre for Effective Altruism, are directing significant funds toward specific political races. What core legislative goals unite these diverse groups, and how do they coordinate their activities to maximize their political impact?

What unites these seemingly disparate groups—from research-focused organizations like Redwood and Epoch AI to more philosophical ones like the Centre for Effective Altruism—is a shared belief in existential risk. Their core goal is to establish a robust, proactive regulatory framework before AI development outpaces our ability to control it. They see a closing window of opportunity. Their coordination is visible in their targeted funding; they aren’t just throwing money around. By concentrating significant funds—like the nearly $45,000 from Redwood Research and $35,000 from 80,000 Hours to a single candidate—they are clearly identifying and backing politicians who are receptive to their message. This creates a unified political front, maximizing the impact of every dollar to push for foundational safety legislation.

With major fundraising efforts emerging in both a New York congressional race and a California House race, what does this indicate about the national political strategy of the AI safety movement? Can you describe what the next frontlines in this industry conflict might look like?

The bicoastal nature of this fight, erupting simultaneously in New York and California, demonstrates a deliberate and sophisticated national strategy. This isn’t just about one or two rogue candidates; it’s the AI safety movement establishing a political battlefield in key tech and policy hubs. They are proving they can mobilize significant capital—over $475,000 for one California candidate in a single quarter—and build coalitions anywhere. The next frontlines will likely move beyond individual races to broader legislative pushes at the federal level. We can expect to see them target key committee members in Congress, fund primary challenges against incumbents they see as obstacles, and pour money into shaping the national conversation through issue-based ad campaigns, making AI safety a central issue in states far from Silicon Valley.

What is your forecast for how this influx of AI industry money will reshape political campaigns and tech regulation in the coming election cycle?

I forecast that this influx of money will force AI regulation to become a mainstream, top-tier political issue far faster than anyone anticipated. It will no longer be a niche topic for tech policy wonks. The sheer scale of the spending—with one PAC holding $70 million and its rival raising “tens of millions”—guarantees that candidates in key districts will be forced to take a clear stance on AI safety. This “civil war” will create a new political litmus test, and we’ll see campaigns transformed by attack ads not about taxes or healthcare, but about the existential risks of artificial intelligence. Ultimately, this will accelerate the timeline for comprehensive federal regulation, as the political pressure from both sides becomes too immense for Washington to ignore.
