How Are US Unions Shaping State AI Laws to Protect Workers?

I’m thrilled to sit down with Desiree Sainthrope, a legal expert with deep expertise in drafting and analyzing trade agreements, and a recognized authority in global compliance. With a keen interest in the intersection of law and technology, particularly the implications of artificial intelligence in the workplace, Desiree offers invaluable insights into the evolving landscape of labor relations. Today, we’ll explore the urgent push by U.S. unions for state-level AI regulations, delving into the motivations behind this movement, the specific protections being sought, and the challenges unions face in balancing worker rights with technological innovation. Our conversation will also touch on historical parallels, state-specific initiatives, and the broader impact across industries.

How did unions come to prioritize state-level AI regulations as a key battleground for protecting workers?

I think the urgency stems from the rapid integration of AI into workplaces across various sectors, often without sufficient guardrails. Unions have seen firsthand how these technologies can disrupt job security and exacerbate inequalities if left unchecked. The lack of federal action on comprehensive AI legislation has created a vacuum, pushing unions to focus on state legislatures where they can have a more immediate impact. States like California and New York, with their progressive labor histories, have become natural starting points for crafting policies that address everything from job displacement to privacy concerns. It’s really about seizing the momentum at a local level to set precedents that could influence national standards down the line.

What are some real-world examples of AI-related issues in the workplace that have galvanized union action?

There are several stark examples that have raised red flags for unions. In ride-sharing, for instance, drivers have been penalized or deactivated by algorithms based on metrics they can’t fully understand or challenge. In warehouses, automated scheduling systems have led to unpredictable shifts, leaving workers with little control over their lives. Then there’s the issue of biased hiring tools that can unintentionally discriminate based on flawed data sets. These cases aren’t just isolated incidents; they highlight systemic risks that unions are desperate to address before they become even more entrenched. It’s about ensuring that technology serves workers, not the other way around.

Can you unpack the core objectives unions are aiming for with these proposed AI regulations?

At the heart of these efforts is a push for transparency and accountability. Unions want employers to disclose how AI systems are used in decisions like hiring, promotions, or even terminations, so workers aren’t left in the dark about processes that directly affect them. There’s also a strong emphasis on requiring worker input before these systems are rolled out—think consultations or bargaining agreements to ensure the tech aligns with employee needs. Finally, there’s a focus on mitigating bias in AI tools, advocating for regular audits and impact assessments to prevent discriminatory outcomes. These goals collectively aim to create a framework where AI is a tool for fairness, not harm.
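As a concrete illustration of what such an audit can involve (this example is mine, not from the interview), a common first step is a disparate-impact check in the spirit of the EEOC's four-fifths rule: compare each group's selection rate under an AI screening tool to the highest-rate group, and flag ratios below 0.8. The group labels and numbers below are hypothetical.

```python
# A minimal sketch of a disparate-impact check, assuming hypothetical
# screening outcomes. Ratios below 0.8 (the "four-fifths" threshold)
# flag potential adverse impact worth a closer look.

def selection_rate(selected, applicants):
    """Fraction of applicants the screening tool advanced."""
    return selected / applicants

def adverse_impact_ratios(groups):
    """Compare each group's selection rate to the highest-rate group."""
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Hypothetical outcomes from an AI resume screener:
groups = {
    "group_a": (48, 100),  # 48 of 100 applicants advanced
    "group_b": (30, 100),  # 30 of 100 applicants advanced
}
ratios = adverse_impact_ratios(groups)
flagged = {g: r for g, r in ratios.items() if r < 0.8}
print(ratios)
print(flagged)  # group_b falls below the 0.8 threshold
```

A check like this is only a screening heuristic, not a legal determination, but it shows why unions want audits to be routine: the computation is cheap, and it surfaces disparities before they harden into hiring patterns.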

Why is workplace surveillance through AI such a significant concern for union members?

Surveillance is a huge issue because it strikes at the core of worker dignity and autonomy. AI-driven monitoring tools can track every move—whether it’s keystrokes in an office or productivity metrics in a retail setting—often without clear boundaries or consent. For union members, this feels like an invasion of privacy and a way to exert undue pressure, sometimes leading to unfair discipline or termination based on opaque data. The fear is that without regulation, these systems could normalize a culture of constant oversight, eroding trust between workers and employers. Unions are fighting to ensure there are strict limits on how surveillance data is collected and used.

What makes California’s approach to AI regulation, especially the rules kicking in during October 2025, stand out?

California’s upcoming rules are groundbreaking because they specifically target bias in employment-related AI systems. Starting in October 2025, employers will be required to assess their AI tools for potential discriminatory impacts, which is a direct response to union advocacy. What’s unique is the proactive nature of this mandate—it’s not just about reacting to complaints but about preventing harm through regular evaluations. This sets a high bar for accountability and could serve as a model for other states. It also reflects California’s broader commitment to tech regulation, leveraging its position as a hub for innovation to push for ethical standards.

How does the historical context of unions dealing with technological change inform their current stance on AI?

Unions have a long history of grappling with tech-driven disruptions, dating back to automation in manufacturing during the mid-20th century. Back then, the fight was over job losses to machines, and unions often secured retraining programs or severance packages through collective bargaining. The current battle over AI echoes those struggles but is more complex due to the intangible nature of algorithms and data. Unlike physical machinery, AI’s impact is often invisible until it’s too late. This history has taught unions that voluntary agreements with companies aren’t enough; they’re now pushing for binding regulations because past experiences show that without legal teeth, protections can be easily sidelined.

What kind of resistance are unions encountering from the tech industry regarding these state regulations?

The tech industry has been quite vocal in its opposition, primarily arguing that these regulations could stifle innovation by imposing burdensome compliance costs. They claim that smaller businesses, in particular, might struggle to meet the requirements for transparency or bias assessments, potentially slowing down AI adoption. There’s also a narrative that such rules create a patchwork of state laws, making it hard for companies to operate nationally. Unions, on the other hand, counter that innovation shouldn’t come at the expense of worker rights, and they’re working to ensure that regulations are practical while still protective. It’s a classic tension between progress and equity that’s playing out in real time.

How are unions addressing the argument that these AI regulations might harm small businesses or hinder technological progress?

Unions are aware of these concerns and are trying to strike a balance. They often emphasize that the goal isn’t to halt AI development but to guide it responsibly. For small businesses, unions and lawmakers are exploring exemptions or scaled-down requirements based on company size or resources, ensuring that the burden isn’t overwhelming. As for hindering progress, unions argue that ethical AI deployment actually fosters long-term innovation by building public trust in these systems. They point to examples where unchecked AI has led to backlash, suggesting that regulation can prevent costly missteps. It’s about creating a sustainable path forward for everyone involved.

What do you foresee as the future of AI regulation in the workplace over the next decade?

Looking ahead, I believe we’ll see a patchwork of state regulations solidify into more cohesive national guidelines, though it won’t happen overnight due to political gridlock at the federal level. States will continue to be the testing ground, with successes in places like California potentially inspiring broader adoption. I expect unions to play a pivotal role in shaping these policies, pushing for stronger worker protections like mandatory retraining programs for those displaced by AI. However, the tech industry’s influence will ensure a constant tug-of-war, so the outcome will likely be a compromise—robust enough to safeguard workers but flexible enough to allow innovation. The real question is how quickly we can adapt these frameworks to keep pace with AI’s rapid evolution.
