How Should CX Navigate New AI Regulations?

Today we’re speaking with Desiree Sainthrope, a renowned legal expert whose extensive experience in drafting and analyzing global trade agreements gives her a unique perspective on the intricate web of technology governance. As AI reshapes customer interactions, Desiree’s insights are crucial for leaders navigating this new frontier. Our conversation will explore the complex dynamic between federal ambitions and existing state AI laws, the critical shift from relying on regulatory goalposts to building robust internal governance, and the strategies customer experience leaders must adopt to ensure they are at the center of AI decision-making, not on the sidelines. We’ll also touch on how to build consumer trust in an era of rapid technological change and learn from the missteps of past technology waves.

An executive order signals a federal intent to challenge state AI laws, yet those state rules remain active. How should CX leaders balance scaling AI with the continued risks from this patchwork of regulations? Please share a step-by-step approach for navigating this uncertainty.

This is the central tension for every CX leader right now. The executive order is a signal of intent, not a legal preemption. It reshapes the direction, but it doesn’t erase the laws on the books, so assuming immediate relief would be a grave mistake. The first step is to conduct a thorough audit to understand precisely how and where you are using AI across the entire customer journey. Second, you must focus your governance efforts on what we call “higher-impact use cases”: areas like customer support resolutions or personalized marketing in sensitive sectors such as healthcare or finance. The third, and perhaps most critical, step is to ensure meaningful human oversight is baked into these systems. Don’t assume the technology can run on its own. These state laws remain fully in force, and you should operate as though they will be enforced, because they can be.

A unified federal AI framework could create operational consistency but remove specific state-level guardrails on bias and data handling. What new internal governance processes should CX leaders prioritize to fill this potential gap? Could you provide a specific example of an effective internal standard?

This is an excellent point. Consistency is appealing, but it can create a vacuum where detailed guidance used to be. CX leaders need to pivot from a mindset of compliance to one of proactive governance. The priority should be to establish internal standards that exceed any baseline federal requirements. A powerful example would be creating a “Fairness and Transparency Charter” for all customer-facing AI. This isn’t just a document; it’s an operational framework. It would mandate, for instance, that any automated decision-making system affecting a customer’s service level or financial options must be explainable in plain language, and that there must be a clear, human-staffed appeals process. You would review the systems you already built to meet specific state laws and decide which of those robust practices you want to keep as your own internal gold standard, turning governance into a competitive advantage rather than just an overhead cost.

We often see major AI decisions driven by CTOs and CIOs, which can sideline customer service leaders and lead to failed implementations. What specific strategies can CX leaders use to ensure they have a central role in AI decision-making? Please describe an ideal collaboration process.

This disconnect is one of the primary reasons for the high rate of failed AI implementations we’re already seeing. It’s absolutely critical that CX leaders are not just consulted but are integral to the decision-making process from day one. An ideal collaboration begins before a single line of code is written. The CX leader should co-lead the initial discovery phase with the CTO or CIO, mapping the customer journey and identifying the exact pain points where AI could provide real value, not just technological novelty. They bring the voice of the customer and the frontline agent to the table. This means being in the room to define the project’s goals, select the vendors, and, crucially, establish the metrics for success—which must include customer satisfaction and agent efficiency, not just call deflection rates. Without this partnership, technology is implemented in a vacuum, and it almost always fails to meet customer expectations.

Since consumer trust persists even when specific regulations change, how can organizations proactively demonstrate fairness and transparency in their customer-facing AI? What key metrics should they track to prove their systems are accountable, beyond simply meeting baseline legal requirements?

You’ve hit on a fundamental truth: consumer trust doesn’t disappear when regulations do. It’s an asset you have to build and maintain constantly. To demonstrate fairness, organizations must go beyond legal compliance and track metrics that prove accountability. For example, regularly audit your AI models for demographic bias in outcomes, not just in the data they were trained on. Track the “appeal rate”—how often customers challenge an AI-driven decision—and the “overturn rate,” where a human agent reverses the AI’s conclusion. Another key metric is “explainability success,” which measures how often your agents can successfully and clearly explain an AI-generated recommendation to a customer. Publicly reporting on these metrics in an annual trust report can be a powerful way to show, not just tell, customers that you are holding your systems accountable.
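To make these measurements concrete, here is a minimal sketch of how such accountability metrics might be computed from an interaction log. The log schema, field names, and the accountability_metrics helper below are hypothetical illustrations for this article, not references to any particular CX platform or vendor API.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    """One AI-assisted customer interaction (hypothetical log schema)."""
    ai_decision_made: bool       # an automated decision affected the customer
    appealed: bool               # the customer challenged that decision
    overturned: bool             # a human agent reversed the AI's conclusion
    explanation_attempted: bool  # an agent tried to explain the AI recommendation
    explanation_clear: bool      # the customer confirmed the explanation made sense

def accountability_metrics(log: list[Interaction]) -> dict[str, float]:
    """Compute the appeal, overturn, and explainability-success rates discussed above."""
    decisions = [i for i in log if i.ai_decision_made]
    appeals = [i for i in decisions if i.appealed]
    explained = [i for i in log if i.explanation_attempted]

    def rate(part: int, whole: int) -> float:
        # Avoid division by zero when a category has no interactions yet.
        return part / whole if whole else 0.0

    return {
        "appeal_rate": rate(len(appeals), len(decisions)),
        "overturn_rate": rate(sum(i.overturned for i in appeals), len(appeals)),
        "explainability_success": rate(sum(i.explanation_clear for i in explained),
                                       len(explained)),
    }
```

In practice, the same ratios could be segmented by customer demographic or product line, which is one way to surface the outcome-level bias the audit described above is meant to catch.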

The current AI moment is being compared to the early, error-prone days of offshore outsourcing. What are the biggest “oops” moments CX leaders should anticipate with AI, and what internal frameworks can they build now to mitigate those risks during this adjustment period?

That comparison to early outsourcing is incredibly apt. There’s the same mix of massive hype, a rush to adopt, and the potential for significant missteps. The biggest “oops” moment I foresee is what I call “empathy failure,” where a chatbot or automated system provides a technically correct but emotionally tone-deaf response to a customer in distress, causing irreparable brand damage. Another is “cascading bias,” where a small, undetected bias in a marketing AI leads to discriminatory offers at a massive scale. To mitigate this, leaders need to build a “human-in-the-loop” framework for high-stakes interactions, ensuring a human can intervene before a crisis. They also need to establish a rapid-response “AI Incident Team,” much like a cybersecurity team, that can quickly diagnose and correct AI errors before they escalate. We are going to move fast, and there will be mistakes; the key is having the resilience to catch and correct them.

What is your forecast for the AI regulatory landscape and its impact on customer experience over the next two to three years?

My forecast is for a period of “structured uncertainty.” In the near term, the confusion will likely increase as federal agencies begin their evaluations and states defend their laws. However, over the next two to three years, I expect we will see a baseline federal framework emerge, likely focusing on high-risk applications and transparency in communications, especially in sectors like finance and healthcare. This won’t be the end of the story, though. This federal floor will empower proactive companies to differentiate themselves through stronger, self-imposed ethical governance. The most successful CX leaders will be those who stop waiting for the regulatory goalposts to settle and start building their own internal standards of excellence. In the end, the regulations will only codify what the best companies are already doing: using AI to build, not break, customer trust.
