Desiree Sainthrope stands at the intersection of international trade and cutting-edge technology, offering a seasoned perspective on how global legal frameworks are struggling—and succeeding—to keep pace with the rapid evolution of artificial intelligence. With an extensive background in drafting complex trade agreements and managing global compliance risks, she has become a leading voice for corporations attempting to navigate the volatile landscape of AI law. Her work often involves dissecting the granular details of intellectual property in the digital age, ensuring that businesses do not just adopt new tools, but do so with a defensive posture that protects their long-term interests.
This conversation explores the transition of AI from a technical curiosity to a core business risk that demands high-level governance. We discuss the divergence between the structured rollout of the European Union’s AI Act and the enforcement-heavy approach of United States agencies like the FTC and EEOC. Our discussion also touches upon the operational realities of Colorado’s landmark AI legislation, the critical importance of vendor contract negotiations, and the necessary shift from treating AI as an IT issue to an enterprise-wide risk management priority.
The EU AI Act is entering its final rollout phases while U.S. agencies like the FTC focus on deceptive marketing and business-opportunity schemes. How should firms reconcile these differing regulatory philosophies, and what operational steps ensure a unified compliance strategy across both jurisdictions?
Reconciling these two philosophies requires a shift in mindset from “checking boxes” to “proving accountability.” In Europe, we see a very structured, phased approach where certain dates are etched in stone. Prohibited practices became illegal on February 2, 2025, for instance, and rules for general-purpose AI models took effect on August 2, 2025. This creates a predictable, though rigid, roadmap that contrasts sharply with the U.S. approach, where the FTC is already aggressively policing “AI lawyer” claims and deceptive marketing without a single unified code. To bridge this gap, a firm must first conduct a comprehensive inventory of every AI tool in use, documenting who approved it and what specific data it touches. From there, you must categorize these tools by risk level, mirroring the EU’s framework while simultaneously applying the FTC’s consumer protection standards to any external-facing tool. The final step is an AI literacy program, an obligation that has applied under the Act since February 2, 2025, so that every employee understands the risks of the systems they operate, effectively creating a “safety-first” culture that satisfies both Brussels and Washington.
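To make the inventory step concrete, here is a minimal sketch of what a single record in such an AI inventory might look like, with risk tiers loosely mirroring the EU AI Act’s categories. Every field name here is illustrative rather than prescribed by either regulator.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers, loosely mirroring the EU AI Act's categories."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AIToolRecord:
    """One row in a firm-wide AI tool inventory."""
    tool_name: str
    vendor: str
    approved_by: str            # who signed off on deployment
    approval_date: date
    data_categories: list[str]  # e.g. ["resume text", "credit history"]
    consumer_facing: bool       # external-facing tools get FTC-style scrutiny
    risk_tier: RiskTier

def needs_enhanced_review(record: AIToolRecord) -> bool:
    """Flag tools that warrant both EU-style tiered controls and
    U.S. consumer-protection review."""
    return (record.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH)
            or record.consumer_facing)
```

The point of keeping the record this structured is that the same inventory can be sorted two ways at once: by the EU’s risk tiers and by whether the FTC’s consumer-facing concerns apply.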
Colorado’s AI law requires reasonable care to prevent algorithmic discrimination and mandates human review of adverse, high-risk decisions. What practical workflows can businesses implement to facilitate this “human-in-the-loop” requirement, and how can they document these reviews to satisfy potential regulatory audits?
When Colorado’s law takes full effect on June 30, 2026, “reasonable care” will cease to be a vague suggestion and will become a documented mandate. Businesses should implement a “double-check” workflow where any high-risk decision—such as denying credit or a job application—is flagged by the system for a mandatory review by a trained human analyst. This isn’t just a cursory glance; the reviewer must have the authority to override the AI and must document the specific reasoning for either upholding or reversing the machine’s suggestion. To satisfy an audit, companies should maintain a “governance log” that records the date of the decision, the data points the AI used, the identity of the human reviewer, and the ultimate outcome. Providing public-facing summaries and clear notice to consumers when AI is a “substantial factor” in a decision adds a layer of transparency that feels less like a cold calculation and more like a responsible business practice.
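A minimal sketch of what one entry in such a “governance log” could look like, capturing the fields described above (decision date, data points used, reviewer identity, and outcome). The schema and the append-only JSONL storage are assumptions for illustration, not anything Colorado’s statute prescribes.

```python
from dataclasses import dataclass
from datetime import datetime
import json

@dataclass
class GovernanceLogEntry:
    """One auditable record of a human review of a high-risk AI decision."""
    decision_id: str
    decision_date: datetime
    ai_recommendation: str      # e.g. "deny credit application"
    data_points_used: dict      # the inputs the AI relied on
    reviewer_id: str            # identity of the trained human analyst
    reviewer_overrode_ai: bool
    reviewer_reasoning: str     # required whether upholding or reversing
    final_outcome: str

def append_to_log(entry: GovernanceLogEntry,
                  path: str = "governance_log.jsonl") -> None:
    """Append the entry to an append-only JSONL file for audit purposes."""
    record = {**entry.__dict__,
              "decision_date": entry.decision_date.isoformat()}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Making `reviewer_reasoning` a required field enforces the point above: the human must document why the machine’s suggestion was upheld or reversed, not merely that it was seen.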
Federal agencies now apply existing anti-discrimination laws to AI-driven hiring, screening, and performance evaluations. When an automated tool produces a biased result, what internal escalation paths must be in place, and how can a company prove its governance met the legal standard?
The EEOC has made it crystal clear that using a shiny new tool does not grant you a “get out of jail free” card under federal discrimination laws. If a screening tool inadvertently begins filtering out candidates based on protected characteristics, the internal escalation path must move immediately from the HR department to the legal and risk management teams. You need a pre-defined “circuit breaker” protocol where the tool is suspended the moment monitoring detects a meaningful disparity, such as a protected group’s selection rate falling below four-fifths of the highest group’s rate, the rule of thumb regulators have long applied, followed by a forensic audit of the training data. To prove governance in a real-world scenario, such as a lawsuit over a biased promotion algorithm, the company must produce evidence of periodic reassessments and “stress tests” performed on the software before and during its deployment. It is one thing to say you have a policy, but it is quite another to show a court a paper trail of active testing that demonstrates you were looking for bias before it became a legal crisis.
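As an illustration of the “circuit breaker” idea, the sketch below trips when selection rates cross the four-fifths threshold mentioned above. The 0.8 cutoff and the suspend-and-audit response are simplifying assumptions; real monitoring would also weigh sample sizes and statistical significance.

```python
def adverse_impact_ratio(selection_rates: dict[str, float]) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Under the four-fifths rule of thumb, a ratio below 0.8 is a red flag."""
    rates = selection_rates.values()
    return min(rates) / max(rates)

def circuit_breaker(selection_rates: dict[str, float],
                    threshold: float = 0.8) -> bool:
    """Return True (suspend the tool) when disparity crosses the threshold,
    triggering escalation from HR to legal/risk and a forensic audit
    of the training data."""
    return adverse_impact_ratio(selection_rates) < threshold

# Example: a quarter's selection rates by group from a screening tool.
rates = {"group_a": 0.40, "group_b": 0.28}
assert circuit_breaker(rates)  # 0.28 / 0.40 = 0.70 < 0.8 -> suspend
```

Each time this check runs, whether or not it trips, is exactly the kind of paper trail of active testing that a court will want to see.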
Generative AI involves complex risks regarding digital replicas, copyrightable outputs, and the protection of confidential information. When negotiating with third-party vendors, what specific indemnity and data-deletion clauses are most critical, and how should companies verify their data isn’t used for training?
Negotiating with AI vendors is currently like the Wild West, but your contracts need to be built like a fortress to protect your intellectual property. You must insist on clear “non-training” clauses that explicitly prohibit the vendor from using your confidential business data or trade secrets to improve their underlying models. Indemnity clauses are equally vital; you need the vendor to shoulder the legal burden if the AI produces an output that infringes on a third party’s copyright or creates an unauthorized digital replica. Furthermore, specific data-deletion provisions should be included, ensuring that once a contract ends, your data is scrubbed from their systems with a certificate of destruction provided as proof. To verify these claims, savvy companies are now demanding “right-to-audit” clauses, allowing them to periodically check the vendor’s data handling practices to ensure that “confidential” truly stays confidential.
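One way to operationalize that clause checklist during contract review is a simple gap check like the sketch below. The clause labels and descriptions are illustrative shorthand, not legal language.

```python
# Illustrative labels for the four protections discussed above.
REQUIRED_CLAUSES = {
    "non_training": "Vendor may not use customer data to train or improve models",
    "ip_indemnity": "Vendor bears liability for infringing outputs and "
                    "unauthorized digital replicas",
    "data_deletion": "Data scrubbed at contract end, with a certificate "
                     "of destruction as proof",
    "right_to_audit": "Customer may periodically audit vendor data handling",
}

def review_contract(present_clauses: set[str]) -> list[str]:
    """Return the required protections missing from a draft vendor contract."""
    return [f"MISSING: {desc}"
            for key, desc in REQUIRED_CLAUSES.items()
            if key not in present_clauses]

# Example: a draft that covers indemnity and deletion but nothing else.
for gap in review_contract({"ip_indemnity", "data_deletion"}):
    print(gap)
```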
Many organizations have written AI policies that do not align with their actual daily operations. How can leadership successfully move AI risk from the IT department to enterprise risk management, and what evidence of review is most persuasive to a court?
The most dangerous mistake a CEO can make is assuming that AI is a “tech problem” for the IT guys to handle in the server room. To move AI risk to Enterprise Risk Management (ERM), leadership must appoint an AI Governance Officer who bridges the gap between technical capability and legal liability. This transition involves taking those “glossy AI policies” and turning them into operational checklists that are reviewed at the board level every quarter. A court is rarely impressed by a well-written handbook that has been sitting on a shelf; what they find persuasive is tangible evidence of “active oversight,” such as minutes from risk committee meetings where AI performance was challenged. When you can show that leadership identified a high-impact use case, assessed the consequences, and assigned a specific person to be accountable for the results, you transform a potential liability into a defensible business process.
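A minimal sketch of how a “glossy policy” might be translated into a risk-register entry with a named owner and a running record of board-level reviews. The structure is an assumption for illustration, not a prescribed ERM format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIRiskRegisterEntry:
    """An enterprise-risk-management entry for a single AI use case."""
    use_case: str                 # e.g. "automated resume screening"
    impact_assessment: str        # consequences if the system misfires
    accountable_owner: str        # a named person, not "the IT department"
    board_review_dates: list[date] = field(default_factory=list)

    def record_quarterly_review(self, review_date: date) -> None:
        """Log a board-level review; committee minutes should cite this entry."""
        self.board_review_dates.append(review_date)

entry = AIRiskRegisterEntry(
    use_case="automated resume screening",
    impact_assessment="disparate-impact liability in hiring decisions",
    accountable_owner="AI Governance Officer",
)
entry.record_quarterly_review(date(2026, 3, 31))
```

The growing list of review dates, cross-referenced against committee minutes, is precisely the “active oversight” evidence described above.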
What is your forecast for AI governance?
I believe we are moving toward a period of “radical transparency” where the era of the “black box” algorithm is effectively over. By 2026, the companies that thrive will not be those that integrated AI the fastest, but those that can explain their AI’s decisions most clearly to a regulator or a judge. We will see a surge in mandatory third-party audits, much like financial audits, where independent experts certify that a company’s AI systems are compliant with both the EU AI Act and evolving U.S. state laws. Ultimately, governance will become a competitive advantage; customers and partners will gravitate toward firms that can prove their automated systems are safe, unbiased, and legally sound. The future belongs to the “accountable innovators” who realize that a careful legal review today is a much better investment than an expensive settlement tomorrow.
