As technology continues to reshape our world, few topics are as pressing as the regulation of artificial intelligence. Today, we’re speaking with Desiree Sainthrope, a legal expert with deep experience in drafting and analyzing trade agreements. With her extensive background in global compliance and a keen interest in the evolving implications of AI, Desiree offers a unique perspective on how regulation can shape the future of this transformative industry. In our conversation, we explore the balance between safety and innovation, the societal impacts of AI, the potential economic consequences, and how regulatory frameworks might influence competition in the sector.
How do you see the role of regulation in ensuring the AI industry develops responsibly?
Regulation in the AI industry is crucial for setting boundaries that protect society while still allowing for growth. Without clear rules, we risk unchecked development that could lead to ethical dilemmas or harm, like biased algorithms or misuse of personal data. My experience with trade agreements has shown me that well-crafted policies can create a level playing field and build public trust. The challenge is to design regulations that address real risks without stifling the creative potential of AI. It’s about finding a sweet spot: ensuring safety and accountability while giving innovators room to experiment.
What specific societal changes brought by AI do you find most concerning, and why?
I’m particularly worried about the impact on privacy and personal autonomy. AI systems can process vast amounts of data, often without individuals fully understanding how their information is being used. This can erode trust and even influence behavior in subtle, troubling ways. Another concern is the potential for widening inequality—AI could disproportionately benefit those who already hold power or resources. From a legal standpoint, these issues demand frameworks that prioritize transparency and fairness, ensuring that AI serves the broader public good rather than just a select few.
How can regulation address the balance between protecting society and avoiding the risk of slowing down AI innovation?
It’s a tightrope walk, no doubt. Regulation needs to be flexible and adaptive, focusing on principles rather than rigid mandates. For instance, instead of prescribing specific technologies, laws could require regular audits for bias or safety risks. In my work with compliance, I’ve seen that targeted regulations, ones that address specific harms without overreaching, can encourage companies to innovate within safe boundaries. Collaboration between policymakers, industry, and academia is also key to ensuring that rules evolve with the technology rather than lag behind and become obsolete.
There’s a concern that stricter AI regulations might favor larger companies over smaller startups. How do you respond to that critique?
That’s a valid concern. Larger companies often have the resources to navigate complex regulatory landscapes, while startups can get bogged down by compliance costs. I’ve seen this dynamic in international trade, where smaller players struggle with paperwork that big firms handle easily. To counter this, regulations should include provisions for scalability—perhaps tiered requirements based on a company’s size or market impact. We could also offer support mechanisms, like government-funded legal or technical assistance, to help smaller innovators comply without breaking the bank.
AI is often predicted to disrupt the job market significantly. What are your thoughts on how this might unfold in the coming years?
The potential for job displacement is real, especially in roles that involve routine or repetitive tasks, like data entry or customer support. The economic studies I’ve reviewed make it clear that while AI may eliminate some jobs, technology has historically created new opportunities; think of how the internet birthed entire industries. The legal challenge is crafting policies that support workers through this transition, such as retraining programs or incentives for industries that will grow with AI. We need to anticipate these shifts and prepare, rather than just react after the damage is done.
Some argue that fears of massive job losses due to AI are overblown, pointing to historical patterns of adaptation. How do you view this perspective?
I think there’s truth to the idea that economies adapt over time—look at how we moved from agriculture to industry to services. But the pace of AI’s advancement is unprecedented, and adaptation might not happen fast enough for everyone. In my legal analysis, I often consider worst-case scenarios to ensure protections are in place. While I don’t believe we’re facing a complete collapse, we can’t ignore the risk of short-term pain for certain workers or communities. Regulation can help by encouraging investment in education and skills that align with emerging needs, softening the blow of disruption.
What is your forecast for the future of AI regulation over the next decade?
I expect AI regulation to become more globalized over the next ten years, as countries realize that AI’s impacts don’t stop at borders. We’ll likely see more international agreements, similar to trade pacts, setting baseline standards for safety, ethics, and accountability. At the same time, I anticipate a push for harmonization—aligning regional laws to avoid a patchwork that confuses businesses. The big question is whether these efforts will keep pace with AI’s evolution. My hope is that with proactive collaboration, we’ll build a framework that fosters trust and innovation, but it will require constant vigilance and adaptation.
