Today, we’re thrilled to sit down with Desiree Sainthrope, a distinguished legal expert with a wealth of experience in drafting and analyzing trade agreements. With deep expertise in global compliance and a keen interest in emerging areas such as intellectual property in the tech sector and the implications of AI, Desiree offers a unique perspective on the intersection of law and technology. In this conversation, we explore her journey into AI and technology policy, the skills and strategies that have shaped her career, and her insights on navigating the rapidly evolving landscape of tech governance. From personal stories to practical advice, this interview delves into how lawyers can adapt and thrive in this dynamic field.
Can you share the story of how you got into AI and technology policy? What were the pivotal moments or choices that guided you to this area of law?
My journey into AI and technology policy wasn’t a straight line by any means. I started with a strong focus on trade agreements and global compliance, which often required me to grapple with complex regulatory frameworks. A turning point came when I began working on cases involving intellectual property in the tech sector. I saw firsthand how emerging technologies, like AI, were challenging existing legal norms. That sparked my curiosity, and I started diving deeper into the policy side—attending conferences, engaging with tech experts, and eventually contributing to discussions on governance. It was a natural evolution, driven by a desire to understand how law can keep pace with innovation.
What advice would you give to someone just starting out in law who’s interested in a career in AI governance or technology policy? Where should they begin?
I’d say start by exploring the intersection of law and tech through every avenue available. Take courses or certifications in technology policy or AI governance to build a foundation. Network with professionals in the field—attend symposiums, webinars, or join relevant bar associations. Early on, seek out internships or roles that expose you to tech-related legal issues, even if it’s just drafting contracts for software companies. The key is to immerse yourself in the language and challenges of the tech world while honing your legal skills. Don’t be afraid to ask questions and learn as you go.
Promoting your own story can be a powerful tool in career growth. How have you advocated for yourself in your career, and what specific actions helped create new opportunities for you?
Advocating for myself has been crucial. Early in my career, I made a point to share my insights through writing articles on compliance and tech law for industry publications. That not only helped me clarify my own thinking but also positioned me as someone with something to say. Speaking at events, even small panels or local meetups, was another game-changer. It forced me to articulate my expertise and connected me with people who later became collaborators or mentors. These actions built my credibility and opened doors to projects and roles I hadn’t anticipated.
Sometimes, experiences outside of law can shape your perspective in unique ways. How have your past roles or skills from other areas influenced your work in technology policy? Can you share a specific example?
Before diving fully into law, I worked on international trade negotiations, which required a lot of cross-cultural communication and strategic thinking. That experience taught me how to navigate complex stakeholder dynamics, which has been invaluable in tech policy work. For instance, when working on AI governance frameworks, I’ve used those same negotiation skills to align legal requirements with the practical needs of tech developers. Understanding different perspectives and finding common ground has often been the difference between a stalled policy discussion and a successful outcome.
Balancing career goals with flexibility is often a challenge. How do you manage to stay focused on your objectives while remaining open to unexpected opportunities? Can you recall a moment when adaptability led to a positive result?
I’ve learned that while goals give you direction, rigidity can blind you to amazing possibilities. I try to keep my core values—like ensuring fairness in tech regulation—at the forefront, but I’m flexible on how I get there. A few years ago, I was focused on a very specific role in compliance, but a chance conversation at a conference led me to contribute to a white paper on AI ethics. That project shifted my trajectory, connected me with key players in the field, and ultimately broadened my expertise. Staying open to detours can sometimes take you further than the planned path.
In a fast-moving field like AI, transferable skills are often highlighted as essential. What skills do you believe are most critical for lawyers entering technology policy, and how can they start building these early in their careers?
Analytical thinking and effective communication are at the top of the list. Lawyers need to break down complex tech concepts into actionable policy or legal strategies, and then explain those to non-legal stakeholders. Adaptability is also key—tech evolves so quickly that you have to be comfortable learning on the fly. Early on, aspiring lawyers can develop these by engaging in interdisciplinary projects, taking on tech-related case studies in law school, or even shadowing professionals in tech-heavy roles. Building a habit of curiosity and cross-disciplinary thinking will serve you well.
How important is it for lawyers in AI and tech policy to grasp the technical aspects, like software development? Do you think a deep dive into coding is necessary, or is a broader understanding sufficient?
I don’t think lawyers need to become coders, but a working knowledge of how technology is built and operates is essential. Understanding the software development lifecycle, for example, helps you anticipate where legal risks might arise or how policies will impact innovation. I’ve found that a general grasp—gained through reading, workshops, or just talking to developers—goes a long way. It equips you to ask the right questions and craft more effective governance solutions without needing to write code yourself.
Building connections with tech professionals who don’t have a legal background can be tricky. What strategies have you used to establish rapport with them and better understand their challenges and goals?
I’ve found that approaching tech professionals with genuine curiosity and humility works wonders. I often start by asking them to walk me through their process or explain what keeps them up at night. Listening more than speaking in those early conversations helps build trust. I also make an effort to learn the basics of their field so I can speak their language, even if it’s just at a surface level. For instance, when working with a software team on a compliance issue, I took the time to understand their deployment cycles. That showed I respected their work and made collaboration much smoother.
When tackling AI or tech-related legal issues, researching unfamiliar topics is crucial. Can you walk us through how you prepare to ensure you’re asking the right questions and applying your legal expertise effectively?
My process starts with immersion. I read up on the technology or issue—whether it’s through white papers, industry blogs, or even tutorials—to get a baseline understanding. Then, I identify the key legal intersections, like privacy or liability concerns. I also reach out to subject matter experts for informal chats to fill in my knowledge gaps. Before any major discussion or project, I draft a list of targeted questions to ensure I’m covering all angles. For example, when I first dealt with algorithmic bias, I spent weeks studying case studies and consulting with data scientists to ensure I could frame the legal issues accurately. It’s about building enough fluency to bridge the gap between law and tech.
Experimenting with tech tools or platforms can provide practical insights. Have you ever explored this hands-on approach, and if so, what did you learn that helped in your legal or policy work?
Yes, I’ve dabbled with some basic tools, particularly data analysis platforms, to better understand how AI systems process information. It was eye-opening to see the limitations firsthand—how biases can creep into datasets or how outputs aren’t always as reliable as they seem. That practical experience informed my approach to drafting policies around transparency and accountability. It gave me a clearer sense of what’s feasible to demand from tech companies and where legal guardrails are most needed. Even a small experiment can ground your perspective in reality.
Looking ahead, what is your forecast for the future of AI and technology policy? How do you see the role of lawyers evolving in this space over the next decade?
I think AI and technology policy will become even more central to global governance as these tools permeate every aspect of life. We’ll see greater emphasis on international cooperation to address issues like data privacy and algorithmic fairness, which don’t respect borders. For lawyers, the role will evolve into one of translator and strategist—bridging technical innovation with ethical and legal frameworks. I expect we’ll need to be more proactive, anticipating problems before they arise, and working alongside technologists from the design stage. It’s an exciting time, but it’ll demand continuous learning and adaptability from all of us in the field.
