Desiree Sainthrope is a renowned legal expert with a deep background in drafting and analyzing trade agreements, as well as a recognized authority in global compliance. Her expertise spans diverse areas of law, including intellectual property, along with the transformative role of emerging technologies like artificial intelligence. In this engaging conversation, we explore how generative AI is reshaping legal workflows, the art of crafting effective prompts to optimize AI outputs, and the challenges and opportunities professionals face in adapting to these tools. Desiree shares her insights on training strategies, the impact of prompting on review processes, and the future of AI in law firms.
How did you first become aware of the shift from research to review time when using generative AI in legal work?
I noticed this shift a few years ago when AI tools started becoming more accessible in legal settings. Traditionally, research was the time-consuming part—digging through case law, statutes, and precedents could take hours or even days. But with generative AI, that research phase shrank dramatically. A tool could pull together a draft or summary in minutes. However, what I quickly realized was that the time saved on research was now being spent on reviewing the output. We had to scrutinize every detail to ensure accuracy, relevance, and alignment with our firm’s standards. It was a trade-off, and it made me see how critical it was to get the input right from the start.
Why do you believe prompting has emerged as such a vital skill for legal professionals working with AI?
Prompting is vital because it directly shapes the quality of the AI’s output. In law, precision matters—whether you’re drafting a contract or summarizing a case, a small misstep can have big consequences. A poorly worded prompt can lead to vague or irrelevant results, which means more time spent fixing errors in the review phase. On the flip side, a well-crafted prompt acts like a clear set of instructions, guiding the AI to produce something usable right out of the gate. For legal professionals, mastering prompting isn’t just a nice-to-have; it’s becoming a core competency to stay efficient and competitive.
Can you break down what makes a ‘good prompt’ and why it’s so important in getting reliable AI results?
A good prompt is clear, specific, and sets the right expectations for the AI. It’s like giving directions to someone who’s unfamiliar with the terrain—you can’t just say, “Write a contract.” You need to specify the type of contract, the jurisdiction, the tone, and any key clauses to include. This matters because AI models thrive on structure. Without it, you get output that’s either too broad or completely off-target. A good prompt reduces guesswork for the AI, which in turn cuts down on the time and effort needed to refine the results. It’s all about front-loading the clarity to avoid a mess later.
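To make the "directions" analogy concrete, here is a minimal sketch of how a structured drafting prompt might be assembled programmatically. The `run_model` call and the field names are hypothetical placeholders for whatever AI tool a firm licenses, not any specific product's API.

```python
# Minimal sketch: front-loading specifics into a legal-drafting prompt.
# run_model() is a hypothetical stand-in for the firm's AI tool.

def build_contract_prompt(contract_type, jurisdiction, tone, key_clauses):
    """Spell out type, jurisdiction, tone, and clauses so the model guesses less."""
    clause_list = "\n".join(f"- {clause}" for clause in key_clauses)
    return (
        f"Draft a {contract_type} governed by {jurisdiction} law.\n"
        f"Use a {tone} tone suitable for review by outside counsel.\n"
        f"Include the following clauses:\n{clause_list}\n"
        "Flag any terms that typically require partner review."
    )

prompt = build_contract_prompt(
    contract_type="software licensing agreement",
    jurisdiction="New York",
    tone="formal",
    key_clauses=["limitation of liability", "IP ownership", "termination"],
)
# draft = run_model(prompt)  # hypothetical call to the firm's AI tool
print(prompt)
```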
What are some common pitfalls people fall into when crafting prompts for AI tools?
One of the biggest pitfalls is being too vague. For example, asking an AI to “summarize a case” without mentioning which case, the jurisdiction, or the specific points to focus on often leads to generic or irrelevant summaries. Another issue is overloading the prompt with too many requests at once, like asking for a summary, analysis, and draft in a single go. This confuses the model and dilutes the quality of the response. Lastly, people sometimes forget to set the tone or context—like specifying if the output should be formal for a court filing or conversational for a client email. These oversights create outputs that need heavy editing.
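As an illustration of the overloading pitfall, the sketch below splits a single catch-all request into focused, sequential prompts. The `run_model` function and the case name are hypothetical placeholders for illustration, not real tool calls or citations.

```python
# Minimal sketch: one task per prompt instead of an overloaded request.
# run_model() is a hypothetical stand-in so the example runs end to end.

def run_model(prompt: str) -> str:
    # Placeholder; a real call to the firm's AI tool would go here.
    return f"[model output for: {prompt[:60]}...]"

case = "Smith v. Jones (hypothetical citation)"

# Overloaded: summary, analysis, and draft in one go dilutes the result.
overloaded = f"Summarize, analyze, and draft a client memo on {case}."

# Focused: one task per prompt, each step building on the last.
summary = run_model(f"Summarize the holding and key reasoning in {case} "
                    "in 200 words for a U.S. audience.")
analysis = run_model("Using this summary, analyze the implications for "
                     f"software licensing:\n{summary}")
memo = run_model("Turn this analysis into a one-page client memo in a "
                 f"plain-English, conversational tone:\n{analysis}")
print(memo)
```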
How does narrowing the scope of a prompt lead to better outcomes when working with generative AI?
Narrowing the scope is like zooming in on exactly what you need. If you ask an AI for a broad overview of intellectual property law, you’ll get a sprawling response that might not address your specific concern. But if you narrow it to, say, “Explain copyright infringement defenses under U.S. law for software developers,” the output becomes much more focused and actionable. This precision helps the AI avoid irrelevant tangents and deliver content that’s directly useful. It also makes the review process faster since you’re not wading through unnecessary information.
What were some of the key takeaways from your session on effective prompting at ILTACON ’25?
During the session, we emphasized that prompting isn’t about finding some secret phrase—it’s about clarity and intent. We walked through real examples of how a vague prompt leads to unusable output, while a structured one saves hours of rework. We also stressed the importance of iteration; prompting is a skill you build over time by experimenting and refining your approach. The live demo was a big hit because it showed this in action, letting the audience see firsthand how small tweaks in wording could transform the AI’s response from confusing to spot-on.
How do you use repetitions and scoring to train lawyers in crafting better prompts for AI tools?
Repetitions are key to building any skill, and prompting is no different. We set up exercises where lawyers write prompts for specific tasks—like drafting a memo or summarizing a regulation—and then run them through an AI tool. We do this over and over with slight variations to help them see what works and what doesn’t. Scoring comes in as immediate feedback; we evaluate their prompts based on how clear, specific, and effective they are at getting the desired output. This instant critique helps them adjust on the fly and internalize the principles of good prompting. It turns abstract concepts into practical learning.
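A rough sense of how such scoring might be automated for drills is sketched below. The heuristics and point values are assumptions for illustration; they are not the rubric Sainthrope's team actually uses.

```python
# Minimal sketch: heuristic scoring of a trainee's prompt for drill feedback.
# The weights and checks are illustrative assumptions, not a real firm rubric.

def score_prompt(prompt: str, output_edit_minutes: float) -> dict:
    """Return crude clarity/specificity/effectiveness scores for one repetition."""
    scores = {
        # Clarity proxy: the prompt leads with a single explicit task verb.
        "clarity": 1 if prompt.strip().lower().startswith(
            ("draft", "summarize", "explain", "list")) else 0,
        # Specificity proxy: count how many concrete anchors are named.
        "specificity": sum(term in prompt.lower()
                           for term in ("jurisdiction", "tone", "audience", "clause")),
        # Effectiveness proxy: less rework on the AI's output scores higher.
        "effectiveness": 2 if output_edit_minutes < 15
                         else 1 if output_edit_minutes < 45 else 0,
    }
    scores["total"] = sum(scores.values())
    return scores

# Example drill: score one repetition and hand the feedback straight back.
print(score_prompt(
    "Draft a termination clause for a New York employment agreement, "
    "formal tone, audience is opposing counsel.",
    output_edit_minutes=10))
```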
How does a well-crafted prompt influence the review phase after an AI generates a draft?
A well-crafted prompt can make the review phase so much smoother. When the prompt is clear and specific, the AI’s draft is more likely to be accurate and aligned with what you need, so you’re not spending hours correcting factual errors or rewriting entire sections. For instance, if your prompt specifies the tone and audience for a client letter, the draft won’t come out sounding like a court filing. This cuts down on preventable mistakes and lets you focus on fine-tuning rather than overhauling. In my experience, a good prompt can halve the review time, which is huge in a field where time is money.
What are some of the biggest challenges you’ve seen legal professionals face when integrating generative AI into their workflows?
One major challenge is trust—many lawyers are hesitant to rely on AI outputs because they’re worried about accuracy or ethical implications. They’ll double-check everything, which can negate the time-saving benefits. Another hurdle is adapting to the technology itself; some professionals struggle with the learning curve of crafting effective prompts or understanding the limitations of AI. Tone is also a sticking point—AI might produce something that’s factually correct but doesn’t match the firm’s voice or client expectations. Overcoming these challenges requires both training and a mindset shift to see AI as a tool, not a replacement for human judgment.
What is your forecast for the role of generative AI in law firms over the next decade?
I think generative AI will become an indispensable part of legal practice over the next decade. We’re already seeing it streamline repetitive tasks like document drafting and legal research, and I expect that to expand into more complex areas like predictive analytics for case outcomes or automated compliance checks. But the human element—judgment, ethics, and client relationships—will remain irreplaceable. The firms that thrive will be those that invest in training their teams to use AI effectively, especially in mastering skills like prompting. I also foresee tighter regulations around AI use in law to address privacy and bias concerns, which will shape how we integrate these tools. It’s an exciting time, but it’ll require careful navigation.