With a distinguished career centered on the intricate frameworks of global trade and compliance, Desiree Sainthrope has a unique vantage point on the legal profession’s monumental shifts. Her work, which often involves navigating the complex interplay between established law and emerging technology, has made her a sought-after voice on the integration of artificial intelligence into legal practice. In our conversation, she explores the subtle yet profound ways AI is reshaping the industry, from the cautious adoption strategies of elite firms and the paradoxical burden of verifying AI outputs to the critical need for new training paradigms for young lawyers and the evolving, often tense, dialogue with clients about value and billing in an age of automation.
One firm leader described their AI strategy as being a “fast follower” to avoid recklessness. What are the practical risks of this cautious approach versus the competitive disadvantages of being too slow to adopt AI tools in a core business function like legal services?
That “fast follower” mindset perfectly captures the tension running through the industry. On one hand, the legal profession is built on precedent and risk aversion. A single mistake in a contract or a lawsuit can have billion-dollar consequences, so the fear of being reckless is visceral and completely understandable. The problem is that this caution can curdle into stagnation. One junior associate I spoke with said that equating AI adoption with recklessness is a huge mistake, and I agree. Law is the core business of these elite firms. To willingly position yourself as a “follower” in something so fundamental to your future efficiency and service delivery is a massive strategic risk. The competitive disadvantage isn’t just about speed; it’s about insight, capability, and ultimately relevance. While you’re cautiously waiting, another firm is learning how to use AI to analyze discovery documents in half the time, deliver better insights to clients, and price its services more competitively. The risk of being too slow isn’t just falling behind; it’s becoming obsolete.
Lawyers find that verifying AI output for high-stakes tasks can negate productivity gains. What specific workflows or quality control strategies have you seen effectively reduce this verification time without compromising the quality of the legal work? Please share a step-by-step example.
This is the central paradox holding back wider adoption, and it’s a very real frustration. I heard a story about a junior associate whose attempt to use Microsoft Copilot for analysis ended up taking more time to double-check than it saved. The key is to shift from passively reviewing a finished product to actively directing the AI as a research assistant. A highly effective workflow I’ve seen combines a prompting strategy with verification shortcuts. First, instead of asking the AI to simply summarize a document, a patent litigator I know instructs the model to “show its work”: he prompts it to embed direct quotes from the source material to support every key assertion in its summary. Second, he uses that AI-generated summary not as a draft but as an interactive table of contents he can scan quickly for the key points. Third, when he sees a critical proposition he needs to confirm, he takes the exact quote provided by the AI and uses “Ctrl-F” in the original document, which takes him straight to the source text in context. This method transforms verification from a painful, line-by-line rereading process into a series of targeted spot-checks, dramatically cutting down the time while maintaining rigorous quality control.
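To make that third step concrete, here is a minimal sketch in Python of how the quote spot-check could be automated. This is my illustration, not the litigator’s actual tooling: it assumes the AI has been prompted to wrap its supporting quotes in double quotation marks, and the file names are placeholders.

```python
import re

# Matches text inside straight or curly double quotes
# (assumes the AI was prompted to quote its sources this way).
QUOTE_RE = re.compile(r'[“"](.+?)[”"]')

def normalize(text: str) -> str:
    """Collapse whitespace so line breaks in the source don't cause false misses."""
    return " ".join(text.split())

def verify_quotes(summary: str, source: str) -> dict:
    """Check that each quoted snippet in the AI summary appears verbatim in the source.

    Mirrors the manual Ctrl-F spot-check: any quote not found verbatim
    is flagged for line-by-line human review.
    """
    source_norm = normalize(source)
    return {q: normalize(q) in source_norm for q in QUOTE_RE.findall(summary)}

if __name__ == "__main__":
    # "contract.txt" and "ai_summary.txt" are placeholder file names.
    with open("contract.txt") as f:
        source = f.read()
    with open("ai_summary.txt") as f:
        summary = f.read()
    for quote, found in verify_quotes(summary, source).items():
        status = "OK" if found else "NOT FOUND, verify manually"
        print(f"[{status}] {quote[:60]}")
```

Any quote the script flags as not found is exactly where human attention should go; everything that matches verbatim can be confirmed in seconds with the same Ctrl-F jump described above. Collapsing whitespace before comparing avoids false alarms caused by line breaks in the source document.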
It’s been observed that corporate groups adopt AI more readily than litigation groups, where opponents might exploit any error. Beyond simply verifying outputs, how does this adversarial pressure change the ideal workflow or choice of AI tools for litigators compared to their transactional counterparts?
The difference in adoption between corporate and litigation is fascinating because it’s so deeply psychological. A partner I know framed it as the difference between a “good-enough practice” and a “perfection practice.” In a corporate deal, your goal is to close the transaction efficiently and protect your client. In litigation, your opponent’s goal is to actively ruin your day and dismantle your case. Every single word you produce is scrutinized for weakness. This constant adversarial pressure completely changes the calculus. For litigators, the ideal workflow confines AI to internal, “first pass” tasks. It’s brilliant for getting up to speed by summarizing a massive court docket or hours of deposition transcripts—work that will never be seen by the other side. When it comes to choosing tools, a litigator will prioritize traceability over creativity. A tool like Google’s Gemini, which can highlight the specific text in a source document that supports its claim, is far more valuable than a more powerful generative model that might produce a beautifully written but subtly inaccurate argument that could be exploited in court. For them, the cost of a mistake isn’t just a financial loss; it’s a loss of credibility and strategic position.
Some senior partners worry that if junior lawyers use AI for routine tasks, they won’t develop foundational legal judgment. How can firms integrate AI into junior associate training in a way that accelerates learning without sacrificing the development of core analytical skills?
This is a legitimate and profound concern. A partner posed the question, “If you haven’t made the closing checklist or mapped out the triggering conditions for a merger, will you know enough to catch mistakes when they arise?” You can’t develop a feel for the law if you’ve never done the foundational work. The solution isn’t to ban AI, but to integrate it into a scaffolded training model. Firms should encourage junior associates to use AI for the “first and last pass” of a project. For the “first pass,” they can use it to generate a very rough draft of a research memo, familiarizing them with the landscape of an issue. For the “last pass,” they can use it to proofread, polish the tone of an email, or check citations. The crucial, middle part of the process—where they make key strategic decisions, structure the core arguments, and exercise true legal judgment—must still be done manually, with intensive partner oversight. This approach treats AI as an accelerator for the low-level tasks, freeing up more time for associates to shadow senior lawyers, receive mentorship on high-level strategy, and learn the art of lawyering, not just the process.
Many lawyers struggle to find use cases for AI, while others prefer specialized tools like Harvey that offer pre-built workflows. What are the key trade-offs for a lawyer choosing between a powerful general model like Claude and a domain-specific tool that may be less advanced?
The choice between a general model and a specialized tool comes down to a trade-off between power and accessibility. I spoke to one associate who was so busy that he simply had no time to dream up potential use cases for AI. For him, and many like him, a tool like Harvey is a godsend. It presents a clear menu of options: “translate documents,” “analyze court transcripts,” “extract data from court filings.” It removes the cognitive load of figuring out where the technology fits and lets you start working immediately. The downside, as another lawyer pointed out, is that the underlying models of these specialized tools often lag behind the state of the art. He told me he still prefers Claude for tasks involving public information because it’s simply a “better model.” So the trade-off is this: do you want a user-friendly, pre-packaged solution that gets you 80% of the way there with minimal effort, or the raw power of a cutting-edge model that requires more skill and creativity to prompt effectively but might yield a superior result?
The billable hour model can create a conflict between a firm’s revenue and a client’s desire for AI-driven efficiency. How should lawyers proactively discuss AI use and pricing with clients to align incentives, especially when different clients have vastly different goals?
This is the commercial and ethical heart of the matter. The billable hour creates a fundamental misalignment of incentives: if AI makes a lawyer twice as efficient, then under hourly billing the revenue from that work is cut in half, which is an untenable business model. The solution must be proactive and transparent communication. Lawyers need to stop making assumptions and start having frank conversations with their clients at the outset of a matter. I heard about one senior associate who does this masterfully. He has one client who wants a “scorched earth” approach, leaving no stone unturned, and for them he does most of the work manually. Another client tells him to “work cheap and focus on the 80/20 stuff,” so for them he uses AI extensively and concentrates verification on the most critical clauses. The conversation should be explicit: “We have AI tools that can significantly increase our efficiency on certain tasks. We can offer a fixed fee for this part of the project, a reduced hourly rate, or stick to a traditional model. What outcome is most important to you?” By presenting options, you shift the dynamic from a conflict over hours to a collaboration over value.
What is your forecast for the legal profession’s relationship with AI over the next five years?
Over the next five years, I predict the gap between AI adopters and laggards will widen into a chasm. The conversation will move beyond whether to use AI to how to fundamentally restructure firm operations and culture around it. Given that researchers have found AI agent capabilities doubling roughly every seven months, the idea of a one-time “AI strategy” will become obsolete. The most successful firms will institutionalize a process of constantly re-evaluating tools and retraining their lawyers. We’ll see the decline of the pure billable hour for many routine tasks, replaced by a menu of fixed-fee, subscription, and other value-based pricing models that clients will begin to demand. The existential fear of being replaced will fade, and in its place will be the competitive reality that lawyers who don’t effectively partner with AI will be unable to compete with those who do. The most valuable lawyer in 2029 won’t be the one who knows the most case law, but the one who knows how to ask the right questions—of their clients, of their junior associates, and of their AI.
