Desiree Sainthrope stands at the intersection of traditional legal rigor and the high-speed evolution of modern technology. As a recognized authority in global compliance and trade agreements, she has spent years navigating the complexities of international law, and now she is applying that same analytical precision to the integration of artificial intelligence within the legal sector. Her perspective is particularly valuable as the industry moves past the novelty of generative tools toward a more mature phase of implementation. In this conversation, we explore the shifting landscape of legal practice, touching on the necessity of proving return on investment, the transformation of junior associates into strategic managers, and the critical importance of specialized, legal-specific tech stacks. We also delve into the challenges of maintaining ethical standards when faced with a surge of poorly drafted AI filings and how firms can leverage client collaboration to stay ahead in a saturated market.
Law firms have moved past debating AI’s necessity to focusing on specific return on investment. How do you define a granular level of utility for these tools, and what specific metrics should a firm track to justify the investment beyond simply checking a box?
The conversation around artificial intelligence has shifted with incredible speed; the flip from the “can we” of 2023 to the “how and why” of today was almost instantaneous. We are no longer looking for a simple “yes” or “no” regarding adoption, but rather a granular understanding of how these tools impact our bottom line and our efficacy. To justify these investments, firms must move beyond the hype and track metrics that reflect the actual quality and speed of delivery, such as the reduction in hours spent on initial document parsing or the accuracy of data extraction in large-scale reviews. It is about understanding the “why” at a level where we can see a direct correlation between the tool’s output and our ability to provide faster, richer insights to our clients. When everyone has access to the same technology, the real return on investment comes from how deeply that technology is integrated into specific workflows to create a tangible competitive edge.
As technology increasingly handles first drafts and initial document reviews, attorneys are transitioning from doers to managers or directors. What specific skills must junior associates develop to oversee AI output, and how does this shift fundamentally change the traditional apprenticeship model within a firm?
We are witnessing the end of an era where junior associates spent their formative years “doing” the grunt work, such as writing every word of a first draft or manually scanning thousands of documents for a single clause. We are all becoming managers now, and this requires our younger talent to develop advanced oversight and “directing” skills much earlier in their careers. Instead of learning by rote execution, they must learn to critically evaluate AI-generated content, spotting the subtle “probabilistic” errors that a machine makes but a human would not. This fundamentally disrupts the traditional apprenticeship model because the “learning by doing” phase is being replaced by “learning by auditing.” It creates a palpable shift in the office; the quiet hum of drafting is being replaced by the high-stakes pressure of ensuring that the machine’s foundational work meets our rigorous professional standards.
There is a growing distinction between tasks requiring absolute perfection and those where an AI-generated summary is sufficient for a quick overview. How do you establish internal standards for when to rely on these summaries versus reading full texts, and what are the practical steps for mitigating errors?
The “CliffsNotes” analogy is perfect here because there are certainly moments in a fast-moving case where a high-level summary provides exactly the momentum we need to make a quick tactical decision. However, we must be disciplined enough to know when the “whole book” is required, especially in high-stakes litigation where a single misinterpreted word can derail an entire strategy. Establishing internal standards means creating a clear rubric: if the output is for internal brainstorming, a summary may suffice, but if it is for a court filing or a client-facing opinion, the original text must be verified. Practical error mitigation involves a “trust but verify” protocol where every AI summary is cross-referenced with the source document by a human eye. It is about realizing that while perfection is the goal for the final product, it is not always necessary for every intermediate step of the research process.
Transitioning from predictable, deterministic AI to more autonomous, agentic systems requires a highly flexible governance framework. What specific protocols are necessary to protect client data during these automated multi-step tasks, and how can a firm ensure its rules adapt as quickly as the technology does?
The move from deterministic AI, which follows a predictable path, to agentic AI, which can autonomously combine tasks, represents a significant leap in complexity and risk. We need to build governance frameworks that are not just rigid sets of rules, but flexible systems capable of adapting as the technology changes every few months. This involves strict protocols for data sandboxing to ensure that client information never leaks into general training sets and implementing “human-in-the-loop” checkpoints at every stage of a multi-step automated process. We must also prioritize transparency, ensuring that we can trace the logic of an agentic system back to its source to maintain the integrity of our legal advice. The goal is to protect the firm, the individual attorney, and most importantly, the client, from the unpredictable nature of generative and probabilistic systems.
Many practitioners are seeing an increase in their workload due to poorly drafted AI briefs filed by opposing counsel or self-represented litigants. How should a legal team strategically handle the burden of challenging these flawed filings, and what specific consequences should courts implement to deter this behavior?
It is a frustrating irony that while AI is meant to save time, it is currently creating a massive amount of “clean-up” work when the opposing side uses it unscrupulously. I have seen a real uptick in my own practice, where we have to spend hours debunking briefs filled with hallucinations or irrelevant citations filed by pro se litigants or less careful attorneys. Strategically, firms must develop efficient “rebuttal templates” to quickly flag these systemic AI errors to the court without exhausting their own billable resources. I believe courts need to be more aggressive with sanctions, not just for lawyers who should know better, but also for pro se litigants who use these tools to clutter the docket with nonsense. There needs to be a clear message that while AI is a tool for efficiency, it is not a license to bypass the fundamental duty of accuracy and candor to the court.
Clients are now requesting detailed insights into a firm’s internal AI training programs and even asking to co-develop tools. How do you structure these collaborative sessions to demonstrate true value, and what role does transparent client service play in differentiating a firm when everyone uses similar software?
We are seeing a fascinating shift where clients are no longer satisfied just knowing we have bought the latest tools; they want to be in the room, participating in our educational programs and co-developing solutions. These collaborative sessions should be structured as workshops where we map out the client’s specific pain points and show exactly how our AI-enhanced workflows provide faster, more data-rich insights than the competition. Differentiation in this environment comes down to superior client service—being the firm that uses technology to get in front of problems before the client even sees them. By being transparent about how we use these tools, we build a level of trust that goes beyond the software itself. It’s about showing the client that the technology doesn’t replace the lawyer, but rather amplifies the lawyer’s ability to protect the client’s interests.
Many firms are currently auditing their entire tech stack to ensure every tool is compliant with ethical rules or replaced by legal-specific variations. What are the step-by-step phases of such an audit, and how do you identify which general-purpose tools are too risky for a legal environment?
We are entering a paradigm shift where the use of general-purpose technology is becoming increasingly untenable in a legal setting, necessitating a comprehensive audit of every tool we use. The first phase of such an audit is discovery—identifying every piece of software currently in use across the firm, from note-taking apps to document management systems. Next, we must evaluate each tool against our specific ethical duties, particularly regarding data privacy and the protection of attorney-client privilege. Any tool that uses client data to train a public model is immediately flagged as too risky and must be replaced by a legal-specific variation that offers a closed, secure environment. Finally, we implement a continuous monitoring phase, because a tool that is compliant today might change its terms of service tomorrow, potentially putting the firm at risk.
What is your forecast for AI in the legal industry?
I believe we are heading toward a landscape where the “AI or no AI” debate will be a distant memory, replaced by a profession defined largely by how effectively we manage these autonomous systems. In the near future, the most successful firms will be those that have fully transitioned their staff into high-level directors who oversee “legal-specific” ecosystems of agentic tools, ensuring that every output is ethically sound and strategically sharp. We will see a greater divide between firms that use AI as a mere cost-cutting “cure-all” and those that use it to deepen their expertise and provide unparalleled client service. Ultimately, the human element—the ability to navigate complex litigation strategies and maintain deep client relationships—will remain the most valuable asset, but it will be supported by a technological infrastructure that is faster, smarter, and more integrated than anything we saw at the start of this decade.
