Linklaters Launches AI Practice for Complex Legal Matters

Desiree Sainthrope is a leading legal authority with deep expertise in global compliance and the intricate mechanics of international trade agreements. As the legal landscape shifts toward data-heavy environments, she has become a pivotal voice in how artificial intelligence is integrated into the practice of law. We discuss the recent move toward creating bespoke AI practices that unite technical and legal talent in the front office. This conversation explores the structural, financial, and strategic evolution required to handle the uncommonly large and complex data sets that define modern legal challenges.

Moving data scientists from back-office support into front-office roles creates a direct partnership with attorneys on live matters. How does this structural shift change the daily interaction between technical and legal teams, and what specific challenges arise when integrating these distinct professional cultures into a single unit?

This shift is a fundamental reimagining of the legal workspace, where we move away from treating data scientists as a hidden “help desk” and instead seat them at the table as equal architects of a legal strategy. By launching a team composed of three attorneys and three data scientists, we are creating a high-energy environment where the technical staff feels the immediate pressure and the high stakes of a live matter. The primary challenge is often a collision of vocabularies; lawyers speak in terms of risk, precedent, and nuance, while data scientists prioritize probability, architecture, and code efficiency. Breaking down these silos requires an intense period of adjustment where both sides learn to translate their expertise into a shared language that serves the client. When it works, you can feel the momentum shift in the room as a scientist discovers a technical shortcut that solves a problem an attorney thought would take weeks to manually review.

Effective triage is necessary when determining which complex data sets require bespoke AI solutions versus standard firm tools. What specific criteria do you use to evaluate matter suitability, and how do you ensure these advanced workflows integrate seamlessly with the work of traditional practice groups?

Our triage process is built on identifying matters that have a level of data complexity that would essentially break our standard firm-wide tools like Legora. We look specifically for projects involving uncommonly large data sets or those that require a level of customization that our existing practice groups simply couldn’t handle on their own. Part of our role is to act as a bridge, ensuring that once we build a bespoke workflow, it doesn’t exist in a vacuum but actually enhances the work of the traditional lawyers. We also evaluate the potential for repeatability; if we build a specialized tool for one complex financing package, we look for ways to apply that same logic to future matters across different clients. It is a constant balancing act between creating something “one-of-a-kind” and ensuring that the innovation is scalable enough to be useful for the entire firm.
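The triage criteria described here can be reduced to a simple decision rule. The sketch below is purely illustrative (the class name, fields, and the document-count threshold are assumptions, not the firm's actual intake logic): a matter is routed to the bespoke AI team when its data set is uncommonly large or when it needs customization beyond standard tooling, with repeatability tracked as a scaling consideration rather than a gate.

```python
from dataclasses import dataclass

@dataclass
class Matter:
    """Minimal description of a matter for triage (illustrative fields)."""
    document_count: int           # size of the underlying data set
    needs_custom_workflow: bool   # beyond standard firm-wide tools
    is_repeatable: bool           # logic reusable on future matters

def route_to_ai_practice(matter: Matter, size_threshold: int = 100_000) -> bool:
    """Route a matter to the bespoke AI team when it is either too large
    or too customized for standard tooling. Repeatability informs how the
    solution is built, not whether the matter qualifies."""
    too_large = matter.document_count > size_threshold
    return too_large or matter.needs_custom_workflow
```

In practice the threshold would be calibrated against what the firm's standard tools can actually absorb, but the structure of the rule — size or customization, either alone sufficing — mirrors the criteria described above.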

Bank regulatory compliance and class-action triage involve increasingly expensive and complex data layers. When applying AI to these high-stakes areas, what steps are taken to ensure the technology handles the heavy lifting while still allowing for human judgment and advice to remain the primary value?

In high-stakes arenas like bank regulatory work, the goal of the AI is to clear the fog so the attorney can actually see the landscape they are navigating. Banks are under immense pressure to prove compliance across multiple jurisdictions, a process that has become prohibitively expensive using traditional, labor-intensive methods. We use AI to manage the massive data layer—such as triaging thousands of claims in a class action or scanning reports against complex financing covenants—but we never let the tool have the final word. The technology provides the structure and the “first pass,” but the unique value we offer is the judgment and advice that a partner layers over the top of those results. There is a certain sensory relief for a client when they see a mountain of digital chaos distilled into a clean, actionable legal opinion that a human expert is willing to stand behind.
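The "first pass, human final word" workflow described here has a natural shape in code: the model triages the bulk of the claims, and anything it is not confident about is routed to an attorney. This is a minimal sketch under stated assumptions — the function names, the confidence floor, and the idea that the classifier returns a (label, confidence) pair are all illustrative, not the firm's actual pipeline.

```python
def first_pass_triage(claims, classify, confidence_floor=0.85):
    """Let the model handle the heavy lifting on a large claim set,
    but flag anything below a confidence floor for attorney review.
    `classify` is any callable returning a (label, confidence) pair."""
    auto_triaged, for_review = [], []
    for claim in claims:
        label, confidence = classify(claim)
        if confidence >= confidence_floor:
            auto_triaged.append((claim, label))
        else:
            for_review.append(claim)  # human judgment keeps the final word
    return auto_triaged, for_review
```

The design point is that the tool never disposes of a claim on its own: low-confidence items fall through to the review queue by default, so the partner's judgment remains the layer the client ultimately relies on.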

Utilizing a variety of large language models and data enrichment platforms requires a sophisticated technical infrastructure. How do you select the specific model or API for a particular matter, and what is the step-by-step process for cleaning and enriching datasets before they reach the legal team?

We operate with a broad, model-agnostic toolkit that allows us to select the best “brain” for the specific task at hand, whether that means leveraging models from Google, Anthropic, or OpenAI. Our infrastructure relies on Microsoft Foundry and Google Vertex to access these models, but the real work starts long before an LLM is ever engaged. We use APIs to pull data from various sources and then deploy Databricks as our central hub to collect, clean, and enrich those datasets. This step-by-step process ensures that we aren’t feeding “garbage” into the AI; we are providing high-fidelity, structured information that has been scrubbed of noise and errors. By the time the data reaches the legal team, it has been transformed from a raw, messy pile into a sophisticated asset that is ready for legal analysis.
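The pre-LLM hygiene pass described above — collect, deduplicate, clean, enrich — can be sketched in a few lines. This is a toy stand-in for what a Databricks pipeline would do at scale, with stdlib Python in place of the real platform; the field names (`doc_id`, `text`) and the shape of the reference data are assumptions for illustration.

```python
import re

def clean_and_enrich(records, reference_by_id):
    """Hygiene pass before any model sees the data: drop duplicates and
    records missing key fields, normalise whitespace, then join in
    reference data so each record arrives enriched rather than raw."""
    seen, cleaned = set(), []
    for rec in records:
        doc_id, text = rec.get("doc_id"), rec.get("text")
        if not doc_id or not text or doc_id in seen:
            continue  # discard noise before it ever reaches an LLM
        seen.add(doc_id)
        cleaned.append({
            "doc_id": doc_id,
            "text": re.sub(r"\s+", " ", text).strip(),
            **reference_by_id.get(doc_id, {}),  # enrichment join
        })
    return cleaned
```

The ordering matters: deduplication and validation happen before normalisation and enrichment, so that no downstream step — least of all the model — is asked to compensate for "garbage" input.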

Adopting a fixed-fee model for AI-driven work represents a departure from traditional hourly billing. How does this pricing structure affect the way you scope a project’s complexity, and what are the financial trade-offs between providing automated efficiency and maintaining high-level human legal expertise?

Moving to a fixed-fee model is a matter of integrity; it would be hypocritical to sell the efficiency of AI while still clinging to an hourly billing structure that penalizes that very speed. When we scope these projects, we focus on the value of the solution rather than the time spent, which allows us to be much more transparent with the client from the outset. Typically, a client will see a specific “Applied Intelligence” line item on their invoice, which sits alongside the traditional legal work performed by other practice groups. This creates a clear financial boundary where the client pays for the bespoke technical architecture we’ve built and the expert human judgment that interprets its output. The trade-off is that we must be incredibly disciplined in our initial assessment of a project’s complexity to ensure that the fixed fee reflects the true value of the technical leap we are taking for the client.

What is your forecast for AI in the legal sector?

I believe we are entering an era where the distinction between “legal work” and “data work” will almost entirely disappear, especially in fields like structured finance and regulatory compliance. My forecast is that we will see a rapid shift where the “AI lawyers” and data scientists who currently make up specialized cohorts will become the foundational blueprint for every practice group in the firm. We will move away from using AI as a separate “tool” and instead treat it as the very infrastructure upon which all legal advice is built, allowing us to handle datasets that are currently considered “too large” to even contemplate. Ultimately, the firms that thrive will be those that can marry the cold, hard efficiency of a Databricks pipeline with the nuanced, high-stakes judgment that only an experienced attorney can provide.
