Is SRA Guidance Needed for AI in UK Legal Practice?

I’m thrilled to sit down with Desiree Sainthrope, a legal expert renowned for her work drafting and analyzing trade agreements. With a strong background in global compliance and a keen interest in emerging technologies like artificial intelligence, Desiree offers a unique perspective on how AI is reshaping the legal landscape. In this interview, we’ll explore the growing role of AI in legal practice, the ethical challenges it presents, the regulatory gaps that need addressing, and the practical risks solicitors face in maintaining their duty of competence amid rapid technological change.

How do you see AI tools being integrated into the daily work of law firms right now, and what’s driving their adoption?

AI tools are becoming a staple in many law firms, especially for time- and data-intensive tasks. Think contract analysis, legal research, and even drafting routine correspondence like client letters. The drive behind this adoption is twofold: efficiency and competition. Firms are under pressure to deliver faster results while keeping costs down, and AI can crunch through volumes of data in a fraction of the time a human would take. Plus, clients, especially corporate ones, expect their legal teams to leverage cutting-edge tech. It’s almost a badge of credibility now to say you’re using AI-driven solutions.

What do you think are the most significant gaps in understanding among solicitors when it comes to the capabilities and limitations of AI tools?

There’s a real mismatch between perception and reality. Many solicitors view AI as a magic bullet—plug in a query, get a perfect answer. But these tools aren’t infallible. They can miss nuances, pull outdated information, or misinterpret context, especially with complex legal issues. I’ve seen cases where solicitors don’t fully grasp that AI outputs often need rigorous vetting. There’s also a lack of awareness about how these systems are trained or what biases might be baked into them. Without that understanding, it’s easy to over-rely on the tech and underuse critical judgment.

Why do you believe the Solicitors Regulation Authority has been slow to provide specific guidance on AI use in legal practice?

I think it’s a mix of caution and complexity. The SRA likely wants to avoid issuing premature guidance that could stifle innovation or become outdated as AI evolves. Crafting rules for something as dynamic and opaque as AI isn’t straightforward—there’s a risk of being too vague or overly prescriptive. They might also be waiting to see how other jurisdictions handle it or for a critical incident to force their hand. But this hesitation leaves solicitors in a tough spot, navigating uncharted waters without a clear map.

How does the absence of regulatory guidance from the SRA impact solicitors in their everyday decision-making?

It creates a lot of uncertainty. Solicitors are left guessing about how much they need to supervise AI outputs or where their accountability begins and ends. Without clear benchmarks, you might have one firm double-checking every AI-generated document meticulously while another takes a more lax approach, leading to inconsistent standards. It also puts pressure on individual lawyers to create their own protocols, which can vary widely in effectiveness and expose them to risks like negligence claims if something goes wrong.

In terms of the duty of competence, how should solicitors balance the use of AI with their obligation to provide accurate and reliable advice?

The duty of competence doesn’t change just because you’re using AI—it’s still about delivering a proper standard of service. Solicitors need to treat AI as a tool, not a replacement for their expertise. That means understanding what the tool can and can’t do, verifying its outputs, and applying their own legal reasoning. For example, if AI drafts a contract, they should scrutinize it for errors or inappropriate clauses. It’s about maintaining control over the process and ensuring that the client’s interests always come first, regardless of how the work gets done.

What practical steps can law firms take to minimize risks when using AI for tasks like legal research or document drafting?

Firms need to build robust checks and balances. First, training is key—solicitors at all levels should understand the tools they’re using, including their potential pitfalls. Second, implement a review process where AI outputs are vetted by a human, especially for high-stakes tasks like client-facing documents or critical research. Third, keep a clear audit trail of how AI was used in a given piece of work, so if an error pops up, you can trace it back. Finally, firms should consider working with AI vendors who offer transparency about how their tools function and provide support for troubleshooting.
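To make that third point concrete, here is a minimal sketch of what an AI audit-trail record could look like. This is purely illustrative: the field names, the `log_ai_usage` helper, and the vendor name are hypothetical assumptions for this example, not a standard drawn from any firm’s actual system or from SRA requirements.

```python
# A minimal, hypothetical sketch of an audit-trail record for AI-assisted work.
# Field names and storage format are illustrative assumptions, not a standard.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIUsageRecord:
    matter_ref: str        # internal matter/file reference
    tool_name: str         # which AI tool produced the output
    task: str              # e.g. "contract review", "legal research"
    prompt_summary: str    # what the tool was asked to do
    output_location: str   # where the raw AI output is stored
    reviewed_by: str       # the solicitor who vetted the output
    review_outcome: str    # e.g. "accepted", "amended", "rejected"
    timestamp: str         # ISO 8601, UTC

def log_ai_usage(record: AIUsageRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append one usage record as a JSON line so errors can be traced later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example entry for an AI-drafted document that a solicitor then amended.
log_ai_usage(AIUsageRecord(
    matter_ref="M-2024-0173",
    tool_name="ExampleDraftAI",  # hypothetical vendor
    task="first draft of supply agreement",
    prompt_summary="Draft supply agreement from firm template and term sheet",
    output_location="dms://M-2024-0173/drafts/v1",
    reviewed_by="A. Solicitor",
    review_outcome="amended",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```

Even a lightweight record like this captures who used which tool, for what, and who signed off, which is the substance of the traceability Sainthrope describes.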

When AI tools miss critical details or produce errors, how should responsibility be assigned between the solicitor and the technology provider?

At the end of the day, the solicitor bears the professional responsibility. Clients hire lawyers, not software, and our duty to them doesn’t get outsourced just because we used a tool. That said, if a provider markets an AI tool as reliable for a specific task and it consistently fails, there’s a case for holding them accountable, perhaps through contractual terms or consumer protection laws. But regulators and firms need to clarify these boundaries. Right now, it’s a gray area, and solicitors are often left holding the bag when things go south.

Looking at other jurisdictions, what can the SRA learn from how professional bodies like the American Bar Association are addressing AI in legal practice?

The American Bar Association has taken a proactive stance by issuing ethics opinions that emphasize understanding and supervising AI tools. They’ve made it clear that lawyers must ensure the tech they use aligns with their professional obligations. The Law Society of Ontario is also pushing for updated competence frameworks to account for emerging tech. What the SRA can learn here is the value of setting expectations early, even if it’s just non-binding guidance. It gives practitioners a baseline to work from and signals that the regulator is engaged with the issue, which builds trust and consistency across the profession.

What is your forecast for how AI will shape the future of legal practice over the next decade?

I see AI becoming even more integrated, moving beyond routine tasks to assist with complex analysis and strategy, like predicting case outcomes or optimizing negotiation tactics. But with that comes heightened ethical and regulatory challenges. I expect we’ll see more tailored guidance from bodies like the SRA as the technology matures and its risks become clearer. There’s also likely to be a push for greater transparency in AI systems, so lawyers can better understand what they’re working with. Ultimately, AI will amplify what solicitors can do, but only if we pair it with strong oversight and a commitment to upholding our core duties.
