I’m thrilled to sit down with Desiree Sainthrope, a legal expert with a wealth of experience in drafting and analyzing trade agreements. As a recognized authority in global compliance, Desiree also has a keen interest in the intersection of law and emerging technologies, particularly the role of artificial intelligence in the courtroom. Today, we’ll explore how AI tools like ChatGPT are reshaping legal proceedings, from empowering self-represented litigants to introducing new risks and challenges. We’ll dive into the successes, the pitfalls, and the future of this rapidly evolving landscape.
How do you see AI tools like ChatGPT changing the way people approach representing themselves in court?
AI tools are fundamentally altering the legal landscape by giving people who can’t afford traditional legal representation a way to navigate the system. I’ve seen individuals use these tools to draft motions or research case law, which can be a game-changer in small-claims cases or other straightforward disputes. The downside is that many users don’t realize the technology’s limitations. It’s not a substitute for a trained lawyer, and over-reliance can lead to disastrous outcomes when the AI produces inaccurate or misleading information.
What’s your perspective on whether AI is making the legal system more accessible or creating more hurdles for users?
It’s a double-edged sword. On one hand, AI can democratize access by helping people who feel shut out due to financial constraints. I’ve heard of cases where individuals used AI to prepare filings and actually won, particularly in smaller disputes. On the other hand, it introduces significant hurdles because the output isn’t always reliable. When people don’t know how to verify what AI generates, they risk submitting flawed documents, which can frustrate judges and waste court resources. Accessibility is only valuable if the tools are used responsibly.
Can you share any success stories you’ve come across where AI has helped someone in a legal case?
Absolutely. I’ve encountered a few instances in small-claims courts where self-represented litigants used AI to draft arguments or organize their evidence and ended up prevailing. One case that stands out involved a tenant facing eviction who used an AI tool to structure a defense based on local housing laws. They won because the AI helped them present a clear, logical argument that the landlord couldn’t counter. These successes often hinge on the simplicity of the case and the user’s willingness to double-check the AI’s work.
What do you think separates successful uses of AI in court from those that fail?
Success often comes down to the user’s diligence. In cases where AI works, the individual typically uses it as a starting point—maybe to draft a document or brainstorm ideas—and then verifies every detail, like checking case law or statutes manually. Failures happen when people treat AI as gospel. If they don’t cross-check citations or understand the context of their case, they’re likely to submit nonsense, like fabricated cases or irrelevant arguments, which courts quickly dismiss or penalize.
What are some of the biggest risks for someone using AI to prepare legal documents without a lawyer’s guidance?
The biggest risk is inaccuracy. AI can “hallucinate,” meaning it might invent case law, misquote statutes, or provide advice that sounds plausible but is completely wrong. I’ve seen filings where entire arguments were built on nonexistent precedents, and the litigant had no idea until it was pointed out in court. This can lead to sanctions, fines, or even damage to their credibility. Without legal training, it’s hard for someone to spot these errors before they cause real harm.
How often have you seen judges react to mistakes in AI-generated filings, and what’s their typical response?
Judges are increasingly aware of AI use, and their reactions depend on the severity of the mistake. I’ve observed cases where they’ve issued warnings or admonishments for citing fake cases or submitting poorly formatted documents that scream AI involvement. In more serious instances, they’ve imposed fines or community service, especially if it seems like the litigant is wasting the court’s time. Most judges I’ve dealt with emphasize that ignorance isn’t an excuse—whether you’re a lawyer or representing yourself, you’re expected to verify your work.
Have you or someone you know ever encountered a major error from relying on AI in a legal context?
Yes, I’ve seen it firsthand with a colleague who experimented with AI to draft a motion early on, when these tools were first gaining traction. The AI cited a case that sounded legitimate, but when we checked, it didn’t exist. Thankfully, we caught it before filing, but it was a wake-up call. The lesson was clear: AI can be a helpful starting point, but you can’t skip the step of validating every single detail. It’s like using a calculator—you still need to know whether the answer makes sense.
What steps would you recommend to someone using AI for legal work to avoid costly mistakes?
First, treat AI as a rough draft, not a final product. If it gives you a case citation, look it up on a trusted legal database or court website to confirm it’s real. Second, cross-check outputs using multiple tools or sources—don’t just trust one platform. Third, if possible, consult with a legal professional, even briefly, to review critical filings. And finally, educate yourself on the basics of your case type so you can spot when something the AI suggests doesn’t add up. Verification is everything.
How common are sanctions or penalties for errors in AI-generated court filings in your experience?
They’re becoming more common as AI use grows. I’ve seen a range of penalties, from small fines to community service hours, especially in cases where the errors are blatant or repetitive. Courts are cracking down because these mistakes clog up the system and waste everyone’s time. While sanctions aren’t handed out in every instance—some judges opt for warnings first—they’re a real risk, particularly for self-represented litigants who keep submitting problematic filings without learning from prior mistakes.
What is your forecast for the role of AI in the legal system over the next decade?
I think AI will become an indispensable tool for both lawyers and self-represented individuals, but its role will depend on how we address current challenges. We’ll likely see more tailored AI platforms designed specifically for legal work, with better accuracy and built-in verification features. For the legal system to truly benefit, though, there needs to be widespread education on responsible use, alongside stricter guidelines from courts on AI-generated content. I’m optimistic, but only if we balance innovation with accountability.