Desiree Sainthrope is a leading legal authority whose work sits at the intersection of global trade compliance and the disruptive influence of emerging technologies. With a career dedicated to navigating complex regulatory frameworks and drafting high-stakes agreements, she has become a pivotal voice for attorneys grappling with the rapid integration of artificial intelligence in litigation. Her deep understanding of intellectual property and procedural nuances provides a critical lens through which we can examine how modern tools are reshaping the traditional boundaries of attorney-client privilege.
When a client decides to use a consumer AI tool like ChatGPT or Claude independently to process their case details, how does this specifically impact the legal standing of attorney-client privilege, and what precise elements within a platform’s privacy terms should counsel analyze to determine if the expectation of confidentiality has been legally negated?
The independent use of consumer AI by a client is a potential privilege landmine because it often fails the fundamental “expectation of confidentiality” test that privilege requires. In cases like United States v. Heppner, the court found that feeding case strategy into a consumer chatbot destroyed privilege because these platforms are not attorneys and owe no inherent fiduciary duty of secrecy. When I analyze a platform’s privacy policy, I look for “training” clauses: specifically, whether the company reserves the right to use inputs to improve its models, and whether human reviewers can access the prompts. If the policy states that data is shared with third parties or used for global product development, any legal expectation of privacy effectively evaporates. Counsel should also note the absence of enterprise-grade encryption or “zero-retention” agreements, because missing protections of that kind signal to a court that the client was never operating in a truly private environment.
In scenarios where a client uses AI to draft case timelines or summarize complex legal strategies without an attorney’s oversight, what are the most significant risks to the original communications, and what concrete steps can a firm take to rehabilitate a case if a waiver is suspected?
The most alarming risk is “subject matter waiver,” where a court decides that by sharing a portion of an attorney’s advice with an AI, the client has waived privilege over the entire communication or strategy related to that topic. For instance, if a client asks ChatGPT to summarize a 10-page liability assessment you wrote, the defense may argue it is now entitled to see your original, unredacted notes because the seal of secrecy was broken. To rehabilitate such a case, a firm should immediately conduct a “privilege audit” to determine exactly what was uploaded and when. You might argue that the disclosure was inadvertent under Federal Rule of Evidence 502(b), provided you took prompt steps to rectify it, though this is an uphill battle if the client acted intentionally. The most effective move is to establish quickly that the AI output was merely a personal “drafting aid” that did not disclose the attorney’s core mental impressions, while shifting the focus to whether the platform’s terms actually constituted a public disclosure in the strictest sense.
Current legal trends distinguish between AI as a “tool” and AI as a “third-party expert” operating under attorney supervision. How does this distinction shift the landscape of discovery requests, and what protocols should a firm implement to keep AI-assisted work protected?
This distinction is the difference between an unprotected “fishing expedition” and a shielded work-product environment. If the AI is used merely as a drafting tool, akin to a word processor, courts like the one in Warner v. Gilbarco are hesitant to allow discovery, because doing so would nullify work-product protection for almost all modern digital drafting. However, if the AI is treated as an expert or a collaborator, the Heppner ruling suggests that attorney direction is the “magic ingredient” that maintains privilege. To protect these workflows, firms must move away from consumer-grade bots and toward dedicated legal AI platforms that let users opt out of model training and that maintain SOC 2 compliance. You should also issue formal, written “AI Instruction Memoranda” to clients, explicitly directing them to use only approved, firm-vetted tools for specific tasks, thereby framing the AI as an extension of the legal team rather than an outside third party.
Given that corporate defendants are increasingly utilizing AI to analyze internal investigations or evaluate liability, how can plaintiff firms use the Heppner logic to strategically uncover these materials during discovery?
Plaintiff firms should be playing offense by specifically targeting AI-generated summaries and prompts in their Requests for Production. Because corporate employees often turn to “convenience” tools, such as a standard ChatGPT account used to summarize a long internal meeting, they may unknowingly waive the corporation’s privilege if counsel did not directly supervise that specific session. I recommend drafting discovery requests that ask for “all prompts, inputs, and outputs generated by generative AI tools related to the subject matter of the investigation.” If you can show that an HR manager or a mid-level executive used a consumer-grade tool without a formal legal hold or attorney supervision on that platform, the Heppner precedent becomes a powerful lever to argue that those internal liability assessments are no longer privileged and must be produced.
Beyond updating engagement letters, what specific questions should be added to the intake process to catch prior AI usage, and how can attorneys explain the risks of these “convenience” tools to clients who are anxious about their case?
The intake process needs to be far more granular than just asking about social media; we now need to ask, “Have you used any AI tool, chatbot, or digital assistant to research your injuries or summarize any documents you received?” Many clients don’t see ChatGPT as a “disclosure”; they see it as a more advanced Google search. You have to explain the risk in vivid, relatable terms: typing case details into a chatbot is like shouting them in a crowded coffee shop where the barista is recording everything to sell to a research firm. I tell clients that while these tools feel like private assistants, they are actually “data vacuums” that can hand the defense a roadmap to defeat their claim. It’s about emphasizing that their “private” 2 a.m. venting session with an AI could be read aloud by opposing counsel in a deposition.
What is your forecast for the future of legal privilege as consumer AI becomes more integrated into daily communication?
I anticipate a massive “privilege correction” over the next three to five years, in which courts will be forced to draw much sharper lines between consumer-grade and enterprise-grade technology. We will likely see a new body of case law that treats AI prompts the way courts have treated privilege-waiving emails forwarded to third parties, along with a surge in malpractice claims against attorneys who failed to warn their clients about these tools. Ultimately, the burden of maintaining privilege will shift almost entirely to the attorney’s ability to police clients’ digital habits. If we don’t treat AI usage with the same caution as talking to the press, the traditional sanctuary of attorney-client confidentiality will become a relic of the pre-digital age.
