Court Rules Public AI Waives Attorney-Client Privilege

Today we’re speaking with Desiree Sainthrope, a legal expert with deep experience in global compliance and the evolving impact of technology on the legal profession. We’ll be diving into a recent, groundbreaking federal court decision that is sending ripples through the legal community. This ruling, the first of its kind, directly confronts the question of attorney-client privilege when public artificial intelligence tools are used in case preparation. We will explore the critical distinction between open and closed AI systems, the new and heightened responsibility for lawyers to guide their clients on technology use, and the practical steps firms must now take to safeguard one of the most sacred principles in law.

Judge Rakoff’s ruling hinged on the idea that using a public AI tool means no “reasonable expectation of confidentiality.” How does this change the way law firms advise clients on technology use, and what specific, practical steps should clients now take to protect their sensitive discussions?

This ruling transforms our advice from cautionary tales to concrete directives. Previously, we would warn about the risks of using public AI; now, we can point to a judicial opinion that confirms those risks are real and can obliterate privilege. The core issue is that when you type a query into an open AI system like the one Mr. Heppner used, you’re not having a private conversation. You are sending your data to a third party whose privacy policy explicitly states they can collect, retain, and even share that information. The practical steps for clients are now crystal clear: First, they must treat any interaction with a public digital tool as if it were a public conversation. Second, all case-related research or communication involving new technology must be discussed with counsel first. And third, they need to understand that convenience is the enemy of confidentiality—that free, open tool is almost certainly using their data in ways that are incompatible with legal privilege.

The case distinguished between open and closed AI systems. If a client, acting under an attorney’s direct instruction, used a private enterprise AI for case research, how might the work product doctrine apply, and what would that client-attorney collaboration look like step-by-step?

This is precisely where the analysis gets more nuanced and where the work product doctrine could come into play. If the facts were different—if the attorneys at Quinn Emanuel had directed Mr. Heppner to use a secure, closed AI system like ChatGPT Enterprise—the outcome might have been entirely different. In that scenario, the client isn’t just randomly communicating with a third party; they are acting as an agent of the attorney, using a specific tool for the explicit purpose of preparing for litigation. The collaboration would need to be meticulously documented. Step-by-step, it would look like this: The attorney would first vet and approve a specific, private enterprise AI platform. They would then provide the client with a clear, written directive outlining the scope of the research and the exact tool to use. The client would conduct the research within that secure environment, and the resulting analysis, or “work product,” created in anticipation of litigation, would have a very strong argument for protection.

One expert suggested this ruling was a “no-brainer,” given how casually people interact with digital tools. What are the most common tech-related mistakes you see clients make that risk confidentiality, and how do you proactively educate them on secure communication practices before an issue arises?

The “no-brainer” comment hits the nail on the head because it reflects a deep-seated cultural habit. People have become so accustomed to speaking to their phones or typing thoughts into search bars that the line between private thought and public communication has completely blurred. The most common mistake I see is clients using personal email accounts or consumer-grade messaging apps to discuss sensitive case details, assuming they are secure. Another is using public Wi-Fi to access confidential documents. To get ahead of this, education must start at the very beginning of the attorney-client relationship. We now include a “digital communication protocol” in our initial engagement letters, explicitly forbidding the use of public AI for case matters and outlining approved, secure channels for all communications. We have to reset their expectations and impress upon them that, in a legal context, there is no such thing as a casual digital conversation.

This decision appears to strengthen the role of legal counsel in managing technology. In what ways does this change the dynamic between attorney and client regarding case research, and what new AI usage protocols or policies should firms immediately consider implementing to maintain privilege?

Absolutely, this ruling places the attorney firmly in the driver’s seat as the “Chief Technology Guide” for their client’s legal matters. The dynamic is no longer just about the client providing facts; it’s about the attorney actively directing the process by which those facts are researched and analyzed, especially when technology is involved. It reinforces that legal strategy and technological practice are now inseparable. Firms need to immediately implement clear, written AI usage policies. These protocols should, at a minimum, create a whitelist of approved, secure, closed-AI systems and a blacklist of all public-facing generative AI tools for any case-related work. The policy must also mandate that any use of an approved AI tool by a client must be done only under the express, documented instruction of their attorney to have any hope of protection under the work product doctrine.

What is your forecast for how attorney-client privilege will continue to evolve and be tested as generative AI becomes more deeply integrated into both corporate and personal life?

My forecast is that we are at the very beginning of a long and complex series of legal challenges that will redefine the boundaries of privilege. As generative AI becomes as common as a search engine, we will see disputes move beyond this initial question of open versus closed systems. The next frontier will involve questions like: If an attorney uses a sophisticated closed AI to analyze privileged documents, is the AI’s output itself a new, privileged work product, or is it merely a summary of existing information? If a company’s internal AI, trained on years of confidential data, is used in litigation, can an opposing party demand to know its training data or algorithms? The courts will be forced to draw increasingly fine lines, and the legal profession will have to become far more technologically literate to even ask the right questions, let alone argue them effectively.
