Waiving goodbye to privilege? The hidden risks of AI use
When a 2024 survey asked US in-house counsel which category of artificial intelligence (AI) tools posed the greatest risk of compromising attorney-client privilege, 49% chose generative AI (tools such as ChatGPT). On 10 February, the ruling of Judge Jed Rakoff in United States v Heppner crystallised the fears of in-house counsel.
One of the key issues considered by Judge Rakoff was whether the 31 documents that Heppner generated using an AI tool were protected by either attorney-client privilege or the work product doctrine. The work product doctrine can protect materials prepared in anticipation of litigation from disclosure to opposing parties. Judge Rakoff ruled that the documents were protected by neither.
Background
Bradley Heppner was arrested in November 2025. Following his arrest, Heppner used an AI chatbot to prepare reports outlining his defence strategy and potential arguments. These reports were subsequently shared with counsel.
When the US government sought production of the documents, Heppner argued that they were subject to both attorney-client privilege and work product protection. Judge Rakoff rejected both arguments.
Why privilege failed
Judge Rakoff held that sharing information with a public AI chatbot was inconsistent with the confidentiality requirements of privilege. Heppner had shared information with a third-party AI chatbot whose terms expressly provided that data inputs were not confidential. The US government also cited the AI provider's privacy policy, which allowed for disclosure to third parties, including governmental organisations.
The work product claim also failed: Heppner used AI to create the documents of his own volition, rather than at the direction of counsel.
Implications
Generative AI presents a shift in the management of confidentiality, primarily because many models learn from user inputs. Sensitive data entered into a public AI model may be retained and used to train future versions of that model, with the potential to resurface in a response to another user's query. Inputs may also be accessible to third parties: they are stored on the service provider's servers and may be reviewed by employees or partners for auditing or safety purposes, for example, and they remain at risk of malicious attacks.
Whilst concerning the contemporary topic of generative AI, the ruling in Heppner relied on long-established third-party disclosure principles: the outcome was therefore predictable. The case also involved a narrow set of facts: a criminal defendant using a public AI system to generate documents without the direction of counsel.
Conclusion: England and Wales
At present, there is limited clarity on the application of privilege to interactions with generative AI, and Heppner offers a useful insight. Whilst Heppner was heard in the US, the case poses a stark warning to practitioners everywhere. How the courts of England and Wales will approach the same subject remains to be seen.
The contents of this article are intended for general information purposes only and shall not be deemed to be, or constitute legal advice. We cannot accept responsibility for any loss as a result of acts or omissions taken in respect of this article.