Have you, your paralegal, your client, your expert, etc. used public generative artificial intelligence (“AI”) to perform research while preparing for a USPTO invalidity proceeding as a challenger or patent owner? For prior art, for claim construction, or for legal issues? Did you use any of the public generative AI work product in your USPTO pleadings or expert declarations?
If so, then you may be wondering how public generative AI is treated in the context of attorney-client privilege and the work product doctrine. In other words, are materials generated with public generative AI unprotected and discoverable in related or future litigation?
Two federal district courts have recently addressed these issues in non-IP contexts, reaching opposite conclusions: one court found the materials protected, while the other did not. We believe the facts and analysis in both cases are instructive for USPTO practitioners.
This article predicts how these court decisions may make their way into USPTO practice. It first summarizes the current case law and then offers best-practice suggestions for how practitioners, their experts, and their clients may approach USPTO validity challenges without waiving privilege or work product protection and opening the door to discovery.
Finally, we offer a warning from at least one federal appeals court: a zero-tolerance policy on hallucinations and misrepresentations.
Warner and Heppner
In Warner v. Gilbarco (“Warner”), Magistrate Judge Anthony P. Patti of the U.S. District Court for the Eastern District of Michigan ruled that work product protection applied to materials generated by a public generative AI platform in the course of litigation.[1] The Court characterized generative AI programs as “tools, not persons” and reasoned that if uploading information onto an AI platform waived protection, this “would nullify work-product protection in nearly every modern drafting environment, a result no court has endorsed.”[2]
In U.S. v. Heppner (“Heppner”), also discussed in our recent IP Hot Topic, Judge Rakoff of the U.S. District Court for the Southern District of New York held that documents a defendant had generated using a publicly available AI platform were protected by neither the attorney-client privilege nor the work product doctrine.[3]
Exploring the circumstances of the cases might shed some light on the divergent outcomes.
A key distinction between Warner and Heppner is the role of the “party” and its interactions with AI. Warner involved a pro se litigant using public generative AI to prepare documents for her employment-discrimination case. Recognizing the pro se plaintiff as a “party” under Federal Rule of Civil Procedure (FRCP) 26(b)(3)(A), the Court treated the plaintiff’s interactions with the AI platform as her own mental impressions and deemed the resulting AI output materials prepared in anticipation of litigation.[4] In that limited context, the Court characterized “generative AI programs” as “tools, not persons.”[5]
In contrast, Heppner featured AI-generated defense strategy “reports” that the client prompted on a publicly available AI platform “[w]ithout any suggestion from counsel that he do so.”[6] Counsel did not supervise the client’s use of AI to generate these reports.[7] Even assuming the AI-generated reports were prepared in anticipation of litigation, the Court found that they were “not prepared by or at the behest of counsel.”[8]
Another difference between Warner and Heppner concerns the courts’ treatment of AI platform privacy policies and terms of service. In Warner, the Court adhered to the traditional perspective that work product is waived only by disclosure to an adversary or in a manner likely to reach an adversary.[9] Taking this approach, the Court did not analyze or rely on any AI privacy policy or terms of service.[10]
By contrast, in Heppner, the Court explicitly considered the AI platform’s privacy policy, noting that the platform “reserves the right to disclose” user data to third parties, including “governmental regulatory authorities.”[11] Given that potential for disclosure, the Court concluded that the defendant had no “reasonable expectation of confidentiality.”[12]
Predicting the Impact of Warner and Heppner on USPTO Validity Challenges, and a Warning About Failing To Detect Hallucinations
While federal courts have characterized AI differently—as a tool in some contexts and as a third-party disclosure in others—practitioners should proceed with caution.
Warner should not be considered a safe harbor for practitioners because its factual posture diverges sharply from Patent Trial and Appeal Board (PTAB) or Central Reexamination Unit (CRU) post-grant practice at the USPTO. First, unlike the pro se posture in Warner, post-grant practice involves coordinated contributions from multiple actors, including the practitioner framing the legal strategy, the expert witness supplying technical opinions, and paralegal staff assisting in the production of submitted documents. Introducing generative AI into any one of these roles raises attribution and privilege issues that were absent in Warner’s solo-plaintiff setting. Second, in reexamination and other post-grant proceedings, practitioners formulate invalidity assessments, claim constructions, and prior art mappings that ultimately become public record. Courts may view generative AI output used in preparing a petition or expert declaration not as “internal analysis and mental impressions,” but as third-party reasoning. Once AI-generated analysis is incorporated into USPTO filings, the user’s interactions with the AI platform may be considered fair game for discovery.
The USPTO has publicly emphasized that existing PTAB/Trademark Trial and Appeal Board rules continue to apply when AI is used and has noted the risk that AI-assisted submissions may contain hallucinated or inaccurate content—making verification and record integrity salient considerations when AI-assisted material is used in a Board filing.[13] Separately, the USPTO’s Federal Register guidance anticipates AI use in PTAB filings and underscores human governance and risk mitigation in practice before the Office, including confidentiality-sensitive workflows.[14] Taken together, invoking a Warner-like “AI as tool” framing in a PTAB context would likely prompt these practical questions: (i) who supervised the AI use; (ii) what controls were used to reduce the chance of disclosing sensitive information; and (iii) what steps were taken to validate accuracy before incorporating AI-assisted analysis into the petition/declaration that becomes part of the public record.
The importance of supervision over AI use is especially clear in view of Whiting v. City of Athens, Tennessee, recently decided by the U.S. Court of Appeals for the Sixth Circuit. In that decision, the Court held that the inclusion of improper AI-based arguments in a brief is a sufficient basis to dismiss an appeal.[15] Specifically, the Court noted that “[citing] even a single fake case can be sanctionable because ‘no brief, pleading, motion, or any other paper filed in any court should contain any citations—whether provided by generative AI or any other source—that’ a lawyer has not personally ‘read and verified.’”[16]
Additionally, practitioners should be aware that courts do not perceive a significant privacy interest in user interactions with AI platforms. In the recent case of In re OpenAI, Inc., Copyright Infringement Litigation, the U.S. District Court for the Southern District of New York held that there are reduced “privacy interests in users’ conversations with ChatGPT which users voluntarily disclosed to OpenAI and which OpenAI retains in the normal course of its business.”[17] While this case did not concern AI-generated work product for legal arguments, it highlights the courts’ willingness to compel disclosure of user interactions with AI.
Against this backdrop, Heppner provides a more realistic, cautionary framework for practitioners. Incorporating AI-generated reasoning into legal analysis without counsel’s guidance jeopardizes privilege and work product protection. Heppner suggests that interactions with publicly available AI platforms to develop invalidity arguments for petitions and expert declarations are unlikely to receive protection under the attorney-client privilege or the work product doctrine.
In April 2024, the USPTO issued a Request for Comments seeking public input on how the proliferation of AI may affect prior art, the knowledge of a person having ordinary skill in the art, and related patentability determinations—reflecting the Office’s recognition that AI raises unresolved questions about record integrity, provenance of technical analysis, and downstream legal effects.[18] Viewed in that light, one can hypothesize that a Heppner-like fact pattern (e.g., a client or expert uses a public AI system and incorporates the output into a filing) could draw scrutiny in USPTO-related disputes or later parallel litigation, particularly where parties contest the origin or reliability of invalidity theories, claim constructions, or prior-art mappings. In that scenario, an opponent might invoke the USPTO’s expressed concerns to argue that AI prompts, outputs, or logs are relevant information and were not developed under a counsel-directed workflow—an argument that would echo themes present in Heppner’s privilege and work-product analysis.
Conclusion
It is imperative that practitioners, experts, and clients use AI and generative AI to enhance the efficiency of their practices. But they must do so with care, in view of their ethical obligations to preserve privilege and work product protection and to avoid hallucinations and mischaracterizations.
These cases are beginning to show that, to shield research from discovery, one must either be the attorney forming theories for a case or be working under that attorney’s direction.
[1] See Warner v. Gilbarco, Inc., No. 2:24-CV-12333, 2026 WL 373043 (E.D. Mich. Feb. 10, 2026).
[2] Id. at *3-4.
[3] See United States v. Heppner, No. 25 CR. 503 (JSR), 2026 WL 436479 (S.D.N.Y. Feb. 17, 2026).
[4] See Warner, 2026 WL 373043, at *3-4.
[5] Id. at *4.
[6] See Heppner, 2026 WL 436479, at *3.
[7] Id. at *9-10 (“Heppner’s counsel confirmed that the AI documents were prepared by the defendant on his own volition.”).
[8] Id.
[9] See Warner, 2026 WL 373043, at *3-4.
[10] Id.
[11] Id. at *6-7.
[12] Id. at *6-7.
[13] See Director Vidal, “The Applicability of Existing Regulations as to Party and Practitioner Misconduct Related to the Use of Artificial Intelligence,” https://www.uspto.gov/sites/default/files/documents/directorguidance-aiuse-legalproceedings.pdf (Feb. 6, 2024).
[14] Guidance on Use of Artificial Intelligence-Based Tools in Practice Before the United States Patent and Trademark Office, 89 Fed. Reg. 25609-01 (April 11, 2024).
[15] Whiting v. City of Athens, Tenn., No. 24-5918, 2026 WL 710568 (6th Cir. Mar. 13, 2026).
[16] Id. at *4 (citing Noland v. Land of the Free, L.P., 114 Cal.App.5th 426, 336 (2d Dist. Sept. 12, 2025)).
[17] In re OpenAI, Inc., Copyright Infringement Litig., No. 25-MD-3143 (SHS) (OTW), 2026 WL 21676, at *2 (S.D.N.Y. Jan. 5, 2026).
[18] Request for Comments Regarding the Impact of the Proliferation of Artificial Intelligence on Prior Art, the Knowledge of a Person Having Ordinary Skill in the Art, and Determinations of Patentability Made in View of the Foregoing, 89 Fed. Reg. 34217-02 (April 30, 2024).