
AI, Work Product, and the Attorney-Client Privilege


 

Introduction: The Unavoidable Integration of Generative AI and the Emergent Challenges to Core Litigation Protections

 

The legal profession stands at a technological inflection point. The rapid proliferation and adoption of generative artificial intelligence (AI) present opportunities for efficiency and analytical power previously unimaginable. From legal research and document analysis to the drafting of pleadings, AI is poised to revolutionize the mechanics of litigation practice. But this transformative potential is accompanied by profound and emergent challenges to the bedrock principles that protect the sanctity of the attorney-client relationship: the attorney-client privilege and the work product doctrine.


The integration of these powerful tools is no longer a question of “if” but “how.” As legal technologists and practitioners alike express both excitement and significant concern, it is incumbent upon the profession to develop a rigorous framework for navigating this new terrain. This report provides guidelines for litigators confronting the novel discovery and ethical risks posed by AI-assisted legal workflows. Its purpose is to equip practitioners with the doctrinal understanding and strategic protocols necessary to harness the benefits of AI while upholding their fundamental duties to protect client confidences and preserve the strategic integrity of their work. Adherence to a disciplined, principles-based framework is not merely advisable; it is essential for competent and ethical practice in the modern litigation environment.

 

The Doctrinal Framework: Privilege and Work Product in the Age of Artificial Intelligence

 

The effective use of generative AI in litigation demands a foundational understanding of how this technology interacts with the core legal doctrines of attorney-client privilege and work product protection. While related, these protections have distinct elements, scopes, and vulnerabilities, and the choice of an AI platform can have dramatically different consequences for each.

 

The Attorney-Client Privilege: Elements and the Absolute Threat of Third-Party Waiver

 

The attorney-client privilege is the oldest of the privileges for confidential communications known to the common law, designed to encourage full and frank communication between attorneys and their clients. For the privilege to attach and protect a communication from disclosure, four elements must be satisfied: (1) a communication must occur; (2) between privileged persons (e.g., an attorney, a client, or their agents); (3) it must be made in confidence with a reasonable expectation of privacy; and (4) its primary purpose must be for the provision or solicitation of legal advice.


The primary point of failure—the critical vulnerability—for litigators using generative AI lies in the third element: confidentiality. It is a black-letter rule of law that the voluntary disclosure of privileged information to a third party generally effectuates a complete waiver of the privilege. In the context of AI, a commercial vendor of a generative AI platform is unequivocally a “third party.” Consequently, when a lawyer inputs confidential client information into a consumer-grade or public-facing AI tool whose terms permit the vendor to review or use that data, the act constitutes a direct disclosure to a third party. This disclosure destroys the reasonable expectation of privacy and, with it, the attorney-client privilege.


While a narrow “communicating agent” or “functional equivalent” exception exists for third parties whose involvement is necessary for the attorney to render legal advice (e.g., a foreign language interpreter or a forensic accountant), courts are unlikely to extend this protection to general-purpose AI platforms. The distinction between a tool of necessity and a tool of convenience is dispositive. An AI platform used to summarize documents is a convenience, not a necessity for the provision of legal advice. While one may foresee a future argument where a highly specialized, secure AI model is the only tool capable of analyzing a unique and voluminous dataset (e.g., proprietary source code), making it arguably indispensable, this remains an untested legal frontier. For now, the third-party waiver rule presents an absolute and immediate threat to privilege when using improper AI tools.


The Work Product Doctrine: Tiers of Protection and the Adversarial Disclosure Standard

 

The work product doctrine, rooted in Hickman v. Taylor, 329 U.S. 495 (1947), and partially codified at Federal Rule of Civil Procedure 26(b)(3), protects documents and tangible things “prepared in anticipation of litigation or for trial.” Its purpose is to protect the adversarial system by allowing attorneys to prepare their cases without fear that their strategic thinking will be exploited by opponents.

Critically, the waiver standard for work product is fundamentally different from that of attorney-client privilege. Work product protection is waived only when the material is disclosed to an adversary or in a manner that substantially increases the likelihood of adversarial access. Disclosure to a neutral, non-adversarial third party—such as a technology vendor under a confidentiality agreement—does not automatically waive the protection.


The doctrine provides for two distinct tiers of protection, a distinction that has become central to the analysis of AI-generated materials:


  • Fact Work Product: This category includes factual information gathered by counsel in anticipation of litigation. It is afforded a qualified protection and may be subject to discovery if the opposing party can demonstrate a “substantial need” for the materials and an “undue hardship” in obtaining their substantial equivalent by other means.


  • Opinion Work Product: This category encompasses the “mental impressions, conclusions, opinions, or legal theories of a party’s attorney.” It is considered the core of the doctrine and is afforded near-absolute protection, discoverable only in the rarest of circumstances.

 

The Critical Distinction: Why AI Poses a Greater and More Immediate Threat to Privilege than to Work Product

 

The divergent waiver standards of these two doctrines create a clear hierarchy of risk for litigators using AI. The use of an improper AI tool with confidential client data presents an immediate, binary, and likely fatal threat to the attorney-client privilege. The disclosure to the vendor is a singular event that waives the privilege.


In contrast, the threat to work product is more nuanced and manageable. Because the AI vendor is not an adversary (and arguably akin to a consulting expert when the query extends beyond simple command prompts), disclosure of work product to the platform does not, in itself, waive the protection. The risk shifts from the fact of disclosure to the nature of the material disclosed and the potential for future discovery disputes. A simple, fact-based prompt (e.g., “Summarize this document”) will likely be categorized as fact work product, vulnerable to a “substantial need” argument. A sophisticated, theory-infused prompt, however, may be shielded as opinion work product. This doctrinal distinction dictates that a litigator’s risk-management strategy must be two-fold: first, to prevent privilege waiver by selecting a secure platform; and second, to protect work product by strategically framing all AI interactions to generate opinion work product.

 

The Battleground of Discovery: Judicial Treatment of AI-Generated Materials

 

As AI tools become embedded in litigation practice, discovery disputes over AI-generated materials are providing a new and evolving body of case law. Recent decisions offer critical guidance on how courts are applying traditional work product principles to this novel form of evidence.


Tremblay v. OpenAI: Establishing the “Opinion Work Product” Shield for Strategic Prompts

 

The decision in Tremblay v. OpenAI, Inc., No. 23-cv-03223-AMO, 2024 U.S. Dist. LEXIS 141362, 2024 WL 3748003 (N.D. Cal. Aug. 8, 2024), represents a watershed moment in the judicial treatment of AI-related discovery. In that case, plaintiffs used ChatGPT for pre-suit investigation and included some of the AI-generated summaries in their complaint. See id. at *5. The defendant, OpenAI, sought to compel the production of all prompts used by plaintiffs’ counsel, including “negative test results” that did not support their claims. Id.


Initially, a magistrate judge granted the motion to compel, classifying the materials as discoverable “fact work product” and finding that plaintiffs had waived protection by putting their investigation “at issue.” Id. at *6. This initial ruling serves as a stark warning, providing a ready-made argument for adversaries seeking broad discovery of AI research.

On review, however, U.S. District Judge Araceli Martínez-Olguín reversed the order in a decision that provides a foundational roadmap for protecting AI-assisted work. The court reclassified the interactions as “opinion work product,” holding that the magistrate judge’s contrary classification rested on a misapplication of the law and was therefore clear error. Id. at *7-9. The court’s reasoning was dispositive: “the ChatGPT prompts were queries crafted by counsel and contain counsel’s mental impressions and opinions about how to interrogate ChatGPT.” Id. at *7. Three key principles flow from the court’s ruling:


1.    Strategic Prompts as Opinion Work Product: Carefully constructed AI prompts that reflect an attorney’s strategic thinking, legal theories, and case strategy can constitute opinion work product and are entitled to near-absolute protection. Id. at *8-9.


2.   No Broad Subject-Matter Waiver: The inclusion of some AI-generated outputs in a pleading does not automatically waive work product protection for all related research, prompts, and negative results. The scope of waiver must be narrowly tailored. Id.


3.   Process vs. Fruits: The process of AI interrogation—the iterative series of questions and refinements reflecting counsel’s strategy—can remain protected even if some of its fruits (the final outputs) must be disclosed. Id.

 

Concord Music and Prakash: Reinforcing the Fact/Opinion Dichotomy and Waiver Principles

 

Subsequent cases have reinforced the logic of Tremblay. In Concord Music Grp., Inc. v. Anthropic PBC, No. 24-cv-03811-EKL, 2025 U.S. Dist. LEXIS 99068 (N.D. Cal. May 23, 2025), the court, citing Tremblay, agreed that prompts and related settings could constitute attorney work product. See id. at *8. It denied the defendant’s broad request to compel all of the plaintiffs’ undisclosed prompts and outputs, finding the request overbroad and not “closely tailored to the needs of the opposing party.” Id. at *10. This decision underscores that a “sword and shield” waiver argument requires a specific nexus between the material disclosed in a pleading and the undisclosed research being sought; it is not a key to unlock all of counsel’s research files.


While not an AI case, SEC v. Vidul Prakash, No. 23-cv-03300-BLF(SVK), 2025 U.S. Dist. LEXIS 42112 (N.D. Cal. Mar. 7, 2025), provides a powerful analogue for the procedure courts are likely to employ in these disputes. Faced with a request for SEC staff interview notes, the court first determined the notes were work product. It then found that the defendant had shown a “substantial need” for the purely factual component of the notes because the witness could no longer recall the substance of the interview. Id. at *6. Rather than ordering wholesale production, the court ordered an in camera review to separate discoverable fact work product from protected opinion work product. Id. This is the mechanism litigators should anticipate courts will use to resolve disputes over AI prompts, carefully examining them to disentangle factual inputs from counsel’s protected mental impressions.

 

The Preservation Mandate: Judicial Expectations for AI Interaction Logs and the In re: OpenAI Copyright Litig. Order

 

Litigators must also be aware of a court’s intolerance for the destruction of potentially relevant evidence. In the multidistrict copyright litigation In re: OpenAI, Inc. Copyright Litig., No. 23-cv-11195 (S.D.N.Y. May 13, 2025), the court issued an order directing OpenAI to “preserve and segregate all output log data that would otherwise be deleted on a going forward basis.” Id. at 2. This order was issued despite OpenAI’s arguments regarding user deletion requests and data privacy regulations. See id. at 1-2. This decision signals clearly that from the moment litigation is reasonably anticipated, AI interaction logs must be treated as potentially discoverable evidence subject to a legal hold. The routine deletion of such data, whether by the user or the vendor, will not be excused.


Summary of Key Holdings and Implications from Recent Case Law

Tremblay v. OpenAI (N.D. Cal. 2024)

  • Key Holding: Prompts reflecting counsel’s legal theories and mental impressions are protected opinion work product.

  • Practical Implication for Litigators: Frame prompts as strategic inquiries, not simple commands, to maximize protection.

  • Opposing Counsel’s Counterargument: Argue that prompts are mere factual inputs and that disclosure of any output waives protection over all related inputs (citing the initial magistrate ruling).

Concord Music Grp. v. Anthropic (N.D. Cal. 2025)

  • Key Holding: Broad requests for all undisclosed prompts and outputs are overbroad; waiver must be narrowly tailored to what was put “at issue.”

  • Practical Implication for Litigators: Argue against broad subject-matter waiver; produce only the specific prompt-output threads relied upon in pleadings or motions.

  • Opposing Counsel’s Counterargument: Allege that selective disclosure creates an unfair “sword and shield” scenario requiring production of all related materials to test the disclosed evidence.

SEC v. Vidul Prakash (N.D. Cal. 2025)

  • Key Holding: Courts may conduct in camera review to separate discoverable fact work product from protected opinion work product in attorney notes.

  • Practical Implication for Litigators: If forced to produce AI logs, request in camera review to redact opinion work product before disclosure.

  • Opposing Counsel’s Counterargument: Argue “substantial need” for fact-based prompts (e.g., “Summarize this document”) and that their substantial equivalent cannot be obtained by other means.

In re: OpenAI, Inc. Copyright Litig. (S.D.N.Y. 2025)

  • Key Holding: AI vendors can be ordered to preserve user interaction logs that would otherwise be deleted, overriding default retention policies.

  • Practical Implication for Litigators: Assume all AI interactions are discoverable and subject to legal hold obligations; maintain firm control over logs.

  • Opposing Counsel’s Counterargument: Issue preservation notices not only to the opposing party but also directly to their known AI vendors.

 

The Strategic Imperative: A Governance Framework for Law Firms

 

Given the significant risks, law firms cannot permit the ad hoc use of generative AI. A formal, firm-wide governance framework is a prerequisite for the ethical and defensible integration of these tools into legal practice. This framework must be built upon a clear understanding of the technological and contractual distinctions that separate high-risk tools from secure solutions.


The Platform Mandate: Establishing a Defensible “Contractual Safe Harbor”

 

The single most important decision a firm will make is the choice of AI platform. The distinction is a binary one between unacceptable risk and a defensible safe harbor.


The Unacceptable Risk of Consumer-Grade Tools

 

Free and standard-tier commercial AI tools (e.g., ChatGPT Plus, Gemini Standard, Claude Pro) are fundamentally incompatible with the duty of confidentiality. Their terms of service and privacy policies typically include provisions that are fatal to attorney-client privilege:


  • Use of Inputs for Model Training: Most consumer-grade tools reserve the right to use user inputs and conversations to train and improve their AI models.


  • Human Review: The terms often permit the vendor’s employees or contractors to read user conversations for quality control and moderation.


  • Broad Content Licenses: Users may be required to grant the vendor a broad, perpetual license to use, modify, and create derivative works from their content.


Using such a platform for any matter involving confidential client information constitutes a direct waiver of attorney-client privilege and is a clear violation of the duty of confidentiality under ABA Model Rule 1.6.


The Essential Safeguards of Enterprise Solutions

 

Enterprise-grade AI solutions provide a “contractual safe harbor” through legally binding agreements that are specifically designed to protect client confidentiality. Before any tool is approved for firm use, it must provide, at a minimum, the following non-negotiable safeguards:


  • Contractual Prohibition on Model Training: A clear, binding contractual term stating that the customer’s data (both inputs and outputs) will not be used to train the vendor’s AI models.


  • Data Processing Addendum (DPA): A formal DPA that governs the processing of personal and confidential data in compliance with relevant privacy laws.


  • Security Certification: Independent, third-party verification of security controls, such as SOC 2 Type 2 compliance.


  • Data Ownership: Contractual clauses that explicitly affirm the customer retains full ownership of all inputs and outputs.


  • Confidentiality and Encryption: Strong commitments to data confidentiality, including encryption of data both in transit and at rest.


This contractual framework is evolving. The next frontier will involve the use of powerful open-source models that can be deployed on a firm’s private servers or in a private cloud environment. This development will largely eliminate the third-party disclosure risk that currently dominates the privilege analysis. However, the risk will not disappear but will instead transform, shifting from vendor management to the firm’s own internal data security, access controls, and governance protocols for the private model. A forward-looking AI policy must anticipate and prepare for this shift.

 

Comparative Analysis of Consumer-Grade vs. Enterprise AI Solutions

Use of Inputs for Model Training

  • Consumer-Grade Tool (e.g., ChatGPT Plus): Yes; the vendor reserves the right by default.

  • Enterprise Solution (e.g., ChatGPT Enterprise): No; contractually prohibited.

  • Privilege & Confidentiality Implication: Critical. Enterprise solutions prevent disclosure to the vendor for training purposes, preserving confidentiality.

Human Review of Conversations

  • Consumer-Grade Tool: Yes; permitted by the terms of service.

  • Enterprise Solution: No; access is restricted.

  • Privilege & Confidentiality Implication: Critical. Enterprise solutions prevent human review, maintaining the reasonable expectation of privacy.

Data Ownership

  • Consumer-Grade Tool: Ambiguous; the user grants the vendor a broad license.

  • Enterprise Solution: The customer retains ownership of inputs and outputs.

  • Privilege & Confidentiality Implication: Enterprise solutions ensure the firm and its client maintain control over their intellectual property and confidential data.

Security & Compliance

  • Consumer-Grade Tool: Basic security features.

  • Enterprise Solution: Enterprise-grade security, SOC 2 Type 2 compliance, formal DPAs.

  • Privilege & Confidentiality Implication: Enterprise solutions provide verifiable, auditable security measures necessary to protect sensitive client data.

Overall Risk

  • Consumer-Grade Tool: High. Use with client data constitutes waiver of privilege and an ethical breach.

  • Enterprise Solution: Low (when properly governed). Provides a “contractual safe harbor” that enables responsible use.

  • Privilege & Confidentiality Implication: The platform choice is the single most important factor in mitigating risk.

 

The AI Use Policy: A Firm-Wide Directive on Permissible Use

 

Every firm must implement and enforce a formal, written AI Use Policy. This policy should, at a minimum:


  • Explicitly prohibit the use of any unapproved, consumer-grade AI tool for any work related to firm or client matters.


  • Maintain an approved “whitelist” of vetted enterprise solutions that meet the contractual safe harbor requirements.


  • Define clear use cases, distinguishing between low-risk activities (e.g., brainstorming legal theories with anonymized facts) and high-risk activities requiring greater scrutiny (e.g., uploading and analyzing client documents).


Vendor Due Diligence: A Checklist for a Defensible Vetting Process


Before adding any AI tool to the firm’s whitelist, a rigorous due diligence process must be completed and documented. This process must include:


  • Legal and IT Review: A thorough review of all terms of service, privacy policies, and security documentation by the firm’s legal counsel and IT security team.


  • Binding Enterprise Agreement: Execution of a master services agreement or enterprise-level contract that supersedes any standard click-through terms.


  • Security Verification: Independent verification of the vendor’s security credentials, including a review of their SOC 2 report and data encryption protocols.


Training and Supervision: Fulfilling Non-Delegable Ethical Duties


Written policies are insufficient without robust, mandatory training for all legal professionals. Training programs are essential to fulfilling the ethical duties of competence and supervision and must cover:


  • The critical distinction between consumer and enterprise platforms.


  • The fundamentals of attorney-client privilege and work product protection as they apply to AI.


  • The art of strategic prompting to maximize opinion work product protection.


  • The attorney’s non-delegable responsibility to independently verify the accuracy and appropriateness of all AI-generated outputs before use.


In the Trenches: Practical Protocols for the AI-Assisted Litigator


Beyond firm-wide governance, individual litigators must adopt specific practices to protect privileged communications and work product in their daily workflows.


The Art of the Strategic Prompt: Engineering for Opinion Work Product Protection


As established in Tremblay, the characterization of an AI interaction as fact or opinion work product hinges on how the prompt is framed. To maximize protection, litigators must transform their AI interactions from simple commands into strategic inquiries that are infused with their mental impressions. This involves the following:


  • Embedding Legal Theories: Frame requests around specific legal doctrines, theories of liability, or elements of a claim. For example, instead of prompting, “Summarize this deposition,” a strategic prompt would be, “[a]ssuming a theory of vicarious liability, analyze this deposition transcript for admissions of supervisory knowledge or willful blindness by the deponent.”


  • Using Hypotheticals: When exploring legal concepts or drafting arguments, use anonymized facts or hypotheticals rather than uploading raw confidential data, even to a secure platform. This data-light approach minimizes risk.


Client Communications and Informed Consent: Managing Expectations and Fulfilling Ethical Obligations


Transparent communication with clients regarding the use of AI is an ethical imperative. Best practices include:


  • Updating Engagement Letters: Engagement letters should be revised to include a disclosure that the firm may use approved, secure AI tools to enhance efficiency and effectiveness, while detailing the safeguards in place to protect client data.


  • Obtaining Informed Consent: Before using an AI tool in any manner that involves the processing of a client’s confidential information—even on a vetted enterprise platform—the lawyer must consult with the client, explain the risks and benefits in understandable terms, and obtain the client’s informed consent.


Managing the Discovery of AI Materials: From Pleadings and Privilege Logs to Protective Orders


Litigators must anticipate and prepare for discovery requests targeting their AI-assisted work. Tactical considerations include:


  • Avoiding “Quote-Stuffing”: Do not copy and paste large, unedited blocks of AI-generated text into pleadings or briefs. This practice risks a “sword and shield” or “at issue” waiver argument that could open the door to discovery of the underlying research. Instead, synthesize and incorporate the information in your own words.


  • Preparing Privilege Logs: Be prepared to defend the privileged nature of your AI research. On a privilege log, distinguish clearly between any disclosed final outputs and the protected iterative research process, describing the latter as “Attorney Opinion Work Product – Counsel’s AI-assisted legal research and analysis, including prompts reflecting mental impressions and legal theories.”


  • Utilizing Protective Orders: Actively seek a stipulation and order under Federal Rule of Evidence 502(d) at the outset of litigation. A Rule 502(d) order can provide broad protection against the waiver of privilege or work product in the event of an inadvertent disclosure of AI materials.

 

The Ethical Compass: Navigating ABA and State Bar Guidance


The use of generative AI does not occur in a regulatory vacuum. A lawyer’s existing ethical duties apply with full force to this new technology.


Applying the Canons: Competence, Confidentiality, and Supervision in the AI Context


  • Competence (ABA Model Rule 1.1): The duty to maintain competence requires lawyers to “keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” In the current environment, this creates a non-delegable duty to understand the fundamental mechanics of AI tools, including their privacy policies and terms of use, before employing them in practice. But the obverse is true as well: given AI’s capacity for time savings, assistance with the formulation of strategy and argument, and enhanced issue-spotting and document management, practitioners must also weigh the impact on a client’s case of not using an AI assistant.


  • Confidentiality (ABA Model Rule 1.6): This is the paramount ethical consideration. ABA Formal Opinion 512, along with a growing consensus among state bars including California, Florida, New York, and Texas, makes clear that inputting confidential client information into an insecure, “open,” or self-learning AI system without client consent is a violation of the duty of confidentiality.


  • Supervision (ABA Model Rules 5.1 & 5.3): A lawyer must provide adequate supervision for the use of AI, treating it as a “nonlawyer assistant.” This means the lawyer is ultimately and personally responsible for the accuracy, validity, and ethical compliance of any work product generated or assisted by an AI tool. Relying blindly on an AI’s output without independent verification is a breach of this duty.


The duty of competence is rapidly becoming a double-edged sword. Currently, it prohibits the incompetent use of risky AI tools, creating potential liability for waiving privilege. However, as secure, enterprise-grade AI becomes the standard of care for certain tasks, such as analyzing massive document productions, the duty may evolve. A failure to use such a tool, resulting in a missed key document and an adverse outcome, could foreseeably be argued as a breach of the duty to competently leverage available technology for the client’s benefit. Competence is not merely about avoiding risk, but also about effectively deploying tools to achieve client objectives.


Billing and Fees: The Prohibition on Billing for Saved Time and the Recognition of AI Proficiency as a Compensable Skill


The ethics of billing for AI-assisted work require nuance. A lawyer may not bill a client by the hour for time that was saved by using AI; efficiency gains should accrue to the client’s benefit. However, this does not render the value of AI irrelevant to fees. Under ABA Model Rule 1.5, which governs the reasonableness of fees, one factor is the “skill requisite to perform the legal service properly.” The sophisticated and strategic use of AI to analyze complex issues, craft compelling arguments, and achieve superior results is itself a compensable legal skill. This expertise can and should be reflected when setting flat fees or other alternative fee arrangements that focus on the value delivered to the client, rather than merely the hours billed.


Conclusion: A Synthesis of First Principles for Navigating the New Technological Frontier


The legal profession can successfully leverage the transformative power of generative AI while preserving its most fundamental protections. This requires abandoning ad hoc experimentation in favor of a disciplined, principles-based approach that treats AI not as an infallible oracle, but as a powerful third-party service provider that demands rigorous professional judgment and oversight.


The path forward is governed by a clear set of non-negotiable principles:


  • Privilege Requires Absolute Confidentiality. This mandates the exclusive use of enterprise-grade, contractually protected AI platforms that prohibit the vendor from using client data for its own purposes. The use of consumer-grade tools with confidential client information is an unacceptable risk.


  • Work Product Protection is Maximized Through Strategic Framing. The Tremblay decision provides a clear lesson: thoughtful, theory-infused prompting that reflects counsel’s mental impressions is the key to creating protected opinion work product.


  • Ethical Duties are Non-Delegable. The core duties of competence, confidentiality, and supervision are paramount and apply with full force to all AI-assisted work. Lawyers are ultimately responsible for their work product, regardless of the tools used to create it.


By implementing robust governance frameworks, investing in training, and adhering to these strategic protocols, litigators can ensure that technology serves as a powerful asset, not a catastrophic liability, in their unwavering duty to protect client interests and uphold the integrity of the legal process.

 
 
 


© 2025 by Fazio | Micheletti LLP
