An associate at a small litigation firm pastes a client’s deposition summary into ChatGPT to get help drafting a motion. It takes 30 seconds. Nobody flags it. But depending on the tool’s default settings and the firm’s subscription tier, that privileged communication may have just been transmitted to a third-party service that retains it for model training.

The Problem

Most attorneys adopting AI tools focus on the obvious upside: faster research, better first drafts, more capacity for billable work. What most are not doing is reading the data retention and training policies buried in the terms of service before using these tools on real client matters.

This isn’t a hypothetical gap. OpenAI, Google, and Microsoft each publish data handling policies for their consumer and enterprise products, and those policies are meaningfully different from each other. The default settings on consumer-tier tools like ChatGPT (free and Plus tiers) allow OpenAI to use conversation data to improve its models unless a user manually opts out. Gemini for personal Google accounts has similar defaults. Microsoft Copilot in its consumer form operates under Microsoft’s standard consumer privacy terms.

The problem isn’t that these companies are malicious. The problem is that attorneys are using tools built for general consumers without understanding that those tools were never designed with attorney-client privilege in mind.

A small firm may have five attorneys, all of whom have independently signed up for AI tools using personal or firm email addresses, each with different account configurations, none of which have been reviewed by anyone responsible for the firm’s confidentiality obligations. That is the reality in hundreds of small law firms today.

The practical gap here is straightforward: the attorneys know the rule (protect privileged communications), but they don’t know that the tool they just opened in their browser is configured to share what they type.

Why This Matters for Law Firms

Confidentiality isn’t just an ethical preference for attorneys. It is a core professional obligation under Model Rule of Professional Conduct 1.6, which bars revealing information relating to the representation of a client absent the client’s informed consent, implied authorization, or a narrow exception. Every state bar has adopted a version of this rule.

What makes AI tools uniquely risky is not just the possibility of a data breach. It is the possibility of inadvertent waiver of attorney-client privilege. If privileged communications are transmitted to a third-party AI service under terms that allow that service to access, review, or use that content, an opposing party could argue that the privilege was waived because the communication was not kept confidential. Courts have not uniformly resolved this question yet, but the argument exists, and the risk is real.

Bar associations are beginning to weigh in. The State Bar of California published practical guidance in late 2023 addressing competence and confidentiality obligations when using generative AI. The Florida Bar (Ethics Opinion 24-1), the New York State Bar Association, and the American Bar Association (Formal Opinion 512) have since published guidance of their own. But these opinions are not uniform, and none of them establishes a safe harbor. They consistently place the obligation on the attorney to understand the tools they use.

Comment 8 to ABA Model Rule 1.1 on competence already requires attorneys to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology. Using an AI tool without understanding its data handling policies is, by that standard, arguably a competence failure.

For small firms without general counsel or a dedicated IT team, this creates a real compliance gap with meaningful professional risk.

How to Address This

The good news is that addressing this risk does not require a 40-page policy document. For most small law firms, three core rules can cover the majority of scenarios.

Rule 1: No client-identifiable information in consumer-tier AI tools.

Draw a hard line between tools your firm has vetted and provisioned at the enterprise level and tools attorneys use through personal or free accounts. Consumer versions of ChatGPT, Gemini, and Copilot are not appropriate for content that includes client names, matter details, case facts, or any information that could identify a client or a representation. Period. Asking a consumer tool to outline the general elements of a motion to compel is fine; pasting in a client’s deposition summary, as in the opening example, is not.

Rule 2: Enterprise-tier tools require a data processing agreement.

Some AI tools offer contractual data isolation at the enterprise tier. Microsoft 365 Copilot, for example, operates under Microsoft’s commercial data protection terms, which state that customer data is not used to train foundation models. OpenAI’s ChatGPT Enterprise and its API (which does not train on business data by default, with zero data retention available for eligible endpoints) offer similar protections. Google Workspace customers using Gemini for Workspace are covered under Google’s enterprise terms. Before using any AI tool with client matter content, confirm in writing that the vendor does not retain or train on your data.

Rule 3: Document your AI tool inventory and review it quarterly.

Maintain a simple list of approved AI tools, recording for each one the subscription tier, the data handling terms that apply to that tier, and the practice areas or task types it may be used for; a sample appears below. Review the list quarterly. AI vendors update their terms of service more frequently than most software companies, and a policy that was accurate six months ago may not be accurate today.
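A minimal inventory, with illustrative entries (your firm’s tools, tiers, and approvals will differ), might look like this:

  • ChatGPT Enterprise (enterprise tier): no training on customer data under OpenAI’s business terms; approved for drafting and research, including matter content
  • Gemini for Workspace (enterprise tier): covered by Google Workspace commercial terms; approved for internal document summarization
  • ChatGPT Plus, personal account (consumer tier): may train on conversations by default; not approved for any client-identifiable content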

Beyond these three rules, train your team. A one-hour internal session walking through why these rules exist, with a real example of how data flows through a consumer AI tool, will do more than a written policy that nobody reads.

What to Look for in an IT Partner

If your firm is working with a managed IT provider, or considering doing so, ask these specific questions before relying on their guidance for AI tool governance:

  • Can you show me the data processing terms for each AI tool you’re recommending or provisioning for our firm?
  • Have you worked with law firms specifically, and do you understand the attorney-client privilege implications of AI data handling?
  • How do you handle AI acceptable-use policy creation, and do you update it when vendor terms change?
  • What is your process for evaluating a new AI tool before we deploy it for client-facing work?

A provider who works exclusively or primarily with professional services firms will approach these questions differently than a generalist IT shop. The distinction between a consumer and enterprise AI subscription is not a technical nuance. It is a compliance decision, and your IT partner should be equipped to navigate it with you.

The Bottom Line

Attorneys are adopting AI tools quickly, and that is not a bad thing. But the default settings on consumer AI products were not designed with privilege obligations in mind. A small, practical AI acceptable-use policy, combined with enterprise-tier agreements that include data isolation commitments, gives your firm the benefits of these tools without the confidentiality exposure. You don’t have to wait for a national bar standard to act responsibly today.

Frequently Asked Questions

Does using ChatGPT with client information automatically waive attorney-client privilege?

Not automatically, but it creates a credible waiver argument that opposing counsel could raise. Privilege can be waived when confidential information is voluntarily disclosed to a third party without adequate confidentiality protections. Whether a court would find waiver depends on the jurisdiction, the specific terms of the AI service used, and the facts of the disclosure. The safest approach is to avoid the argument entirely by using only enterprise-tier tools with contractual data isolation.

What is the difference between ChatGPT Plus and ChatGPT Enterprise for law firm use?

ChatGPT Plus is a consumer subscription. By default, OpenAI may use conversation data to improve its models unless you opt out through the data controls in settings. ChatGPT Enterprise is sold under separate commercial terms that include a contractual commitment that OpenAI will not use your data to train its models and that your data is encrypted and isolated. For law firm use involving client matter content, only the Enterprise tier or the API with zero data retention is appropriate.

Have any bar associations issued guidance on attorney use of generative AI?

Yes, several. The State Bar of California’s practical guidance, developed by its Committee on Professional Responsibility and Conduct, addresses both competence and confidentiality obligations when using generative AI. Florida, New York, the ABA, and other bodies have issued similar guidance. These opinions consistently emphasize that attorneys must understand the tools they use and ensure that use complies with confidentiality rules, but they do not provide a uniform national standard. Attorneys should consult their state bar’s current guidance.

What tasks can a law firm use AI for without risking confidentiality exposure?

AI tools are appropriate for tasks that do not require inputting client-identifiable or privileged information. Examples include drafting template language, researching general legal standards, generating document outlines using hypothetical fact patterns, proofreading for grammar, and learning how a new area of law works. Any task that requires pasting in real client communications, case facts, or identifying information should be handled only through vetted, enterprise-tier tools with appropriate contractual protections.


One82 provides managed IT, cybersecurity, compliance, and AI integration services exclusively for professional services firms in the San Francisco Bay Area. Schedule a 15-Minute Discovery Call to discuss your firm’s AI acceptable-use posture.