Someone on your team found an AI tool that summarizes client meeting notes in seconds. They started using it last Tuesday. Client names, portfolio details, and tax information are already in the system. You had no idea.
This scenario is playing out inside small professional services firms every week. The tools are impressive. The vetting process often does not exist yet.
The Problem: AI Adoption Is Outpacing Your Risk Controls
AI productivity tools are arriving inside professional services firms faster than any compliance framework anticipated. An associate at a CPA firm uses an AI writing assistant to draft client memos. A paralegal runs a contract through a free AI summarizer. A financial planner pastes meeting notes into a chatbot to generate follow-up emails.
None of these people mean any harm. They are trying to do their jobs better. But each of these actions potentially exposes non-public personal information (NPI), protected health information, or attorney-client privileged communications to a third-party system with unknown data handling practices.
The core problem is not the AI tool itself. It is the absence of a vetting process. Most small firms do not have a formal software approval workflow, and most AI vendors write their terms of service in ways that obscure what actually happens to your input data.
“We don’t store your data” sounds reassuring. But storage is not the only risk. A vendor can process your data, route it through subprocessors, and use it to improve their model, all without ever storing it in the traditional sense. If you are reading the terms literally, you might miss that entirely.
The gap between “we don’t store it” and “we don’t use it” is where regulatory exposure lives for your firm.
Why This Matters for Financial, CPA, and Legal Firms
Your firm operates under a specific set of legal obligations that make AI vendor vetting non-optional.
For financial advisory and investment management firms, the Securities and Exchange Commission (SEC) has been explicit that firms must apply existing recordkeeping, privacy, and supervisory obligations to AI-assisted workflows. Regulation S-P requires safeguarding client NPI and notifying clients of privacy practices. If an AI vendor’s system constitutes a “disclosure” of NPI to a third party, your Notice of Privacy Practices and your vendor agreements must address it.
For CPA firms, the American Institute of CPAs (AICPA) Code of Professional Conduct Section 1.700 governs client confidentiality. Using an AI tool that ingests client financial data without a proper engagement agreement with that vendor, and without client disclosure, can put your license at risk. The IRS also requires safeguards for taxpayer data under Revenue Procedure 2007-40 and the IRS Publication 4557 safeguarding guidelines.
For law firms, attorney-client privilege is not a technicality. Bar rules in California, New York, and most jurisdictions require competent supervision of technology used in client matters. The American Bar Association (ABA) Formal Opinion 498 affirms that attorneys must act competently when using technology, which includes understanding how tools handle client communications.
In all three contexts, a data breach or unauthorized disclosure caused by an AI vendor’s practices could mean regulatory fines, malpractice exposure, and the kind of client trust damage that small firms do not recover from easily.
How to Evaluate an AI Tool Before You Approve It
This is a practical checklist. You do not need to be a technologist to work through it.
1. Read the Vendor’s Terms of Service for Training Data Clauses
Search the Terms of Service and Privacy Policy for the following phrases: “improve our services,” “train our models,” “aggregate data,” and “usage data.” Vendors often include language that permits them to use your inputs to improve their AI models. This is the single most important clause to find and understand.
What you want to see: an explicit statement that your data is not used to train or fine-tune any AI model, ever. Many enterprise-tier AI tools offer this as an opt-out or as a default in their paid plans. If it is not stated clearly, assume the opposite.
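The phrase search in step 1 is easy to automate. Below is a minimal sketch that flags red-flag phrases in a terms-of-service text; the phrase list mirrors the ones above and is illustrative, not exhaustive. It is a first-pass filter, not a substitute for reading the surrounding clauses yourself.

```python
# Sketch: flag training-data language in a vendor's Terms of Service.
# The phrase list is illustrative, not exhaustive; a hit means "read this
# clause carefully," not "the vendor is unsafe."

RED_FLAGS = [
    "improve our services",
    "train our models",
    "aggregate data",
    "usage data",
]

def find_red_flags(tos_text: str) -> list[tuple[str, int]]:
    """Return (phrase, character offset) for each red-flag phrase found."""
    text = tos_text.lower()
    hits = []
    for phrase in RED_FLAGS:
        start = 0
        while (idx := text.find(phrase, start)) != -1:
            hits.append((phrase, idx))
            start = idx + len(phrase)
    return hits

if __name__ == "__main__":
    sample = "We may use aggregate data and usage data to improve our services."
    for phrase, offset in find_red_flags(sample):
        print(f"Found '{phrase}' at offset {offset}")
```

Paste the full Terms of Service and Privacy Policy text in, and review every clause the script surfaces in context.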
2. Ask About Data Residency
Where is your data processed and stored? For firms with California clients, the California Consumer Privacy Act (CCPA) imposes obligations on how client personal information is shared with service providers, wherever those providers operate. Ask vendors: “In which countries are your servers located?” and “Are any subprocessors located outside the United States?” A vendor who cannot answer these questions directly should not be trusted with client data.
3. Request a List of Subprocessors
A vendor can have strong internal data practices and still route your data through a subprocessor with weaker ones. Ask for the vendor’s current list of subprocessors and their data handling obligations. Legitimate vendors publish this list and update it regularly. GDPR-compliant vendors are required to do this by law, which is a useful proxy even if your clients are not in the EU.
4. Confirm SOC 2 Type II Compliance
A System and Organization Controls 2 (SOC 2) Type II audit is the minimum credibility bar for any AI tool that will handle sensitive client information. Unlike Type I (which evaluates controls at a single point in time), Type II evaluates whether those controls actually operated effectively over a period, typically six to twelve months.
Request the vendor’s SOC 2 Type II report and look specifically at the Trust Services Criteria for Security and Confidentiality. You do not need to read every page. Focus on the auditor’s opinion and any exceptions noted. A vendor that refuses to share this report or does not have one yet is not ready to handle your client data.
5. Confirm Breach Notification Obligations
Ask the vendor: “What are your contractual obligations to notify us in the event of a security incident involving our data, and within what timeframe?” California’s data breach notification law requires notifying affected individuals in the most expedient time possible and without unreasonable delay, and your firm cannot meet that obligation if a vendor sits on the news. If the vendor’s answer is vague, that is a red flag. Get the breach notification terms in writing as part of your vendor agreement.
6. Build a Simple Internal Approval Workflow
Create a one-page internal process that requires any new AI tool to be reviewed before a team member uses it with client data. The workflow does not need to be complex:
- Employee identifies a new tool and submits a brief description to the firm’s designated reviewer (this could be a managing partner, your IT provider, or a compliance officer)
- Reviewer completes the checklist above
- Tool is approved with documented conditions, or rejected with a brief explanation
- Approved tools are added to a running list accessible to all staff
This process takes less than 30 minutes per tool evaluation. It is far less costly than the alternative.
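To keep the approved-tools list auditable, each review can be captured as a structured record that maps directly to the checklist steps. A minimal sketch, with field names that are illustrative rather than any prescribed standard:

```python
# Sketch: one structured record per AI tool review, so the approved-tools
# list on the shared drive stays consistent and auditable.
# Field names are illustrative, not a prescribed standard.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolReview:
    tool_name: str
    submitted_by: str
    reviewer: str
    review_date: date
    training_opt_out: bool        # step 1: inputs never used to train models
    residency_confirmed: bool     # step 2: server locations documented
    subprocessors_reviewed: bool  # step 3: current subprocessor list on file
    soc2_type2_on_file: bool      # step 4: SOC 2 Type II report reviewed
    breach_terms_in_writing: bool # step 5: notification timeframe in contract
    approved: bool = False
    conditions: list[str] = field(default_factory=list)

    def checklist_complete(self) -> bool:
        """True only when every checklist item has been verified."""
        return all([
            self.training_opt_out,
            self.residency_confirmed,
            self.subprocessors_reviewed,
            self.soc2_type2_on_file,
            self.breach_terms_in_writing,
        ])
```

A spreadsheet with the same columns works just as well; the point is that every approval leaves a dated, attributable record.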
What to Look for in an IT Partner
If you are bringing in an IT provider or a virtual Chief Information Security Officer (vCISO) to help with this process, ask them directly:
- Do you have experience reviewing AI vendor agreements for firms subject to Regulation S-P, AICPA confidentiality rules, or state bar ethics requirements?
- Can you help us build an AI tool approval workflow that fits a firm our size?
- Do you maintain a pre-vetted list of AI tools appropriate for professional services firms?
- How do you stay current on AI vendor policy changes, since these terms change frequently?
A provider who gives you vague answers or defaults to generic cybersecurity frameworks without acknowledging your industry’s specific regulatory context is probably not the right fit for your firm.
The Bottom Line
AI tools can genuinely help your firm work more efficiently. But using them carelessly with client data creates regulatory, legal, and reputational risk your firm cannot afford. A structured vendor evaluation process, built around a short list of vendor questions, a SOC 2 Type II requirement, and a simple internal approval workflow, is how you stay ahead of the problem before it becomes a crisis.
Frequently Asked Questions
Is it okay to use ChatGPT or other consumer AI tools with client financial data?
Free and consumer-tier versions of most AI tools, including ChatGPT’s free plan, explicitly reserve the right to use your inputs to improve their models. This means client data you paste into these tools may be used for AI training. For professional services firms with confidentiality and privacy obligations, this is generally not acceptable without a signed data processing agreement and an explicit opt-out from model training. OpenAI and others do offer enterprise plans with stronger protections, but those must be configured and documented properly.
What is a SOC 2 Type II report and how do I get one from a vendor?
A SOC 2 Type II report is an independent audit confirming that a technology vendor’s data security controls were effectively operating over a defined period, usually six to twelve months. To get one, simply email the vendor’s sales or security team and ask: “Can you provide your most recent SOC 2 Type II report under NDA?” Reputable vendors fulfill this request routinely. The report itself will include the auditor’s opinion and a description of the controls tested.
What does ‘data residency’ mean and why should my firm care about it?
Data residency refers to the physical or legal location where your data is stored and processed. It matters because data stored outside the United States may be subject to foreign laws that require disclosure to foreign governments, and it can complicate your firm’s obligations under U.S. privacy statutes like the CCPA. Some regulatory frameworks, including certain SEC requirements, expect firms to know where their client data sits and to have contractual controls over it.
How often should my firm re-evaluate AI tools it has already approved?
At minimum, review approved tools annually, or whenever a vendor updates its Terms of Service or Privacy Policy. Many AI vendors push policy changes with minimal notice. Set a calendar reminder to check each approved vendor’s policy page quarterly. A good IT provider can help automate this monitoring so changes do not slip through undetected.
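If your IT provider does not already monitor vendor policy pages, the core of that monitoring is simple to sketch: hash each policy page’s text and compare it to the hash saved at the last review. The snippet below shows only the comparison logic; fetching the page and persisting hashes between runs are omitted, and the function names are illustrative.

```python
# Sketch: detect vendor policy changes by comparing a content hash against
# the hash recorded at the last review. Fetching the page text and storing
# hashes between runs are left out; function names are illustrative.
import hashlib

def policy_fingerprint(page_text: str) -> str:
    """Normalize whitespace before hashing so trivial reflows don't alert."""
    normalized = " ".join(page_text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def policy_changed(page_text: str, last_known_hash: str) -> bool:
    """True if the policy text differs from the version last reviewed."""
    return policy_fingerprint(page_text) != last_known_hash
```

A changed hash only tells you that something moved; a human still needs to read the diff and decide whether the change affects your approval.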
One82 provides managed IT, cybersecurity, compliance, and AI integration services exclusively for professional services firms in the San Francisco Bay Area. Schedule a 15-Minute Discovery Call to discuss your firm’s AI tool and client data security posture.