US attorneys are operating in the most active AI sanctions environment in the world. Federal courts have sanctioned attorneys who submitted AI-generated filings containing fabricated case citations, imposing financial penalties and in some cases referring the attorneys to bar disciplinary committees. Over thirty federal district courts now have standing orders requiring disclosure of AI use in filed documents. Multiple state bars — including New York, California, Florida, and Texas — have issued formal ethics opinions applying existing professional responsibility rules to AI use. There is no federal AI Act in the United States, but there is a dense, actively enforced patchwork of obligations that every US law firm must understand. This guide maps the full compliance landscape for US attorneys and law firms in 2026.
This guide applies to you if:
- You use any AI tool for legal research, document drafting, discovery review, or case analysis
- You use AI in client-facing communications, intake, or advice
- You file documents in any federal or state court — many now require AI disclosure
- You serve EU-based clients or counterparties — the EU AI Act applies to you regardless of where your firm is based
- You have California clients or employees — CCPA/CPRA applies to personal data processed by AI tools
ABA Model Rules: the foundation that already applies
The American Bar Association has not created a separate AI ethics rulebook. It has, however, been explicit that existing Model Rules govern AI use — particularly Rule 1.1 (Competence) and Rule 1.6 (Confidentiality).
Rule 1.1 — Competence
ABA Model Rule 1.1 requires that attorneys provide competent representation, defined as the legal knowledge, skill, thoroughness, and preparation reasonably necessary for the representation. Comment 8 to Rule 1.1 specifically addresses the duty to keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.
In practice, this means attorneys must understand how the AI tools they use work — specifically including the risk of hallucination, the training data limitations that may affect accuracy in specific legal domains, and the verification steps required before output is used in client matters or court filings. Using AI without this understanding is not competent representation.
Rule 1.6 — Confidentiality
Rule 1.6 prohibits attorneys from revealing information relating to the representation of a client without informed consent. Uploading client documents, case details, or strategy information into AI platforms that do not provide adequate confidentiality protections may constitute a violation of this rule — depending on the platform's data handling practices.
The ABA's Formal Opinion 512 (2024) confirmed that attorneys must review the terms of service and data handling practices of any AI tool used with client information. A tool that uses uploaded data to train its model, or that shares data with third parties, may not be compatible with Rule 1.6 obligations without explicit client informed consent.
State bar ethics opinions: what New York, California, and others require
Multiple state bars have issued formal ethics opinions on AI use in legal practice, and the pattern is consistent:
- New York State Bar Association (2024): Attorneys must understand the limitations of AI tools and must supervise AI-generated work product. Disclosure to clients of AI use is recommended where material. Confidentiality requires review of each AI tool's data handling practices before use with client information.
- State Bar of California (2023): Competence requires understanding of AI tool capabilities and limitations. Attorneys must review AI-generated work product for accuracy. Use of AI that processes client confidential information requires a thorough analysis of the tool's privacy policies and terms of service.
- Florida Bar, Texas Bar, DC Bar: All have applied existing competence and confidentiality rules to AI use, with consistent emphasis on verification of outputs, supervision of AI-assisted work, and client confidentiality review.
No state bar has concluded that AI use is inherently impermissible. All have concluded that the existing professional responsibility framework already governs it — and that practitioners who use AI without adequate oversight are exposing themselves to disciplinary action.
Building an AI compliance framework for your US firm?
Our AI compliance packages give professional services firms a documented AI policy, data processing review, and supervision framework. Fixed price, delivered in five to seven working days.
See the US Compliance Packages →
Court sanctions: the enforcement pattern is established
Court sanctions for AI-related failures in US legal filings are no longer rare events — they have become a recognised enforcement pattern. The most prominent cases share the same structure: an attorney submitted a brief or motion citing cases that did not exist, generated by an AI tool and not verified against primary legal databases. Courts imposed monetary sanctions, required mandatory CLE on AI use, and in some cases referred matters to state bar disciplinary authorities.
The judicial response has been unambiguous: courts have no sympathy for the argument that the AI generated the error. The professional obligation to verify citations against primary sources before filing is fundamental, predates AI by decades, and is not modified by the method used to draft the document. The attorney signs the filing. The attorney is responsible.
Q1 2026 alone saw over $145,000 in AI-related court sanctions across jurisdictions. The pattern continues to grow as AI tool adoption accelerates without corresponding growth in verification processes.
District court standing orders: AI disclosure is becoming standard
Over thirty federal district courts have issued standing orders or local rules requiring attorneys to certify whether AI was used in preparing filings, and in some cases to describe how AI-generated content was verified. This is not optional disclosure — it is a court order. Failure to comply is a violation of local rules carrying sanctions independent of any substantive AI-related error.
Significant courts with AI disclosure requirements include the Northern District of Texas, the Eastern District of Texas, the District of Columbia, and multiple circuits with local rules amendments underway. Firms that practice in multiple jurisdictions need a systematic approach to tracking which courts require what — not a case-by-case scramble before filing.
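A systematic tracker can be as simple as a structured lookup consulted before every filing. The sketch below is illustrative only: the court entries and certification wording are placeholders, not the actual text of any standing order, which must always be pulled from the court's own website before filing. The safe default for an unlisted court is to assume disclosure may be required and check manually.

```python
from dataclasses import dataclass

@dataclass
class CourtAIRule:
    court: str                  # court name
    disclosure_required: bool   # does a standing order require AI disclosure?
    note: str                   # summary of what the order asks, per firm review

# Illustrative entries only -- verify against each court's current
# standing orders and local rules before relying on this data.
STANDING_ORDERS = {
    "ndtx": CourtAIRule("N.D. Tex.", True, "Certify AI use; citations verified"),
    "edtx": CourtAIRule("E.D. Tex.", True, "Disclose generative AI use"),
    "ddc":  CourtAIRule("D.D.C.", True, "Disclose AI-assisted drafting"),
}

def disclosure_needed(court_key: str) -> bool:
    """Return True if the court has an AI disclosure requirement on record.
    Unknown courts default to True so a human checks before filing."""
    rule = STANDING_ORDERS.get(court_key)
    return rule.disclosure_required if rule else True
```

Defaulting unknown courts to `True` turns a missing record into a prompt for human review rather than a silent compliance gap.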
Data privacy: CCPA, Illinois BIPA, and client data
US law firms using AI tools that process client personal data face state-level data privacy obligations that vary by client location:
- California (CCPA/CPRA): Clients who are California residents have rights over their personal data, including data processed by AI tools used in their legal matters. Service provider agreements with AI vendors must include the required CCPA contractual provisions. Penalties up to $7,500 per intentional violation.
- Illinois (BIPA): The Biometric Information Privacy Act applies where AI tools process biometric identifiers — voice recognition, facial recognition — in client intake or identification processes. Per-violation penalties and a private right of action.
- Illinois HB 3773 (in force January 2026): Applies to AI use in hiring, promotion, and performance review at the firm itself. Written notice to affected candidates and employees is required — the obligation follows the Illinois workforce, not the location of the firm's headquarters.
EU AI Act: US firms with EU clients are in scope
The EU AI Act applies to any organisation whose AI systems are used in the EU — regardless of where that organisation is based. US law firms that represent EU-based clients, EU counterparties, or whose AI tools assist with EU-jurisdictional matters may be in scope for high-risk classification if those tools fall within the Annex III high-risk categories, which include AI systems intended to assist in the administration of justice.
Full high-risk enforcement begins 2 August 2026. US firms with meaningful EU client exposure should not assume that operating from a US base provides a safe harbour. The Act's extraterritorial reach is the same as GDPR — it follows the location of the data subject and the market, not the location of the service provider.
What a compliant US law firm AI framework requires
- AI tool inventory and data handling review: Every AI tool documented, with confirmed review of data handling practices, model training terms, and sub-processor disclosure — against Rule 1.6 confidentiality obligations.
- Verification policy: A written policy specifying how AI-generated research, citations, and drafting output are verified before use in client matters or court filings. This must name responsible supervisors, not assume verification happens informally.
- Court AI disclosure tracker: A practice-wide system for tracking which courts have AI disclosure requirements, to ensure compliance across all active matters.
- State data privacy review: Assessment of CCPA, Illinois, and other applicable state requirements for client data processed by AI vendors — with appropriate contractual provisions in place.
- EU AI Act assessment: For firms with EU client exposure, a determination of which AI tools may trigger high-risk classification and what documentation is required before August 2026.
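The tool inventory and data handling review above can be held as structured records rather than a free-form spreadsheet, so that Rule 1.6 and CCPA red flags surface mechanically. This is a minimal sketch under assumed field names — the record fields, tool name, and flag wording are hypothetical, and the checks mirror the review criteria described in this guide, not any bar-mandated schema.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    trains_on_uploads: bool           # does the vendor train models on firm data?
    subprocessors_disclosed: bool     # are sub-processors disclosed in the terms?
    ccpa_provisions_in_contract: bool # CCPA service-provider terms in place?
    reviewed_by: str                  # supervising attorney responsible for review

def confidentiality_concerns(tool: AIToolRecord) -> list[str]:
    """Flag Rule 1.6 / CCPA red flags for a tool before client use."""
    issues = []
    if tool.trains_on_uploads:
        issues.append("vendor trains on uploaded client data: informed consent needed")
    if not tool.subprocessors_disclosed:
        issues.append("sub-processors not disclosed in vendor terms")
    if not tool.ccpa_provisions_in_contract:
        issues.append("CCPA service-provider contractual provisions missing")
    return issues
```

Any non-empty result would route the tool back to the supervising attorney named in the record before it touches client information.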
US Law Firm AI Compliance
AI policy, data handling review, verification framework, state bar compliance mapping. Fixed price, delivered in five to seven working days. Built for US law firms and attorneys.
See the US Compliance Packages →