"There's no federal AI law" is the most dangerous sentence in US business right now. Because while Congress debates a unified federal framework, four major state AI compliance regimes are already live and enforceable — and in 2025, US state legislators introduced over 1,100 AI-related bills. 145 were enacted. Most businesses don't know any of them apply.
This guide breaks down what's actually in force, which businesses are in scope, what each law requires in plain English, and what a practical compliance position looks like in 2026.
The landscape: why there's no one answer
Unlike the EU's unified AI Act, US AI regulation is a patchwork of state laws — each with different scope, different requirements, and different enforcement bodies. That patchwork creates a compliance challenge that many businesses are unprepared for: you may be subject to obligations in multiple states simultaneously, even if your business is registered in just one.
The key principle: US state AI laws (like state privacy laws before them) apply based on where your customers and employees are, not where your business is incorporated. A New York company with California customers is subject to California law. A UK company with Illinois-based employees using AI in HR decisions is subject to Illinois law. Your state of registration is largely irrelevant.
The four live regimes — what each one actually requires
Illinois HB 3773 — Employment AI notification
This is the law most businesses are unknowingly non-compliant with. Illinois HB 3773 requires employers with 15 or more employees to provide written notice before using an AI system in any employment decision — including hiring, promotion, demotion, performance review, and termination.
The notice must:
- State that an AI system was used or may be used in the decision
- Describe what characteristics the AI evaluates
- Be provided before or at the time of the employment action
Crucially, this law applies based on the employee's location, not the employer's. If you have a single employee based in Illinois, and you use AI in any aspect of their performance management or your hiring process for that role, HB 3773 applies to you. This law also intersects with the Illinois Artificial Intelligence Video Interview Act, which has required candidate consent and disclosure for AI-analysed video interviews since 2020.
Violations can be filed with the Illinois Department of Human Rights, which can investigate complaints and impose civil remedies.
Colorado AI Act (SB 205) — Consequential decisions
The Colorado AI Act is the most comprehensive US state AI law currently in force. It applies to developers and deployers of "high-risk AI systems" — defined as systems that make or substantially contribute to consequential decisions about individuals in the areas of:
- Employment or employment opportunities
- Education enrolment or educational opportunities
- Financial services (credit, insurance, lending)
- Healthcare services or access
- Housing
- Legal services
- Essential government services
For deployers (businesses using AI systems built by others), the Colorado Act requires:
- An annual impact assessment covering the AI system's intended purpose, its known limitations, the categories of data used to train it, and the steps taken to mitigate algorithmic discrimination
- Disclosure to any consumer whose rights are substantially affected by an AI decision, including notice that an AI was used
- Opt-out rights: consumers must be able to opt out of AI-assisted consequential decisions and request human review
- A policy for managing and correcting errors in the AI system
The Colorado Attorney General is the enforcement body. Civil penalties apply per violation per affected consumer, which can compound quickly for businesses making many AI-assisted decisions.
California — Multiple overlapping laws
California is the most complex jurisdiction because multiple AI laws are active simultaneously, alongside the CCPA and CPRA (the most comprehensive US consumer privacy framework):
Automated Decision-Making Technology (ADMT) regulations: These CPRA-based regulations give California consumers rights over automated decision-making that significantly affects them — including the right to opt out, the right to access information about how the AI works, and the right to appeal a decision made by an AI. They apply to businesses subject to the CCPA (broadly: businesses serving California consumers with annual revenue over $25m, processing the personal data of 100k+ consumers or households, or deriving 50%+ of revenue from selling or sharing personal data).
California AI Transparency Act (SB 942): This requires large AI providers to make detection tools available for AI-generated content. For deployers using these tools to produce content that California residents consume, appropriate disclosure is required.
AB 2013: Requires AI developers to publish training data documentation. While primarily a provider obligation, deployers using AI systems in regulated contexts should be able to evidence their provider's compliance.
In practice, the California obligations most commonly affecting SMBs are: ADMT opt-out rights for consumers subject to consequential AI decisions, and transparency disclosures in customer-facing AI tools.
Texas RAIGA (Responsible AI Governance Act)
Texas RAIGA prohibits intentional algorithmic discrimination — the use of AI to discriminate on the basis of race, sex, religion, national origin, disability, age, or familial status in consequential decisions. It covers employment, lending, housing, healthcare, education, and insurance.
Deployers are required to:
- Maintain an AI policy that addresses algorithmic discrimination risk
- Create and preserve audit trails for AI systems used in consequential decisions
- Conduct periodic assessments of AI systems for discriminatory patterns
- Provide consumer notice when AI is used in a decision that materially affects them
The Texas Attorney General can bring enforcement actions. Remedies include injunctive relief, civil penalties, and restitution.
The broader picture: privacy laws and AI
Beyond the four main state AI laws, 20 states have active consumer privacy laws that intersect with AI use. Most of these laws include provisions on automated decision-making and profiling — meaning businesses subject to CCPA (California), Virginia's VCDPA, the Colorado Privacy Act, Connecticut's CTDPA, and others may already have AI-related obligations under privacy law, even if no standalone AI law has passed in their state.
The common thread across all these laws: if your AI system makes decisions about consumers that have meaningful effects on their lives — creditworthiness, insurance pricing, hiring, content they're shown — and the AI processes their personal data, you almost certainly have some form of transparency, opt-out, or oversight obligation.
What if I'm in a state with no AI law?
This is the question most businesses ask — and it reveals the most common misconception. Your state of registration is almost irrelevant. What matters is where your employees and customers are.
A business registered in Florida with no Florida employees:
- Serves California consumers online → subject to California ADMT regulations
- Has a remote employee based in Illinois → subject to Illinois HB 3773 for any AI used in their performance management
- Sells insurance products to Colorado residents → subject to Colorado AI Act
This is why a multi-state compliance framework — one that maps your operations, customers, and employees against every relevant state's requirements — is more useful than a single-state policy.
Federal AI legislation: where things stand
As of early 2026, federal AI legislation is in active discussion in Congress. The most prominent proposal is the America AI Act, a bipartisan Senate bill aimed at establishing a unified federal AI governance framework that would preempt conflicting state laws.
The bill draws heavily from the governance principles already established in Colorado and California — impact assessments, transparency obligations, and anti-discrimination provisions. This is significant: businesses that build their compliance framework around Colorado and California principles now are well-positioned for the federal framework whenever it passes. They won't need to rebuild — they'll need to adjust.
For businesses waiting for federal legislation before acting: by the time federal law passes and comes into force, state laws will have been enforceable for years. Waiting is not a strategy.
What a US AI compliance framework needs to cover
A practical US AI compliance position for 2026 consists of layered documentation that addresses the key obligations across all active regimes:
- AI Acceptable Use Policy — your organisation's rules for how AI may and may not be used, written to the strictest applicable state standard so it holds as a baseline across all 50 states
- Employee AI guidelines — specific guidance for staff on what AI tools are approved, how they should be used, and what decisions must involve human review
- State obligations assessment — a documented analysis of which state laws apply to your operations, customers, and employees, and what each requires
- Employment AI disclosure notices — Illinois-compliant written notices for any AI used in hiring, performance, or employment decisions
- Consumer-facing AI disclosures — transparency statements for California and Colorado customers who are subject to AI-assisted decisions
- Algorithmic discrimination prevention procedures — documentation of how you monitor and mitigate bias in AI systems, covering Texas RAIGA obligations
- Impact assessment templates — Colorado AI Act annual assessment structure, applicable to any high-risk AI use
- Incident and audit trail procedures — logging, review, and incident response for AI systems in consequential use
Who is actually at risk right now?
In our experience, the businesses most exposed in 2026 are:
- HR teams using AI for hiring — even basic CV screening tools create Illinois obligations the moment you have an Illinois-based employee or applicant. Most HR professionals don't know this.
- SaaS businesses with California customers — if your product makes decisions about California consumers using their data, ADMT opt-out and disclosure rights are likely already triggered.
- Financial services businesses using AI risk models — Colorado and Texas both target exactly this use case. Credit scoring, insurance pricing, and lending decisions using AI all carry specific obligations.
- UK and European businesses with US staff — remote work means many non-US businesses have US-based employees they've never thought about from a state law perspective.
What to do right now
Three steps you can take immediately:
- Map your exposure. List every state where you have employees and every state where you serve meaningful numbers of customers. Cross-reference against California, Colorado, Texas, and Illinois.
- Audit your AI tools. List every AI system you use and identify whether it touches employment decisions, consumer decisions, or financial decisions. If it does, it almost certainly creates obligations under at least one state law.
- Build your documentation. Even a baseline Acceptable Use Policy and state obligations summary protects you and demonstrates good-faith compliance. For employment AI, write the Illinois notices now.
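The cross-referencing in step 1 can be sketched as a simple lookup. This is a deliberately simplified illustration, not legal analysis: the state codes, trigger descriptions, and the `map_exposure` helper are all hypothetical, and each law's actual scope tests (employee counts, revenue thresholds, decision types) are far more nuanced than a state-code match.

```python
# Illustrative sketch of step 1: map the states where you have employees or
# customers against the four live regimes discussed above. Trigger conditions
# are heavily simplified -- real scoping needs a proper legal review.

EMPLOYEE_TRIGGERS = {
    "IL": "Illinois HB 3773 -- written notice before AI-assisted employment decisions",
    "CO": "Colorado AI Act -- deployer duties where AI affects employment decisions",
    "TX": "Texas RAIGA -- audit trails and assessments for consequential decisions",
    "CA": "California ADMT -- rights over significant automated decisions",
}

CUSTOMER_TRIGGERS = {
    "CA": "California ADMT / SB 942 -- opt-out rights and AI-content disclosure",
    "CO": "Colorado AI Act -- consumer notice and opt-out for consequential decisions",
    "TX": "Texas RAIGA -- consumer notice when AI materially affects a decision",
}

def map_exposure(employee_states, customer_states):
    """Return a list of potentially applicable obligations to investigate."""
    obligations = []
    for state in sorted(set(employee_states)):
        if state in EMPLOYEE_TRIGGERS:
            obligations.append(f"[employees in {state}] {EMPLOYEE_TRIGGERS[state]}")
    for state in sorted(set(customer_states)):
        if state in CUSTOMER_TRIGGERS:
            obligations.append(f"[customers in {state}] {CUSTOMER_TRIGGERS[state]}")
    return obligations

# Example: the Florida-registered business above -- one remote employee in
# Illinois, online customers in California, Colorado, and Florida.
for item in map_exposure(["IL"], ["CA", "CO", "FL"]):
    print(item)
```

Even at this level of simplification, the exercise surfaces the core point: exposure is driven by where people are, not where the company is registered.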
The compliance landscape will only get more complex as more states enact AI laws through 2026 and 2027. The businesses that build a scalable compliance framework now are the ones that adapt rather than scramble each time a new law passes.
Need your US AI compliance framework built?
We scope your multi-state obligations, build your documentation, and keep it current as new laws are enacted. Packages from £197 (~$250). Multi-state compliance from £797 (~$1,010).
See the US AI Compliance Packages →