There is no federal AI law in the United States. There may not be one for years. In its absence, individual states are legislating independently — and they are doing it fast. As of April 2026, Nebraska, Maryland, and Maine have passed new AI-specific laws. California, Minnesota, Hawaii, Oklahoma, Connecticut, Louisiana, Missouri, and Tennessee all have bills moving through committees right now. For any business operating across state lines, employing remote workers in multiple states, or serving US clients from outside the country, this creates a compliance challenge with no central answer.
This is not speculative. These are laws that are in force or within weeks of becoming law. The businesses most at risk are those that assume the absence of a federal law means the absence of legal obligation. That assumption is no longer safe.
What is already law
Three states passed new AI legislation in the second week of April 2026 alone. Here is what each requires, and who it affects.
Nebraska — Conversational AI Safety Act
Nebraska’s Conversational AI Safety Act is now in force. It applies to any operator of a chatbot or conversational AI system interacting with Nebraska residents and has two core requirements. First, AI systems must disclose their non-human status to users. You cannot deploy a customer-facing AI without making clear it is AI. Second, AI systems cannot claim to provide mental health care, therapy, or counselling. This is targeted directly at the growing use of AI in wellness apps, employee assistance programmes, and mental health platforms — but the scope of “claiming to provide mental health care” is broad enough to create risk for any AI that handles sensitive personal conversations without clear boundaries. If you use a chatbot on your website, in your product, or as part of a client-facing service, and any of your users are in Nebraska, the Act applies to you.
Maryland — Algorithmic pricing legislation
Maryland has passed legislation targeting algorithmic price-setting. The bill addresses AI systems that dynamically adjust pricing — tools common in e-commerce, SaaS, and service businesses that use AI to optimise revenue. The law creates new restrictions on how algorithmic pricing can be used and requires transparency around automated price decisions. If your business uses AI-driven pricing tools and you have Maryland customers, this is in scope.
Maine — AI therapy restriction
Maine has enacted a law restricting therapy and psychotherapy services to licensed professionals only, and making explicit that AI-based services cannot legally hold themselves out as therapy or psychotherapy. Like Nebraska’s law, this is targeted at the mental health AI sector, but the language is wide enough to affect any AI tool that handles emotional support conversations or positions itself as a mental wellness resource.
What is moving through committees now
The bills currently in committee represent where US state AI law will be in the next three to twelve months. The sectors attracting the most legislative attention are employment, healthcare, and consumer-facing AI systems.
Employment and hiring AI — California and Minnesota
California has advanced two separate automated decision systems bills covering employment. These would regulate the use of AI in hiring, performance management, and employment decisions — requiring transparency, human review rights, and impact assessments before AI tools are deployed in employment contexts. Minnesota’s employment automation bill has moved through multiple committees and is on a similar trajectory. Both states have large business populations and extraterritorial reach: if you employ workers in California or Minnesota, these laws will apply regardless of where your company is incorporated or headquartered.
Healthcare AI — California, Louisiana, Minnesota, Missouri
California has three healthcare AI bills moving simultaneously, covering advertising restrictions and clinical use of AI tools. Louisiana is advancing a bill that would require verbal consent before AI transcription can be used in healthcare settings. Minnesota is advancing psychotherapy AI rules, and Missouri is introducing mental health AI oversight requirements. For any business operating at the intersection of technology and health — HR platforms that include wellness features, insurance technology, occupational health services, or telehealth — this cluster of state laws represents a material compliance burden that is arriving quickly.
Chatbot disclosure — Hawaii, Oklahoma, Connecticut
Following Nebraska’s lead, Hawaii, Oklahoma, and Connecticut all have chatbot disclosure bills that have advanced through committees. The core requirement in each is consistent: AI-powered conversational systems must identify themselves as AI to users. Maine’s equivalent bill awaits Senate approval before an April deadline. If you operate a customer-facing AI in any of these states — which, for a national or international business, almost certainly means you do — chatbot disclosure is becoming a baseline legal requirement across the country.
Other categories — Missouri and Tennessee
Missouri has a bill advancing on disclosure requirements for AI-generated media — relevant for marketing agencies, content businesses, and any company producing AI-generated advertising or communications at scale. Tennessee has advanced AI personhood legislation, which has broader implications for how AI-generated outputs are treated legally — including in contracts and intellectual property contexts.
The sectors facing the most immediate exposure
Across this legislation, four types of business have the most concentrated risk:
HR consultancies, recruiters, and employers using AI in hiring. California and Minnesota are both moving employment AI bills that will require documented processes, transparency notices, and human review rights for candidates. If you use any AI tool that scores, ranks, or influences hiring decisions and you have employees or job applicants in these states, you need to map that exposure now — not when the bills pass.
Businesses using customer-facing chatbots. Nebraska is already in force. Four more states — Hawaii, Oklahoma, Connecticut, and Maine — are weeks away from passing equivalent disclosure requirements. If you deploy a chatbot on your website or in your product, the most cost-effective response is to implement AI disclosure universally rather than trying to geo-target compliance to individual states.
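In practice, universal disclosure can be as simple as opening every chatbot session with a standing notice, shown to all users regardless of location. A minimal sketch — the message text and function name here are illustrative, not drawn from any statute:

```python
# Illustrative only: a fixed disclosure shown to every user, in every state,
# instead of geo-targeting compliance to individual jurisdictions.
DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "This assistant does not provide medical, legal, or mental health advice."
)

def start_chat_session(first_bot_message: str) -> list[str]:
    """Open every session with the AI disclosure before the bot's first reply."""
    return [DISCLOSURE, first_bot_message]

session = start_chat_session("Hi! How can I help you today?")
```

The exact wording a statute requires will vary; the design point is that one universal notice is far cheaper to maintain than per-state logic.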
Health and wellness technology businesses. No sector is attracting more state-level AI legislation than healthcare and mental health. The combination of Nebraska, Maine, California, Louisiana, Minnesota, and Missouri creates a near-comprehensive patchwork of obligations for any AI tool that touches health, wellness, or emotional support. Many of these businesses have grown quickly on the assumption that AI wellness tools occupy a grey area. That grey area is rapidly being legislated away.
Canadian businesses with US clients, employees, or operations. Canada has its own federal framework in PIPEDA, and Quebec’s Law 25 introduced some of the strictest AI transparency obligations in North America. But Canadian businesses serving US customers or employing US-based remote workers are also in scope for state-level requirements. The cross-border picture — PIPEDA plus provincial law plus applicable US state laws — is the most complex compliance environment in the region, and the one most commonly under-mapped by growing businesses.
What the absence of a federal law actually means
The US Congress has not passed a comprehensive AI law. This is sometimes interpreted as meaning there is no legal framework to comply with. The reality is the opposite. The absence of federal pre-emption means state laws proliferate without a ceiling. A business operating nationally in the US does not face one AI law — it potentially faces fifty, each with different requirements, thresholds, and enforcement mechanisms.
The businesses navigating this most effectively are not waiting for a federal law to consolidate everything. They are treating AI governance as an internal capability rather than a compliance checkbox. That means an AI tool inventory, a documented acceptable use policy, and a process for assessing new tools before deployment — rather than retrofitting compliance after a law passes in a state where you already operate.
What to do right now
- Map where your customers and employees are. State AI laws are typically triggered by the location of the person affected, not the location of the business. If you have users, customers, or employees in Nebraska, California, Minnesota, Maryland, or Maine, those laws apply to you.
- Inventory every AI tool that interacts with people. Customer-facing chatbots, hiring tools, performance management software, pricing engines, and content generation tools all have specific regulatory exposure under current or pending state laws.
- Implement chatbot disclosure universally. Five states have passed or are passing chatbot disclosure requirements. The cost of implementing disclosure everywhere is far lower than the cost of geo-targeted compliance. Do it once, do it right.
- Prioritise employment AI. California and Minnesota are advancing the two most significant employment AI bills currently moving. If you use AI in hiring or performance management with workers in either state, this should be on your legal team’s radar immediately.
- Document everything. Every state AI law that has passed or is moving includes either transparency requirements, audit provisions, or human review rights. The businesses that are hardest hit at enforcement are those with no documentation of how their AI tools work or who reviewed deployment decisions.
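The mapping and inventory steps above amount to a cross-reference between your AI tools and the states that regulate each category. A minimal sketch in Python — the category-to-state table below is an illustrative summary of the legislation discussed in this article, not a legal database, and should be verified against current statutes:

```python
# Illustrative only: tool categories mapped to states with passed or pending
# laws discussed above. Not legal advice; verify before relying on it.
STATE_RULES = {
    "chatbot": {"NE", "HI", "OK", "CT", "ME"},       # disclosure laws and bills
    "hiring": {"CA", "MN"},                           # employment AI bills
    "pricing": {"MD"},                                # algorithmic pricing law
    "health": {"NE", "ME", "CA", "LA", "MN", "MO"},   # health / mental health AI
}

def exposure(tools: dict[str, str], footprint: set[str]) -> dict[str, set[str]]:
    """For each AI tool, return the states in your customer/employee footprint
    whose current or pending laws touch that tool's category."""
    report = {}
    for tool, category in tools.items():
        hits = STATE_RULES.get(category, set()) & footprint
        if hits:
            report[tool] = hits
    return report

# Example: a business with users in Nebraska, California, and Texas.
tools = {"support_bot": "chatbot", "cv_screener": "hiring"}
print(exposure(tools, {"NE", "CA", "TX"}))
```

Even a spreadsheet version of this table gives your legal team the starting point for the documentation obligations described above.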
Operating in the US market? Get your AI compliance position mapped.
We work with businesses across the US, Canada, and globally to build AI compliance frameworks that hold up across multiple jurisdictions. Our US AI Compliance Framework covers your tool inventory, state-law exposure mapping, chatbot disclosure requirements, and employment AI documentation obligations.
About the author: Scott Neve is the founder of Ops Intel, a Newcastle-based AI compliance and automation consultancy working with businesses across the UK, EU, US, and Canada. He specialises in practical AI compliance frameworks for professional services and growing SMBs. Learn more →