
Compliance | 15 May 2026 | 6 min read

Five Critical AI Compliance Changes for Professional Services in 2026: What the ICO's New Code of Practice Means for Your Firm

The UK's AI regulatory landscape has shifted decisively in the first half of 2026. Two new pieces of legislation are now in force, enforcement fines have reached record levels, and the courts have handed down rulings that carry direct consequences for solicitors, accountants, HR consultancies, and marketing agencies alike. This is not a moment for a watching brief. It is a moment for structured action.

Here are the five changes your firm needs to understand — and what you should be doing about each one.


1. The Data (Use and Access) Act 2025 Has Reshuffled the Rules on Automated Decision-Making

The core data protection provisions of the Data (Use and Access) Act 2025 (DUAA) came into force on 5 February 2026. The headline change is a significant liberalisation of the UK's Automated Decision-Making (ADM) regime. Organisations may now take significant decisions based solely on automated processing of non-special-category data without first meeting the narrow conditions that previously applied.

However, this is not deregulation. The Act introduces mandatory safeguards in return. Chief among them is the right to "meaningful human involvement" — a phrase that carries real legal weight. Firms deploying AI in recruitment screening, client risk scoring, HR decisions, or any process that produces a consequential output about an individual must operationalise this right immediately. The ICO has been explicit: a human reviewer must hold genuine authority to alter or overturn an AI's output. A colleague who glances at a recommendation and waves it through does not satisfy the standard. Rubber-stamping is not oversight.
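One way to make that authority concrete is to treat the model's output as nothing more than an input to a separately recorded human decision, so the workflow cannot complete on the AI's say-so alone. The Python below is a minimal sketch of that idea, not a reference to any particular platform; the type names, the outcome values, and the empty-reasons check are our own illustration of what "meaningful" involvement might require in practice.

```python
from dataclasses import dataclass
from enum import Enum


class Outcome(Enum):
    UPHELD = "upheld"
    OVERTURNED = "overturned"
    ESCALATED = "escalated"


@dataclass
class AIRecommendation:
    subject_id: str
    recommendation: str  # e.g. "reject application"
    rationale: str       # shown to the reviewer, never hidden


@dataclass
class HumanDecision:
    reviewer_id: str
    outcome: Outcome
    reasons: str  # the reviewer's own reasoning, kept for audit


def finalise(rec: AIRecommendation, decision: HumanDecision) -> Outcome:
    """The AI output is only an input; the recorded human decision is final."""
    if not decision.reasons.strip():
        # A sign-off with no recorded reasoning is indistinguishable from
        # rubber-stamping, which the ICO has said is not oversight.
        raise ValueError(f"No substantive reasons recorded by {decision.reviewer_id}")
    return decision.outcome
```

The design point is that the reviewer's authority to overturn is structural, not advisory: the final outcome is whatever the human records, and a review without recorded reasoning fails outright.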

If your firm has not audited its ADM workflows since February, that audit is overdue.


2. A Binding ICO Code of Practice on AI Is Now a Legal Requirement

On 12 May 2026, the Data Protection Act 2018 (Code of Practice on Artificial Intelligence and Automated Decision-Making) Regulations 2026 came into force. This statutory instrument does something consequential: it legally compels the Information Commissioner to prepare and publish a binding Code of Practice on AI and ADM. This is no longer voluntary guidance that firms can treat as aspirational. Once issued, the Code will carry statutory authority.

The ICO is currently running an active consultation on updated ADM guidance, which closes on 29 May 2026. That consultation is a preview of what the binding Code will contain. Firms would be well advised to read it now rather than scrambling to comply once the Code is finalised.

The practical preparation required is clear: review your Data Protection Impact Assessments (DPIAs), particularly for any high-risk processing or processing that involves children's data. Your DPIAs should already reflect the DUAA safeguards. If they do not, close that gap before the Code arrives.
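If your DPIAs live as structured records rather than prose documents, finding those gaps can be automated. The snippet below is a minimal sketch in Python under that assumption; the checklist fields are our illustration of the DUAA safeguards, not an official ICO schema.

```python
# Illustrative DPIA gap check. The field names are an assumption about how a
# firm might record the DUAA safeguards; they are not a mandated format.
REQUIRED_SAFEGUARD_FIELDS = {
    "lawful_basis",
    "adm_in_scope",                  # does this processing make automated decisions?
    "meaningful_human_involvement",  # who can alter or overturn outputs, and how
    "childrens_data_assessed",       # children's data triggers heightened scrutiny
    "high_risk_justification",
}


def dpia_gaps(dpia: dict) -> set[str]:
    """Return the safeguard fields this DPIA record has not yet documented."""
    return {field for field in REQUIRED_SAFEGUARD_FIELDS if not dpia.get(field)}


# A pre-DUAA DPIA that never recorded who holds override authority:
legacy = {"lawful_basis": "legitimate interests", "adm_in_scope": True}
print(sorted(dpia_gaps(legacy)))  # the gaps to close before the Code arrives
```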


3. Enforcement Is Now Targeting the Entire AI Supply Chain — Including Your Vendors

Professional services firms frequently deploy AI through third-party platforms: case management tools, recruitment software, marketing automation, HR analytics. Many assume that data protection liability rests primarily with the data controller, i.e., themselves. Recent enforcement has complicated that assumption in both directions: processors now face direct liability of their own, and controllers face sharper scrutiny of the vendors they select and retain.

The ICO's £3.07 million fine against Advanced Computer Software established that data processors face direct financial liability for cybersecurity failings. The implication for firms is straightforward: your AI vendors can be held accountable, but so can you for selecting and retaining vendors with inadequate security controls. Contract terms that allocate liability clearly are no longer a nice-to-have negotiating point. They are a baseline expectation.

In the same period, the ICO issued a record £14.47 million fine to Reddit and a £247,590 penalty to Imgur for serious children's privacy failures. Those penalties confirmed that "self-declaration" age gates are legally insufficient. If any AI tool your firm uses relies on self-reported age data for access controls, that approach will not withstand regulatory scrutiny.

Conduct a formal review of your third-party AI vendors. Require evidence of appropriate technical and organisational security measures. Ensure data processing agreements allocate liability with precision.
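That review is easier to run consistently if every vendor is assessed against the same explicit criteria. The sketch below shows one hypothetical way to record those checks in Python; the vendor name and the specific fields are illustrative assumptions, not a prescribed due-diligence standard.

```python
from dataclasses import dataclass, field


@dataclass
class VendorAssessment:
    name: str
    dpa_signed: bool            # data processing agreement in place
    liability_allocated: bool   # DPA allocates liability with precision
    security_evidence: list[str] = field(default_factory=list)  # e.g. certifications, pen-test reports

    def open_issues(self) -> list[str]:
        issues = []
        if not self.dpa_signed:
            issues.append("no data processing agreement on file")
        if not self.liability_allocated:
            issues.append("DPA does not allocate liability clearly")
        if not self.security_evidence:
            issues.append("no evidence of technical and organisational measures")
        return issues


# A vendor with a signed DPA but vague liability terms still fails the review.
tool = VendorAssessment("recruitment-screening-tool", dpa_signed=True,
                        liability_allocated=False)
print(tool.open_issues())
```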


4. The Courts Are Sanctioning Professionals Who Misuse AI — and the Consequences Are Severe

Alongside regulatory enforcement, the courts have been issuing their own corrective signals. Following cases in which lawyers submitted AI-generated citations that turned out not to exist, including the widely noted Ayinde v Haringey matter, the judiciary has made its position unambiguous. Submitting unverified AI outputs in legal or professional proceedings can result in wasted costs orders, regulatory referrals, and professional negligence exposure.

This is not a problem confined to law firms. Accountants producing AI-assisted reports for submission to HMRC or clients, HR consultancies generating AI-drafted policies, and marketing agencies writing AI-assisted regulatory copy all face equivalent risks if human verification is treated as optional rather than essential.

The standard required is a genuine human-in-the-loop verification process. Someone with relevant professional knowledge must review AI outputs before they leave your firm. That review must be documented. The reviewer must be capable of identifying errors — which means they must understand what the AI was asked to do and what a correct output looks like.
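In systems terms, that documentation requirement can be enforced as a release gate: nothing leaves the firm without a verification record naming the reviewer, their competence, and what was checked. The Python below is a hedged sketch of that idea; the record fields are our assumption, not a format any regulator has prescribed.

```python
import datetime
from dataclasses import dataclass, field


@dataclass(frozen=True)
class VerificationRecord:
    output_ref: str           # which AI output was reviewed
    task_given_to_ai: str     # what the AI was asked to do
    reviewer: str
    reviewer_competence: str  # why this person can judge a correct output
    errors_found: tuple       # empty if none; kept for the audit trail
    approved: bool
    reviewed_at: datetime.datetime = field(
        default_factory=lambda: datetime.datetime.now(datetime.timezone.utc))


def release(output: str, record: VerificationRecord) -> str:
    """Gate every outbound AI-assisted document on a documented review."""
    if not record.reviewer_competence.strip():
        raise ValueError("Reviewer competence must be recorded, not assumed")
    if not record.approved:
        raise PermissionError(f"{record.output_ref} has not been approved for release")
    return output
```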

Build that process into your workflows now, before an error reaches a client or a court.


5. New Criminal Liability and the CMA's Focus on AI-Washing Add Further Exposure

Two further developments deserve attention, particularly for marketing agencies and any firm that publicly promotes its AI capabilities.

From 6 February 2026, the DUAA introduced a new criminal offence for the creation of non-consensual intimate deepfake images. The ICO has already opened a formal investigation into Grok AI concerning the generation of non-consensual sexualised imagery involving real individuals. Firms operating in content creation, digital marketing, or any adjacent field must ensure their AI tools and acceptable use policies explicitly prohibit this category of output.

Separately, the Competition and Markets Authority (CMA) has marked the one-year anniversary of its direct consumer enforcement powers by actively pursuing unsubstantiated "AI-washing" marketing claims. If your firm's website, proposals, or client communications describe AI capabilities that cannot be demonstrated, or frame AI-assisted outputs as something more sophisticated than they are, you are exposed. The CMA is looking for exactly that kind of misleading framing.

Review your marketing materials. If a claim about your AI capabilities cannot be substantiated with evidence, remove it.


What This Means for Your Firm

The regulatory picture in 2026 is more demanding, more specific, and more actively enforced than it was twelve months ago. The obligations are not abstract. They touch procurement decisions, staffing arrangements, client communications, contractual terms, and the daily workflows of everyone in your firm who uses an AI tool.

The firms that will manage this well are those that treat compliance as a structured operational function rather than a periodic box-ticking exercise.

Ops Intel works with UK professional services firms to build practical AI compliance frameworks that meet current regulatory requirements and are designed to adapt as the rules evolve. Whether you need a DPIA review, a vendor assessment, ADM workflow documentation, or a comprehensive compliance audit, we can help you move from uncertainty to confidence.

Get in touch with Ops Intel today to discuss what your firm needs.
