Sample scenario
A 12-person UK accountancy firm using AI to screen client risk profiles, automate document review, and generate compliance summaries. Clients are UK and EU-based. No formal AI usage policy in place.
High Risk
Your business has significant AI compliance exposure
As an accountancy firm using AI to assist with client risk assessments and automated document processing, you fall under multiple overlapping regulatory frameworks. The EU AI Act lists AI used to assess financial risk or eligibility for financial services as high-risk, triggering conformity assessment and human oversight obligations. UK GDPR Article 22 restricts solely automated decision-making about individuals unless explicit safeguards are in place. With no AI usage policy to evidence compliance, your exposure includes fines of up to £17.5 million or 4% of global turnover under UK GDPR alone, with further penalties under the EU AI Act. Immediate action is required on several fronts.
Regulations That Apply to You
EU AI Act — High-Risk AI Systems
AI used to assess creditworthiness, financial risk, or eligibility for financial services is explicitly listed as high-risk under Annex III.
Obligations: Conformity assessment before deployment, mandatory human oversight, detailed technical documentation, registration in EU database, post-market monitoring.
Max penalty: €35 million or 7% of global annual turnover for prohibited practices; €15 million or 3% for breaches of high-risk system obligations
UK GDPR — Article 22 (Automated Decision-Making)
Article 22 applies to decisions with legal or similarly significant effects on identifiable individuals (clients, employees) that are made solely by automated means. ICO guidance makes clear that token human review does not take a decision outside Article 22: the review must be meaningful.
Obligations: Inform individuals of automated processing, provide meaningful human review on request, document the logic and significance of automated decisions.
Max penalty: £17.5 million or 4% of global annual turnover
FCA — Consumer Duty (PS22/9)
FCA-regulated firms using AI in client-facing assessments must evidence that AI outputs deliver good outcomes and do not cause foreseeable harm.
Obligations: Monitor AI outputs for bias and accuracy, document how AI supports consumer outcomes, maintain audit trail of AI-assisted decisions.
Sanctions: Enforcement action, fines, and potential review of authorisation
UK Equality Act 2010
AI systems trained on historical data can embed and amplify bias against protected characteristics. As the deploying firm, you bear liability for discriminatory outcomes.
Obligations: Assess AI tools for discriminatory outputs before use, maintain records of bias testing, ensure human override is available.
Sanctions: Unlimited compensation in Employment Tribunal; civil court awards
ICAEW / ACCA Professional Standards
Both bodies have issued guidance requiring members to understand and document AI tools used in professional work, particularly where outputs affect client decisions.
Obligations: Document AI tools in engagement records, ensure professional judgement is applied to all AI outputs, inform clients when AI is material to advice.
Sanctions: Disciplinary proceedings, suspension, removal from register
Colorado SB 21-169 (Insurers' Use of External Consumer Data)
Applies only to insurers operating in Colorado, USA. Not applicable to UK-based accountancy firms without Colorado operations.
Canada AIDA (Artificial Intelligence and Data Act)
Proposed as part of Bill C-27, which lapsed when the Canadian Parliament was prorogued in January 2025; not in force. Would have applied to AI systems used in international and interprovincial trade. Not applicable without Canadian operations.
Compliance Gaps Identified
- No AI usage policy — staff have no documented guidance on which AI tools are approved, how outputs must be reviewed, or what data can be processed. This is the single biggest risk factor.
- No conformity assessment completed for EU AI Act high-risk classification — required before the AI tool is used with EU clients, not after.
- Client privacy notices do not disclose automated processing or Article 22 rights — creating immediate GDPR breach exposure on every client engagement where AI is used.
- No bias or accuracy audit of AI tools — FCA Consumer Duty and Equality Act both require evidence that AI outputs are fair and do not produce discriminatory results.
- No record of which client engagements involved AI — without this audit trail, you cannot demonstrate compliance in the event of an ICO investigation or FCA review.
Recommended Actions
- Within 30 days: Draft and publish an AI usage policy covering approved tools, data handling rules, mandatory human review checkpoints, and client disclosure obligations.
- Within 30 days: Update all client privacy notices and engagement letters to disclose AI use and Article 22 rights. This is a legal obligation, not optional.
- Within 60 days: Complete a Data Protection Impact Assessment (DPIA) for each AI tool used in client-facing work. The ICO requires this for high-risk processing.
- Within 60 days: Commission or conduct a bias audit of your AI risk-assessment tool. Document the results. This satisfies both FCA Consumer Duty and Equality Act obligations.
- Ongoing: Maintain an AI register: a log of which AI tools are in use, what data they process, who approved them, and when they were last reviewed. This underpins your record-keeping and oversight duties as a deployer of high-risk AI under the EU AI Act (Article 26) and your accountability obligations under UK GDPR.
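The EU AI Act does not prescribe a format for deployer record-keeping, so the register can be as simple as a structured log. As an illustration only, here is a minimal Python sketch with hypothetical field names and example values (any tool names and dates shown are placeholders, not part of the assessment):

```python
from dataclasses import dataclass, asdict
from datetime import date
import csv
import io

@dataclass
class AIRegisterEntry:
    # Hypothetical fields for illustration; the EU AI Act does not
    # prescribe a specific register format for deployers.
    tool_name: str           # the AI tool in use
    purpose: str             # what the tool is used for
    data_categories: str     # personal data the tool processes
    approved_by: str         # who signed off on the tool
    approval_date: date
    last_reviewed: date
    human_review_step: str   # where a person checks the output

register = [
    AIRegisterEntry(
        tool_name="risk-screening-model",        # placeholder name
        purpose="Client risk profiling",
        data_categories="Client financial records",
        approved_by="Compliance partner",
        approval_date=date(2024, 1, 15),
        last_reviewed=date(2024, 6, 1),
        human_review_step="Partner sign-off before any client decision",
    ),
]

# Export to CSV so the register can be shared with auditors or regulators.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(asdict(register[0])))
writer.writeheader()
for entry in register:
    writer.writerow(asdict(entry))
print(buf.getvalue())
```

In practice a shared spreadsheet serves the same purpose; what matters is that every tool, the data it touches, its approver, and its last review date are captured in one place and kept current.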
Recommended for You
AI Compliance Package — UK Professional Services
Given your high-risk classification under the EU AI Act and immediate UK GDPR Article 22 exposure,
the AI Compliance Package covers everything you need: a bespoke AI usage policy, DPIA template,
updated privacy notices, staff guidance, and a 12-month compliance roadmap — all tailored to
FCA-regulated professional services.
View Package →
Get your own personalised report — free
7 questions. 30 seconds. A report specific to your sector, your AI tools, and your markets.
You'll also receive it by email so you can share it with your team.
Start Free Assessment →