ISO 42001 for Accounting Software Users: Why Your Firm’s AI Governance Can’t Wait
The uncomfortable truth UK accounting firms still aren’t discussing:
Your Xero automation, Sage Copilot insights, and QuickBooks AI workflows are making decisions every day — but who is actually governing them?
If your practice uses AI-powered accounting software (and most do now), you are already operating an artificial intelligence management environment — whether you acknowledge it or not.
Your systems categorise transactions.
They flag anomalies.
They forecast cash flow.
They extract data from invoices and receipts.
Every one of those actions is driven by algorithms trained on historical data and probabilistic models. And those outputs influence client advice, compliance decisions, and professional judgement.
Yet governance has not kept pace with adoption.
Surveys regularly show very high AI usage among accountants — one widely cited figure claims 99% of accountants now use AI in some form — but governance maturity lags significantly behind usage. Across organisations more broadly, research consistently finds that while executives say they have “AI governance frameworks”, fewer than 25% report those frameworks are fully implemented and continuously reviewed.
That gap — between AI usage and AI governance — is exactly what ISO/IEC 42001 was created to address.
And for accounting firms, that gap represents professional, regulatory, and insurance risk.
What Exactly Is ISO 42001 (and Why Accountants Should Care)
ISO/IEC 42001:2023 — usually shortened to ISO 42001 — is the world’s first international standard for Artificial Intelligence Management Systems (AIMS).
It was published in December 2023 and provides a structured framework for:
- Governing AI across its full lifecycle
- Managing AI-related risks
- Assigning accountability and oversight
- Ensuring ethical, transparent, and lawful use of AI
Crucially, ISO 42001 is not a technical standard. It doesn’t tell you how to build AI models. It tells you how to govern AI use inside an organisation.
The easiest comparison is this:
ISO 27001 = how you protect information
ISO 42001 = how you control decisions made using AI
For accounting practices relying on platforms like Xero, Sage, QuickBooks, Dext, ApprovalMax, and AI-enabled audit tools, that distinction matters.
You may not be developing AI — but you are deploying it, feeding it data, and relying on its outputs.
ISO 42001 applies to organisations that use AI, not just those that build it.
The Accounting-Specific Risk: Professional Liability Has Moved
Here’s the question partners rarely ask out loud:
When an AI-assisted decision is wrong, who is professionally liable?
If an AI tool:
- Misclassifies revenue
- Flags the wrong transaction as suspicious
- Misses fraud
- Produces a misleading forecast
…the liability does not sit with “the algorithm”.
It sits with the firm that relied on it.
This is why accounting software vendors are moving first.
FloQast announced ISO 42001 certification in January 2025, explicitly framing it as a trust and governance issue in accounting.
BlackLine followed with certification in September 2025, emphasising AI control layers and governance assurance.
This isn’t box-ticking. It’s risk positioning.
Now consider this scenario:
Your firm uses AI-based anomaly detection as part of AML or audit procedures.
The model is trained on historical data.
It begins systematically flagging transactions from certain postcodes, sectors, or business profiles.
Without documented governance — including bias assessment, human review thresholds, and escalation procedures — you may be exposing clients to unfair outcomes and your firm to regulatory and reputational risk.
ISO 42001 doesn’t require you to access a vendor’s proprietary training data.
It does require you to document:
- How AI is used
- What data you provide to it
- How outputs are reviewed
- How risks (including bias) are assessed and managed
- How responsibility is assigned
For accounting firms, that’s not bureaucracy — it’s professional defensibility.
Why the Conversation Has Shifted: Governance Is Catching Up With Adoption
Back in 2024, most professional guidance focused on whether AI should be used at all.
The questions were basic:
- Is it secure?
- Does it leak data?
- Should staff be allowed to use ChatGPT?
That conversation is now outdated.
By mid-2025:
- The Financial Reporting Council (FRC) published detailed guidance on AI in audit, including documentation expectations for AI-enabled tools.
- ICAEW began publishing resources explicitly referencing ISO/IEC 42001 and BS ISO/IEC 42006 (the companion standard setting requirements for bodies that audit and certify AI management systems).
- Professional bodies shifted from “AI awareness” to AI accountability.
This reflects a simple reality:
AI governance is no longer optional for professional services firms.
Clients, insurers, and regulators are now asking how AI is controlled, not whether it’s used.
What the Big Four Already Understand
Deloitte’s 2025 State of Generative AI in the Enterprise report found that 38% of organisations cite regulatory compliance as the top barrier to AI adoption, a significant year-on-year increase.
Deloitte, EY, and others are actively positioning ISO 42001 as:
- A way to stay ahead of regulatory pressure
- A trust signal to clients
- A framework that aligns AI use with legal and ethical expectations
This isn’t altruism. It’s competitive positioning.
As clients begin asking:
- “How do you ensure your AI outputs are unbiased?”
- “Can you explain how your AI-assisted conclusions were reached?”
- “What happens if your AI is wrong?”
Firms without documented governance will struggle to answer convincingly.
The Six Core Domains of ISO 42001 (Translated for Accounting Firms)
ISO 42001 is a management system standard, not a technical manual. In practical terms, it focuses on six governance areas.
1. AI Risk Management
You must identify and manage risks across the AI lifecycle.
For accounting firms, this means documenting:
- Where AI is used in client workflows
- What could go wrong (misclassification, bias, drift, over-reliance)
- How errors are detected
- What happens when they occur
Example:
If AI categorises bank transactions, you define acceptable error rates, human review points, and incident handling when misclassification affects reporting.
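To make that example concrete, a human-review gate can be as simple as a confidence threshold. The sketch below is purely illustrative (the threshold value, transaction IDs, and categories are assumptions, not anything prescribed by ISO 42001):

```python
# Illustrative sketch only: a confidence-threshold gate that routes
# low-confidence AI transaction categorisations to human review.
# REVIEW_THRESHOLD and the example data are hypothetical assumptions.

REVIEW_THRESHOLD = 0.85  # below this confidence, a human must confirm

def route_categorisation(transaction_id: str, category: str, confidence: float) -> dict:
    """Return a routing decision for one AI-categorised transaction."""
    return {
        "transaction": transaction_id,
        "ai_category": category,
        "confidence": confidence,
        "needs_human_review": confidence < REVIEW_THRESHOLD,
    }

decisions = [
    route_categorisation("TXN-001", "Office Supplies", 0.97),
    route_categorisation("TXN-002", "Entertainment", 0.62),
]
flagged = [d for d in decisions if d["needs_human_review"]]
print(f"{len(flagged)} of {len(decisions)} transactions routed to human review")
```

The governance point is not the code itself but the documented decision behind it: someone in the firm chose the threshold, owns the review queue, and records what happens when a misclassification slips through.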
2. Transparency & Explainability
If AI influences client advice, you must be able to explain it.
Not in mathematical terms — but in professional ones.
If an AI-generated forecast or anomaly flag is challenged by a client, regulator, or court, your firm must be able to show:
- Why the output was relied upon
- How it was reviewed
- Who made the final decision
Explainability isn’t optional in accounting — it’s a professional obligation.
3. Data Governance
AI governance fails without data governance.
ISO 42001 requires clarity on:
- What data is provided to AI systems
- Where it originates
- How quality is assessed
- How privacy and confidentiality are protected
For accounting firms handling sensitive financial and personal data, this directly supports GDPR compliance and reduces accidental breaches.
4. Human Oversight
AI should not make consequential decisions in isolation.
ISO 42001 mandates:
- Clear roles and responsibilities
- Human-in-the-loop controls
- Escalation paths when AI outputs are uncertain or high-risk
Human review must be meaningful — not rubber-stamping.
5. Regulatory Alignment
ISO 42001 helps map AI use to legal obligations, including:
- UK GDPR
- Professional standards
- Sector-specific regulations
- Emerging AI regulation
The EU AI Act, which entered into force in 2024, applies on a phased timetable through 2027. UK firms advising EU clients or using EU-developed AI tools may still be affected. ISO 42001 provides a recognised operational framework to demonstrate compliance.
6. Continuous Improvement
AI systems change. Data drifts. Models update.
ISO 42001 requires ongoing monitoring of:
- Accuracy
- Bias indicators
- Performance degradation
- Oversight effectiveness
Governance is not a one-off exercise — it’s continuous.
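A minimal sketch of what ongoing monitoring can look like in practice. The baseline, tolerance, and sample figures below are illustrative assumptions, not values taken from the standard:

```python
# Illustrative sketch: compare an AI tool's recent accuracy against a
# baseline and flag degradation for governance review. The baseline and
# tolerance are hypothetical assumptions, not ISO 42001 requirements.

BASELINE_ACCURACY = 0.95   # accuracy measured at deployment sign-off
DRIFT_TOLERANCE = 0.05     # maximum acceptable drop before escalation

def check_for_drift(recent_correct: int, recent_total: int) -> dict:
    """Flag the model for review if accuracy has degraded beyond tolerance."""
    accuracy = recent_correct / recent_total
    degraded = (BASELINE_ACCURACY - accuracy) > DRIFT_TOLERANCE
    return {"accuracy": round(accuracy, 3), "escalate": degraded}

# e.g. 430 correct categorisations out of the last 500 reviewed
print(check_for_drift(recent_correct=430, recent_total=500))
```

Even a simple periodic check like this, run against a sample of human-reviewed outputs, gives a firm documented evidence that oversight is continuous rather than a one-off sign-off.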
The Questions ISO 42001 Forces Firms to Confront
Implementing AI governance surfaces uncomfortable but necessary questions:
- How do we know our AI isn’t biased?
- Who is accountable for AI decisions in our practice?
- What happens when AI outputs conflict with professional judgement?
- Could we explain our AI-assisted advice in court?
These are not hypothetical concerns. Litigation, regulatory scrutiny, and insurance assessments increasingly involve AI usage.
Without documented governance, firms are exposed.
Why “We Only Use Off-the-Shelf AI” Is Not a Defence
This is one of the most common objections — and one of the weakest.
ISO 42001 applies to users of AI, not just developers.
Even if you don’t build AI:
- You choose how it’s used
- You decide what data is fed into it
- You rely on its outputs
- You are accountable for outcomes
Governance still applies.
How ISO 42001 Fits With Existing Accounting Compliance
ISO 42001 does not replace your existing controls.
It complements them.
- ✓ ISO 27001 protects information
- ✓ Cyber Essentials hardens infrastructure
- ✓ GDPR governs personal data
- ✓ ISO 42001 governs AI-driven decisions
The management system structure is deliberately similar, making integrated implementation efficient.
The First-Mover Advantage (and Why It’s Temporary)
ISO 42001 is still relatively new. Most accounting firms haven’t engaged with it yet.
That creates a short window where early adopters can:
- Differentiate in tenders
- Reassure insurers
- Answer client AI questions confidently
But that advantage won’t last.
Cyber Essentials followed the same path — optional at first, then expected, then required.
AI governance is moving faster.
What Accounting Firms Should Do in the Next 90 Days
You don’t need certification tomorrow. But doing nothing is increasingly hard to justify.
Month 1
- Inventory AI use across the practice
- Identify governance gaps
- Review vendor contracts
Month 2
- Define AI ownership and oversight
- Draft AI usage and escalation policies
- Implement basic human review controls
Month 3
- Build a 12-month AI governance roadmap
- Decide on ISO 42001 readiness vs certification
- Budget and resource appropriately
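To make Month 1 concrete: an AI-use inventory can start as nothing more than a structured register, one entry per tool and workflow. The fields and the example entry below are an illustrative suggestion, not a prescribed format:

```python
# Illustrative sketch of a Month-1 AI-use inventory entry. The field
# names and the example values are assumptions for illustration only.
from dataclasses import dataclass, asdict

@dataclass
class AIUseRecord:
    system: str            # the vendor tool in use
    function: str          # what the AI actually does
    data_provided: str     # what client data it receives
    output_relied_on: str  # how the output feeds client work
    human_reviewer: str    # named role accountable for review
    risk_notes: str        # known failure modes or concerns

record = AIUseRecord(
    system="Dext",
    function="Invoice and receipt data extraction",
    data_provided="Client invoices and receipts",
    output_relied_on="Bookkeeping entries",
    human_reviewer="Client manager",
    risk_notes="Extraction errors on handwritten receipts",
)
print(asdict(record))
```

A spreadsheet works just as well; what matters is that every AI touchpoint in client workflows is recorded with a named human owner before you move on to policies in Month 2.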
Final Thought: The Risk Isn’t AI — It’s Ungoverned AI
AI is improving accounting.
The danger isn’t the technology.
The danger is using AI without documented oversight, without accountability, and without the ability to explain decisions when challenged.
ISO 42001 doesn’t stop innovation.
It makes AI defensible.
For accounting firms, trust is the product.
ISO 42001 turns “we’re careful with AI” into evidence.
Ready to Begin Your ISO 42001 Journey?
PPCS specialises in AI governance, ISO 27001, and Cyber Essentials for UK accounting firms.
Based in Fleet, Hampshire, we support practices across Surrey, Berkshire, and beyond with:
- ✓ AI Readiness Assessments
- ✓ Policy & Template Kits
- ✓ Risk & Impact Assessments
- ✓ Training Programmes
- ✓ ISO 42001 Certification Support
Call 0775 679 79 55
Related PPCS Resources:
- AIMS 42001 – AI Management System for SMEs & Accountants
- Xero + AI: Why Accounting Firms Need ISO 42001 Governance
- The Hidden Cyber Risks Inside Every Accounting Firm
- 5 Common Cybersecurity Mistakes Accounting Firms Make
- ISO 27001 Readiness for Accounting Practices
- Cyber Essentials Certification
- Making Tax Digital Cybersecurity Requirements
- 5 Ways Accounting Firms Accidentally Breach GDPR
- Sage AI & Copilot Security Risks
- How to Choose Accounting Software for Your SME
