Good Security

ISO/IEC 42001:2023 Artificial Intelligence Management System

You shouldn't have to become an AI risk expert. We do that part.

ISO 42001 usually becomes relevant when a customer, regulator, or board asks for proof that the business is governing AI use the same way it governs any other source of risk.

This page helps when

  • A customer is asking how AI will be used with their data
  • The board wants AI governance before staff AI use becomes a public issue
  • Procurement is starting to reference ISO 42001 alignment in questionnaires

Best next move

Start with Assurance.

Use the scorecard for a fast benchmark, then move into a working session when this requirement is already affecting customers, insurers, procurement, or internal accountability.

Where This Starts To Hurt

The buyer moment that makes this standard urgent

The moment usually arrives when an enterprise buyer or board AI committee asks for the AI governance evidence pack before a vendor contract closes.

ISO 42001 usually becomes relevant when AI adoption has outrun the business's governance, and a customer, regulator, or board member starts asking for structured answers where there previously were none. It is the first international management-system standard built specifically for artificial intelligence, released in 2023 as AI moved from experiment to contract-relevant risk.

The standard follows the same shape as ISO 27001 — defined scope, risk-based controls, documented decisions, internal review, and continual improvement — but applied to AI-specific risks like fairness, transparency, accountability, data quality, and human oversight. That makes it the clearest framework available for showing AI is being governed as a managed function rather than something staff experiment with at their own pace.

For NZ businesses, ISO 42001 is less about chasing the certificate and more about having a defensible answer when a customer, regulator, or board member asks "how are you governing AI?" The answer usually moves from "we have a policy" to "we have a system that owns it, measures it, and reviews it."

What Starts Breaking

What stalls: deals, audits, or insurer renewals

ISO 42001 matters because AI governance has moved from theoretical to contractual. Large customers already expect suppliers handling their data with AI to demonstrate how fairness, privacy, accuracy, and human oversight are managed. Without a structured answer, AI adoption becomes a commercial liability instead of a commercial advantage.

It also matters because internal AI use is rarely as controlled as leadership assumes. Staff use AI tools for customer-facing work, hiring decisions, content creation, and analysis well before any formal approval exists. ISO 42001 is a practical way to bring that adoption back under governance without banning the tools outright.

For NZ organisations, ISO 42001 is increasingly the benchmark international buyers, enterprise customers, and regulators point to when they want proof that AI is being used responsibly. It is also aligned with the direction NZ and Australian regulators are signalling, so aligning early avoids retrofitting later.

What You Will Need To Prove

The first controls, owners, and evidence to put in place

The AI system inventory, AI impact assessment, and human-oversight controls carry the most audit weight at first, because they sit at the core of the AI management system the standard certifies.

See the main requirements
01

AI System Inventory

A documented view of every AI system and tool the business is using, developing, or embedding — including informal AI use staff have adopted without approval — with a clear risk classification for each.

02

AI Risk and Impact Assessment

Structured risk assessment for each AI use case, covering fairness, transparency, privacy, safety, accuracy, and accountability. This is the evidence customers and regulators actually want to see.

03

Governance Structure and Ownership

Named accountability for AI decisions at leadership level, with a working oversight body that makes approval calls on new AI use, not a signoff that sits unread.

04

Policy and Acceptable Use

Tailored AI use policies covering what staff can and cannot do with AI tools, how customer data can be exposed to AI systems, and what requires pre-approval.

05

Human Oversight and Review

Controls that keep humans in the loop where AI decisions affect people, customers, or regulated outcomes. Includes review cycles, appeal processes, and documented decision trails.

06

Monitoring and Continual Improvement

Ongoing measurement of AI performance, bias drift, and unintended outcomes, with a review cadence that adjusts use or retires systems when evidence requires it.
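The inventory in item 01 is, at its simplest, a structured record per tool: what it is, what data it touches, who approved it, and how risky it is. A minimal sketch of that record in Python follows; the field names, risk tiers, and example entries are illustrative assumptions for this page, not part of the ISO 42001 standard itself.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional


class RiskClass(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str
    touches_customer_data: bool
    approved_by: Optional[str]  # None flags informal adoption with no named approver
    risk: RiskClass


def unapproved(inventory: List[AISystem]) -> List[AISystem]:
    """Systems staff adopted without approval — usually the first gap to close."""
    return [s for s in inventory if s.approved_by is None]


# Hypothetical entries for illustration only.
inventory = [
    AISystem("ChatGPT", "OpenAI", "content drafting", False, None, RiskClass.MEDIUM),
    AISystem("Copilot", "Microsoft", "document drafting", True, "CTO", RiskClass.HIGH),
]

print([s.name for s in unapproved(inventory)])  # → ['ChatGPT']
```

Even a list this small tends to surface the decisions leadership needs to make: which tools get retrospective approval, which need controls added, and which get retired.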

Questions Before A Decision

The questions that come up before the contract

Do we need ISO 42001 if we do not build AI ourselves?

Usually yes. ISO 42001 covers AI use as well as AI development. Any business using AI tools that touch customer data, decisions, or outputs is expected to have governance in place. That includes widely adopted tools like ChatGPT, Copilot, and embedded AI features in SaaS products the business already pays for.

How does ISO 42001 differ from ISO 27001?

ISO 27001 is about information security management; ISO 42001 is about AI management. They share the same shape and roughly 30% of underlying controls, so work done for ISO 27001 accelerates ISO 42001 progress. The new material in ISO 42001 focuses on AI-specific risks — fairness, transparency, human oversight — that ISO 27001 does not cover.

What is the first step toward ISO 42001?

Start with an AI system inventory. Most businesses do not have a clear view of which AI tools are already in use, which data those tools touch, and who approved them. The inventory is the foundation for everything else and usually surfaces decisions leadership needs to make before the rest of the standard is meaningful.

Is ISO 42001 certification mandatory?

Not yet, but buyer pressure is moving fast. Large international customers, government buyers, and financial services procurement teams are already asking for ISO 42001 alignment or certification. Aligning to the standard now makes the business defensible whether it chooses to certify formally or simply uses the alignment as buyer-facing evidence.

Need a clearer answer on ISO 42001?

A working session scopes the AI management system, maps the model inventory, and produces the first AI impact assessment pack a buyer can review.