ISO 42001 won't save you. Doing it properly might.

ISO 42001 is a useful instrument, but it is not a substitute for understanding what AI is doing inside your business. Here's how to tell which version of the programme you're actually running.

By Paul Kennedy

Last year I sat across from a Chief Risk Officer who told me, with the conviction of a man who’d just been handed a board paper to read, that ISO 42001 was going to solve their AI problem. I asked him what their AI problem was. He paused for a long moment. Then he said, “We don’t really know. That’s why we need the standard.”

This is the wrong way round.

ISO 42001 is a useful instrument, but it is not a substitute for understanding what AI is doing inside your business. It’s a management system - which is to say, a structured way of asking and answering questions about what you’re doing and why. It will not tell you whether your customer-service chatbot is making promises you can’t honour. It will not tell you that the underwriting model your team licensed last quarter has a fairness problem nobody flagged. It will not tell you that three different teams are paying for the same generative AI tool through their own credit cards.

What it will do, if it’s implemented seriously, is build the muscle in your organisation that finds those things out and decides what to do about them.

Two versions of the same standard

There is a significant gap opening up between two kinds of ISO 42001 programme, and it’s worth being honest about which one you’re running.

The first is the certificate-on-the-wall version. A small team in compliance writes some policies, the engineering team is told to read them, an external auditor visits for a few days, everyone exhales, and the marketing team puts the logo on the website. The standard is treated as paperwork. In two years, when an AI incident actually happens - and it will - the certificate offers no protection, because the management system it represents was never wired into how decisions actually got made.

The second version takes the standard at its word. It treats the AI impact assessment process as the most important new piece of governance machinery the organisation has built in a decade. It uses the Annex A controls as a working diagnostic for what’s missing rather than a tick-list to be satisfied. It puts AI risk on the same standing agenda as financial risk and cyber risk, not as a special-interest item run by an enthusiastic head of innovation. And it gets the right people in the room, which is not usually the people who wrote the policy.

The difference between the two versions is mostly executive attention. ISO 42001 will not, in itself, generate that attention. The companies getting this right are the ones whose senior leadership has already decided it matters - usually because they’ve had a near-miss, lost a deal over a procurement question they couldn’t answer, or watched a competitor get caught out.

The real question for CEOs and CFOs

If you’re a CEO or CFO reading this, the question to be asking is not whether you should go for ISO 42001 certification. That decision is becoming a foregone conclusion. Your clients, your insurers, your regulators will push you there inside eighteen months. The question is whether, when you do it, you will do the version that is worth doing.

If you’re not sure, here is a test. Ask your CIO or your Head of Data three things:

  1. How many AI systems are operating in our business today?
  2. Who owns each one?
  3. What could go wrong with each?

If the answer is on a slide deck somewhere, you’re probably in reasonable shape. If the answer comes with a long pause and a promise to come back to you, you don’t have an ISO 42001 problem yet. You have a visibility problem. A standard won’t fix that on its own. People will, and so will the process the standard quietly forces you to build.
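Those three questions amount to a minimal AI system register: an inventory, an owner, and a list of known failure modes for each system. As a sketch only, with hypothetical entries, here is what a passing answer looks like as data rather than as a long pause:

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    """One row in a minimal AI system register."""
    name: str
    owner: str                              # accountable person or team
    risks: list[str] = field(default_factory=list)  # what could go wrong

# Hypothetical entries, for illustration only.
register = [
    AISystem(
        name="customer-service chatbot",
        owner="Head of Customer Service",
        risks=["makes commitments the business cannot honour"],
    ),
    AISystem(
        name="licensed underwriting model",
        owner="Chief Underwriting Officer",
        risks=["unexamined fairness problem in licensed model"],
    ),
]

def gaps(systems: list[AISystem]) -> list[str]:
    """Names of systems that fail the three-question test:
    no owner recorded, or no known risks written down."""
    return [s.name for s in systems if not s.owner or not s.risks]
```

If `gaps(register)` comes back non-empty, that is the visibility problem in miniature: a system the business is running without an owner or a risk picture. None of this requires a standard; it requires someone to sit down and fill in the rows.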
