The question your audit committee is about to start asking

Audit committees are starting to ask a director-level assurance question about AI. Here is why ISO 42001 is the first thing that gives a sensible answer to it.

By Paul Kennedy

There is a particular question that has started turning up in audit committee papers and non-executive briefings, and it’s worth getting ahead of. The question is some version of this: how do we know the AI tools we are using, and the AI tools our suppliers are using, are not going to embarrass us?

This is a different question from the ones most companies have been preparing to answer. It’s not “are we compliant with the EU AI Act,” although that matters. It’s not “do we have an AI policy,” although you need one. It is a question about director-level assurance, and it is coming from people whose job is to ask uncomfortable questions on behalf of shareholders and the board.

ISO 42001 matters here because it is the first framework that gives a sensible answer to that question.

From “trust us” to “here is the evidence”

Auditors and non-executives have been living with management systems standards for years. ISO 27001 has been their reference point for cyber assurance since long before most boards could spell ransomware. Not because the certificate itself proves security - it doesn’t, and any serious auditor will tell you so - but because the certification process forces the organisation to demonstrate, year after year, that it has thought about the right things and built controls around them. The conversation moves from “trust us” to “here is the evidence.”

AI has been missing that grammar. Most boards I speak to have an AI policy approved at some point in 2024, an AI use case register that may or may not be current, and a comforting sense that the risk team is on it. When the audit committee asks how AI risk is being managed, the answer tends to involve a lot of nouns - policy, training, governance committee, framework - and very few verbs.

ISO 42001 changes that conversation. It introduces specific, auditable obligations:

  • A documented AI policy approved at leadership level
  • An AI risk assessment process
  • An AI system impact assessment (the part most likely to surprise people the first time they meet it)
  • Controls covering everything from data quality and labelling to third-party AI services to post-deployment monitoring and incident response

These produce evidence. They give an audit committee something to look at other than a slide deck.

Two questions for the next meeting

The smartest move for an audit committee right now is, I think, to put two short questions on the agenda for the next meeting:

  1. What is our current AI inventory and risk picture, told in plain language?
  2. What is our plan, with a date attached, for getting to a certifiable AI management system?

Neither question requires technical depth to ask. Both will produce useful answers or, more revealingly, expose their absence.

For executive teams, the implication is uncomfortable but not complicated. The audit committee question is coming. You have a choice between getting ahead of it - which gives you twelve to eighteen months to do this properly and turn it into a commercial advantage - or scrambling to answer it on someone else’s timetable. The first option is meaningfully cheaper and produces a much better outcome. It is also, in my experience, the option fewer companies are taking, which is precisely why doing it now is worth something.
