Mixed news for humans: artificial intelligence can be extremely helpful to internal auditors and boards by providing useful information to work from, but it doesn’t perform miracles of analysis.

This was one of many ideas to come out of the panel webcast discussion recorded in late March by the Institute of Internal Auditors in conjunction with the Governance Institute of Australia.

CSIRO futurist and AI specialist Rob Hanson, EY financial services director Charlie Puddicombe, and barrister and law lecturer Dr Philippa Ryan dived into the topic, offering a roughly equal mix of approbation and warnings: AI can do a lot of the heavy lifting in providing information to both internal auditors and company boards, if it is managed well.

Mr Hanson agreed that some directors are “uncomfortable” with elements of new technology before conceding that a better word might be “terrified”, even though some AI has been around since the 1950s.

He noted that CSIRO’s recently created Data61 division, where he works, is running training courses to explain the benefits of AI.

Far from being asked simple questions, “I get asked questions that are too technical and too advanced,” he said, suggesting there’s a danger of misunderstanding the basics. “Not in public, but behind closed doors,” he added.

Mr Puddicombe focused on the advantages for internal auditors, whom he advises, noting that it’s the correct balance of human and artificial intelligence that gets the best results.

He noted that concepts such as machine learning, natural language processing and robotics can easily be harnessed to speed up the collection of data by both external and internal auditors.

“AI can augment the human,” he noted, adding that it was particularly useful in taking over repetitive tasks. It can also help those auditors by validating information, he said, and converting data into useful information on which decisions can be based.

Meanwhile, boards can be shown important information requiring decisions much more quickly, he said. But he noted that human input is often still needed to actually analyse that information.

Another hot area in the discussion was biases affecting the value of information, starting with unconscious bias during the coding process.

It was noted that most coders were young males. Mr Puddicombe said there were more than 160 separate biases that had been identified as possibly influencing decision making by humans.

Dr Ryan said that boards need not be full of AI experts. They should be considered aggregators of information rather than aiming to have every member be multi-skilled, she said. “And if you want to eradicate these biases, have a really mixed board.”

She said she had recently been appointed to two boards specifically because of her knowledge of AI, including the automation of trust, blockchain technology and the accountability of algorithms.

That said, she pointed out that AI covers a huge range, starting with the likes of car windscreen wipers that turn on when it is raining and going up to machines that teach themselves.

The main point is to make sure you have the right domain in which AI can work most effectively, she said.

Picking an analogy, she said: “If I had an autonomous vehicle, for instance, I wouldn’t operate it outside a school where there are five-year-olds, because they behave unpredictably.”

“But it would be fine around the curtilage of an airport, where you have a lot of control.”

In a board situation, she advised, “don’t just buy the technology and assume it’s going to fit your business. And having bought it, that’s not the end of the discussion, it’s the beginning.”

Watch the free webcast: Artificial intelligence — Governance issues facing assurance providers and boards