Thank you to the IIA-Australia AI Working Group for its work on the article below.
What Internal Audit (IA) should ask about Artificial Intelligence (AI)
Artificial Intelligence (AI) needs to be a 2024 focus area for all Chief Audit Executives (CAEs). We know that AI is a rapidly changing field of technology. But it is the speed with which its application is permeating all areas of organisations that warrants the attention of internal audit (IA).
From off-the-shelf no-code solutions like Microsoft’s Copilot to bespoke use cases, internal audit needs to understand:
- What to ask, of whom.
- The organisation’s IT and data governance maturity level ahead of adopting AI.
- How off-the-shelf no-code solutions (like Microsoft’s Copilot) may be used internally, and how the IA team will provide assurance over the organisation’s and its third-party providers’ AI activities.
- What governance, risk and compliance (GRC) and assurance look like in a bespoke-build AI environment.
Welcome to the first in a four-part series on practical AI guidance for IA, brought to you by the IIA-Australia AI Working Group. In this guide, we start with the questions every CAE should know the answers to (and if not, who to ask). There’s also an AI acronym cheat sheet and our pick of the best online AI resources.
Questions
Questions to ask the Board
- Will our organisation remain competitive if we don’t use AI?
- What does the Board consider to be the top 3 opportunities for AI in our industry?
- What should our organisation’s risk appetite for AI be?
Questions to ask the C-suite
- Where does the organisation sit on the maturity continuum, and do we have an organisational AI Strategy? Who is (or will be) responsible for our AI strategy?
- What type of strategic investments should our organisation be making in AI?
- How ready are we to deploy AI in a way that is Sustainable (high compute costs may impact our ESG goals), Scalable (delivers real value) and Secure (governance, legal and ethical considerations)? What other concerns do you have?
Questions to ask IT
- How do we know who is using AI within our organisation and in what capacity?
- For our third-party relationships, how does the use of AI fit within existing contracts in terms of data security and adherence to our cyber principles?
- Do we have adequate IT General Controls, Data Controls, Change Management and Software Development Lifecycle (SDLC) procedures around AI?
Questions I should ask myself as CAE
- Do we need to audit AI? If yes, do we have the appropriate skills, resources, tools and audit methodology to do this?
- Considering the Internal Audit function, how aligned are we with the organisation’s AI strategy/maturity and what are our capability gaps?
- How might we be able to use AI to enhance our internal audit effectiveness and efficiency?
Key AI acronyms
AI: Artificial intelligence is a field of computer science focused on creating intelligent machines capable of performing tasks that normally require human intelligence.
NLP: Natural language processing (the ability to read and extract information from text).
NLG: Natural language generation (the ability to write/create content).
NLU: Natural language understanding (the ability to understand meaning and intent).
LLM: Large language models are a category of foundation models trained on immense amounts of data, making them capable of understanding and generating natural language and other types of content to perform a wide range of tasks.
GenAI: Generative AI is a type of artificial intelligence that can create new content such as images, text, audio, or video based on the data it has been trained on, using components including large language models, transformer neural networks, and generative adversarial networks.
GLLM: Generalised large language models (for example, ChatGPT, Gemini, Llama, Bard, etc.).
Prompt engineering: the process of writing, refining and optimising inputs, or “prompts”, to encourage GenAI systems to create specific, high-quality outputs.
Synthetic data: data created artificially through computer simulation, or generated by algorithms, to take the place of real-world data (see the first sketch below for a simple illustration).
RAG: Retrieval-augmented generation is the process of optimising the output of a large language model so that it references an authoritative knowledge base outside its training data before generating a response (see the second sketch below).
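To make the synthetic data entry concrete, here is a minimal, illustrative Python sketch of how synthetic transaction records might be generated for testing audit analytics without exposing real data. The field names, value ranges and distributions are assumptions chosen for this example, not a prescribed approach.

```python
# Minimal illustrative sketch: generating synthetic transaction records
# for testing audit analytics without exposing real customer data.
# Field names, value ranges and distributions are assumptions for this example.
import random

def generate_synthetic_transactions(n=1000, seed=42):
    random.seed(seed)  # fixed seed so the synthetic data set is reproducible
    transactions = []
    for i in range(n):
        transactions.append({
            "transaction_id": f"TXN{i:06d}",
            "amount": round(random.lognormvariate(4, 1.2), 2),  # skewed amounts, like real spend
            "department": random.choice(["Finance", "HR", "IT", "Operations"]),
            "approved": random.random() > 0.02,  # ~2% unapproved records, useful for exception testing
        })
    return transactions

# Preview a handful of synthetic records
for row in generate_synthetic_transactions(5):
    print(row)
```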
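And, as a hedged illustration of the RAG pattern described above (not part of the original guidance), the sketch below shows the two steps involved: retrieve relevant passages from a trusted knowledge base, then pass them to the model alongside the question. The documents, keyword-overlap scoring and placeholder generate() function are simplified assumptions; a real implementation would typically use embeddings, vector search and a production LLM API.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# The knowledge base, scoring and generate() call are simplified assumptions.

knowledge_base = [
    "The AI risk appetite statement was approved by the Board in March.",
    "All third-party AI tools must pass a data security review before use.",
    "Internal audit reports are retained for seven years.",
]

def retrieve(question, documents, top_k=2):
    """Rank documents by naive keyword overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def generate(prompt):
    """Placeholder for a call to an LLM; here it simply returns the prompt for inspection."""
    return prompt

question = "What must third-party AI tools pass before use?"
context = "\n".join(retrieve(question, knowledge_base))
answer = generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer)
```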