UK Government’s AI Playbook - Core Principles for Audit and Risk Professionals

UK Government’s AI Playbook - Core Principles for Audit and Risk Professionals

Is your organisation truly prepared for the risks of Artificial Intelligence? 

If you’re unsure, you’re not alone.

Think of AI as a car built with cutting‐edge engineering, promising speed and innovation. However, without proper controls and maintenance, even the fastest car can become a road hazard. 

The 12th Annual Global Internal Audit Top Technology Risks Survey from Protiviti and The Institute of Internal Auditors Inc. reveals that 59% of audit leaders expect advanced AI systems to pose significant risks within the next two to three years. As AI continues to transform (and disrupt) industries, managing its risks is a challenge to all. The recently launched the Artificial Intelligence Playbook for the UK Government - which guides the safe, ethical and responsible use of AI - also offers valuable insights for internal audit and risk management professionals.


Why Does AI Governance Matter?

Article content
We all know that driving a car without a seat belt or brakes is a bad idea. While it can take you places quickly, without proper safety measures, you expose yourself to risks on every turn.

AI promises efficiency and innovation, but it also introduces new ethical, legal, and operational risks. The AI Playbook was created to help public sector teams harness AI’s power safely, effectively, and responsibly. It’s a guide to maximise AI’s benefits while minimising pitfalls. For internal auditors and risk managers, these same best practices can serve as a blueprint to evaluate and strengthen your organisation’s AI initiatives.

Before jumping into risks and controls, assurance professionals must grasp what AI can and cannot do. AI isn’t magic - it has strengths and critical weaknesses. The AI Playbook emphasises a cautious, informed approach: understand the tech, apply it lawfully and ethically, secure it against threats, keep humans in the loop, manage its entire lifecycle, and choose solutions wisely. Let’s unpack each of these core principles and see how they translate to effective AI risk management.


Principle 1 - Understanding AI and Its Limitations

Article content
A Victorian gentleman is encountering a car for the first time, marvelling at its ability to move without horses. Blown away by its smooth power and sleek design, he may imagine it fly or glide over water. The car however remains firmly bound to the road, its capabilities defined by solid engineering rather than fanciful dreams.

Successful AI governance starts with knowledge. Principle 1: “You know what AI is and what its limitations are.” In practice, this means ensuring stakeholders understand AI’s capabilities and its constraints. AI systems excel at pattern recognition and automation, but they lack true common sense or an understanding of the physical world. They can also get things wrong - there’s no guarantee AI outputs are accurate. Models can be tripped up by biased data, hallucinations or unusual scenarios.

For risk professionals, this principle is a reminder not to place blind trust in AI. Understand the tools and data behind your AI systems. Know that AI can amplify biases or make mistakes if not properly managed. For example, an AI-driven customer service chatbot might confidently give incorrect information if asked something outside its training data. An algorithmic loan approval system could inadvertently favour or exclude certain groups if historical bias lurks in its data. By educating management and audit teams on AI’s limits, you set realistic expectations and can design controls to catch errors. As the AI Playbook suggests, learn what AI can and cannot do, and put processes in place to test and validate its outputs. In short: knowledge is your first line of defence.


Principle 2 - With Great Power, Comes Great Responsibility

Article content
Think of using AI like driving a car on busy roads. You wouldn’t ignore the traffic laws, nor would you drive recklessly. Following the Highway Code ensures not only your safety but also that of others on the road.

Deploying AI isn’t just a technical project - it’s a compliance and ethics exercise. Principle 2: “You use AI lawfully, ethically and responsibly.” Any AI use must adhere to laws and regulations and uphold your organisation’s ethical standards. The AI Playbook urges teams to seek legal and data protection advice early. This is equally crucial for private companies: engage your legal, audit, compliance, and data privacy experts whenever you develop or adopt an AI system. Make sure you’re respecting data protection laws (like GDPR), intellectual property rights, and any industry-specific regulations.

Ethical AI use goes beyond legal minimums. It means proactively addressing issues like fairness, bias, transparency, and accountability. AI systems can inadvertently discriminate or cause harm if not checked.  A few years ago, a major tech company had to scrap an AI recruiting tool after discovering it favoured male candidates. The culprit was historical data reflecting past hiring biases. A robust ethical review process might have caught this earlier.

As an internal auditor, you can advocate for bias testing and impact assessments on AI models and ensure management has plans to mitigate any unintended consequences. The AI Playbook warns that models trained on historical data may display biases or produce harmful outputs. To counter this, define clear ethical guidelines for AI projects. Ask tough questions: Is our AI’s outcome fair to all groups? Would we be comfortable explaining its decisions to a customer or regulator? Build diverse teams to oversee AI development and include ethics checkpoints from design through deployment.

Using AI responsibly isn’t just about avoiding negative press - it’s about treating customers, employees, and stakeholders fairly, and upholding your company’s values.

 

Principle 3 - Security Considerations for AI Systems

Article content
You wouldn't leave your car unlocked while unattended. You would be protecting it with robust locks, an alarm system, and park it in a secure location. These measures prevent theft and tampering to defend against external threats.

In a world of growing cyber threats, AI security has become a top concern. Principle 3: “You know how to use AI securely.” AI systems introduce unique attack surfaces and vulnerabilities. For example, adversaries might try to corrupt an AI model by feeding it bad data or prompt a generative AI system to leak sensitive info. The AI Playbook highlights that organisations must ensure AI is safe and resilient to cyber attacks and comply with established cybersecurity standards. Different types of AI face different risks - from data poisoning and adversarial inputs to prompt injections and AI-powered phishing.

This principle translates to incorporating AI risks into your cybersecurity and IT risk assessments. Verify that AI developers are building in security controls: input validation to prevent malicious data, robust authentication to protect AI tools, and monitoring to detect anomalies. Technical safeguards such as encryption, access controls, and code reviews are as important for AI applications as for any software application, especially given AI’s potential to automate actions. If your company uses third-party AI services, diligence is needed to ensure the vendor follows strong security practices too. 

One key task is to educate colleagues that AI risk is also business risk. A compromised AI system could make incorrect decisions at scale or expose confidential data. Work closely with IT security teams to include AI scenarios in penetration testing and incident response plans. For instance, how would you respond if an AI model that handles financial transactions started behaving erratically due to a cyber attack? By planning for such scenarios, Internal Audit can help the organisation stay one step ahead of AI-driven threats.

 

Principle 4 - Keeping Humans in the Loop

Article content
Even the most sophisticated self-driving car still needs a skilled driver ready to take control during unexpected events. A vigilant human presence ensures that, when unforeseen challenges arise, the journey remains safe and on course.

No matter how advanced AI becomes, people must remain ultimately in charge. Principle 4: “You have meaningful human control at the right stages.” The AI Playbook insists on human oversight, especially for high-risk decisions influenced by AI. For the UK Government this means critical decisions like healthcare diagnosis or parole rulings should not be left to algorithms alone. For businesses, the same concept applies whenever AI is making or recommending impactful decisions - be it approving a loan, screening a job candidate, or flagging a fraud alert - there should be a human review or fall-back process.

Meaningful human control can take several forms. During AI development, humans should test the system end-to-end before it goes live, as the AI Playbook recommends. Once deployed, set up monitoring and periodic audits of the AI’s outputs. If an AI tool operates in real-time (like a chatbot responding to customers instantly), build in other checkpoints where humans can intervene - for example, a supervisor reviewing a sample of the chatbot’s conversations for quality and compliance.

A practical approach is to classify AI-driven decisions by risk level, as outlined for example in the EU AI Act. Low-risk, routine tasks (like auto-sorting emails) might be fine with minimal oversight. But higher-risk activities (like an AI flagging fraudulent transactions that could freeze a customer’s account) demand either pre-action human approval or prompt post-action review. Internal Audit can verify that these human-in-the-loop controls are defined and working.

Ask questions on audits: Are humans able to override or correct the AI when needed? Does the AI system make it easy to understand and explain its decisions? The goal is to avoid automation running unchecked. Humans and AI working together, each doing what they do best, leads to the safest outcomes.

 

Principle 5 - Managing the AI Lifecycle from Start to Finish

Article content
Imagine the life of a car - from its initial build in the factory, through regular servicing and MOT, to its eventual retirement. Every stage is carefully managed to maintain performance and safety, ensuring the vehicle remains reliable throughout its journey.

Another core theme is treating AI deployments with the same rigor as any product lifecycle. Principle 5: “You understand how to manage the full AI life cycle.” AI isn’t a set-and-forget endeavour; it requires ongoing care. The AI Playbook outlines that an AI system’s lifecycle includes setup and development, daily maintenance, updates, and eventually decommissioning, just like other tech. Organisations need plans for every phase: from choosing or building the model, to testing and deployment, to monitoring performance in production, through to retirement or replacement.

AI lifecycle management means ensuring that governance and controls follow the AI from cradle to grave. During development or procurement, insist on proper due diligence: was the model trained on appropriate data? Were ethical and security reviews done (as discussed above)? Next, during day-to-day operation, confirm there are resources to support the AI, e.g. a team assigned to monitor outputs, handle exceptions, retrain the model as data evolves, and fix issues. The AI Playbook specifically warns about potential drift and bias over time, and urges robust testing and monitoring to catch problems.

Consider implementing an AI inventory: a catalogue of all AI/ML models in use, their purpose, the data they rely on, and who is responsible for them. This helps in tracking updates and accountability. When an AI model is updated (say a new version or retraining with fresh data), change management controls should kick in - much like updating critical software in production would require testing and approvals. And when an AI system reaches end-of-life or is no longer needed, have a plan to retire it securely, ensuring any sensitive data is properly handled.

Internal Audit can play a key role by auditing each stage of this lifecycle. For example, an audit of the “AI development process” might check if risk assessments and testing were done before launch. A model maintenance audit could examine if performance metrics are reviewed and if the model is being recalibrated to prevent drift. By aligning audits to the AI lifecycle, you assure that governance isn’t just a one-time effort but a continuous process.

 

Principle 6 - Choosing the Right AI Technology for the Job

Article content
You wouldn’t drive a Lamborghini to the shops to pick up a pint of milk. Selecting a car for a specific journey is crucial; a sports car may be ideal for speed, while a van is better for heavy loads.

With the all hype around AI, it’s easy to assume every problem needs an AI solution - but that’s not always true. Principle 6: “You use the right tool for the job” encourages being open to AI solutions where they add value but also cautions that sometimes a non-AI or simpler approach may work better. This principle prevents teams from using AI just for the sake of innovation when a basic software or process change could solve the problem with less risk.

Assurance professionals should similarly evaluate when AI is appropriate and proportionate. This is essentially an AI use-case risk assessment. Before green lighting an AI project, ask: Is this the right technology for our goal? What are the risks vs. benefits? For instance, if you need to generate regular financial reports, a well-controlled reporting tool might be safer and more transparent than an experimental AI that “writes” reports. On the other hand, if you’re sifting through thousands of transactions to detect anomalies, AI (like machine learning anomaly detection) could be the right tool to enhance efficiency and accuracy.

The key is to align the technology with the business need and the risk appetite. Encourage a mindset of solution-agnostic problem solving: define the problem and requirements first, then determine if AI is the best fit. This principle reminds internal auditors to challenge projects that seem like technology in search of a problem. It also reminds risk managers to include “alternative solutions considered” in any AI project proposal. A flashy AI tool that isn’t well-suited to the task can introduce more vulnerabilities than it fixes.

 

Principle 7 - Be Collaborative

Article content
Imagine a modern car equipped with vehicle-to-vehicle communication. Your car can exchange real-time data with nearby vehicles and traffic systems - alerting you to traffic, incidents and weather conditions, using this information to optimise your route.

Organisations should foster a culture of openness and collaboration. "Principle 7: You are open and collaborative" requires that teams actively share information, insights, and challenges related to AI projects across all functions and with external partners where appropriate. By breaking down silos and encouraging cross‐functional dialogue, organisations can better identify risks and opportunities that might otherwise go unnoticed. Open collaboration enables diverse expertise to inform decision‐making and contributes to building a robust AI governance framework.

For internal audit and risk professionals, this means that risk oversight must not be conducted in isolation. While maintaining independence is crucial, it should never be mistaken for working in silos. Audits should complement and not replace collaborative efforts. By actively participating in existing forums, such as an AI community or governance forums, or even setting up new ones, auditors can share insights, identify emerging risks more effectively, and develop standardised controls across different departments.

Collaboration not only enriches the understanding of potential risks but reinforces the strength of Internal Audit by ensuring that its independent perspective is informed by diverse viewpoints and collective expertise. Additionally, this creates a perfect opportunity for assurance professionals to stay informed of the “Art of the Possible” and identify use cases they can leverage in future audits.

 

Principle 8 - You Work with Commercial Colleagues from the Start

Article content
Whether you're buying your first car or considering an upgrade, you’ll do your research first. Rather than rushing into a purchase, you first ask friends and family for their honest opinions, then check online reviews from trusted sources, and finally get your uncle, the family mechanic, to inspect it.

For government agencies, Principle 8 - "You work with commercial colleagues from the start" serves as a strategic directive. In a business context, this means engaging the people responsible for procurement, contracts, vendor management and budgeting early in the process. Many AI solutions involve third-party products or services, so early input from commercial and legal teams helps to set the right requirements and avoid costly issues later. By working together at the outset, the organisation can ensure that vendor selection, licensing, and contract terms for an AI system align with its objectives and risk tolerances.

For internal audit and risk teams, this principle is a reminder to verify that AI initiatives have had the right business and procurement involvement from their inception. Internal auditors should check whether the project engaged procurement and legal experts when scoping and sourcing the AI solution, especially if outside vendors or cloud services are in play. This includes reviewing if due diligence was performed on suppliers and if contracts include necessary protections - such as clauses on data protection, service levels, and liability for errors or bias.

 

Principle 9 - Building the Skills and Expertise to Implement AI

Article content
High-performance cars demand skilled operators - you wouldn’t hand the keys of a Formula 1 car to someone who has only just passed their driving test.

For successful AI adoption, organisations must ensure they possess the requisite technical and managerial capabilities. Principle 9: “You have the skills and expertise needed to implement and use AI solutions” underlines the importance of having the necessary skills and expertise to implement and use AI effectively. This principle is all about people. Organisations must have competent, trained personnel to develop, deploy and oversee AI systems. That could mean hiring data scientists and AI specialists, upskilling existing staff, or bringing in external consultants where appropriate. The goal is to ensure those working with AI truly understand the technology - its capabilities, its limitations, and how to manage it day-to-day. With knowledgeable people at the helm, AI tools are more likely to be used correctly and safely. Without them, even a sophisticated AI can become a liability or perform poorly.

Auditors should verify the presence of adequate AI talent is now a key part of project assurance and assess whether project teams have the right mix of expertise. For instance, checking that developers, data analysts and project managers involved in an AI initiative have suitable qualifications or training. If there are gaps, auditors will look for plans to fill them, such as training programmes or bringing in expert contractors. They might also confirm that staff who operate or interpret the AI system understand how it works and can recognise when it might be going wrong. Skill shortages should be treated as a risk factor. If the team lacks AI knowledge, the chances of errors, misuse or unforeseen problems increase.

 

Principle 10 - You Use These Principles Alongside Your Organisation’s Policies and Have the Right Assurance in Place

Article content
A responsible driver confirms that every document is in order - from a current MOT certificate to valid insurance - ensuring the car remains legally roadworthy.

The AI Playbook’s final principle - Principle 10: “You use these principles alongside your organisation’s policies and have the right assurance in place” - is about aligning AI efforts with your organisation’s existing policies and putting the right assurance measures in place. This means the ten principles should not stand alone as abstract ideals - they need to be embedded into the company’s own governance framework. Organisations are expected to integrate these AI principles into their internal policies, procedures and standards, ensuring consistency with how they manage other risks. In practice, that could involve updating corporate policies (for example, on data usage, model validation or ethics) to explicitly cover AI, and establishing oversight processes to enforce them.

Robust assurance mechanisms are also key. There should be regular reviews, audits or compliance checks to verify that AI systems continue to adhere to both the playbook’s principles and the organisation’s rules. AI governance becomes part of normal business governance, complete with accountability and verification. Internal Audit should confirm that organisational policies and frameworks have been adapted to include AI considerations, rather than treating AI as an exception outside normal controls. This might mean checking that there is an AI governance policy or committee in place, and that it aligns with these principles as well as industry regulations. AI-related risks should be listed and managed in the enterprise risk register, with clear oversight structures (such as an AI ethics board or a designated executive accountable for AI) providing regular scrutiny. By weaving AI into the fabric of the organisation’s policies and assurance activities, companies can manage AI with the same rigour and confidence as any other critical part of the business.

 

Why These Principles Matter To You

Article content

You might be thinking, “This all sounds sensible - but it’s aimed at government agencies. Why should I care about a government AI Playbook?” The answer: these principles are universal best practices for trustworthy AI. Whether you’re in a bank, an insurance firm, a retailer, or any industry now adopting AI, the challenges of bias, security, compliance, and oversight are the same. By internalising these ten principles, you gain a framework to assess and guide your company’s AI efforts.

For example, an Internal Audit department could use the AI Playbook’s principles as a checklist when reviewing a new AI-powered tool the business wants to implement. Does the team understand the model’s limitations, and have they tested its accuracy? Have legal/privacy risks been vetted? Are there security controls and human oversight in place? Is there a maintenance plan for the model’s lifecycle, and was AI chosen because it’s truly the best option? These questions map directly to Principles 1 through 10 and can uncover gaps before an AI project goes live.

Ultimate Takeaway: Internal audit and risk professionals should be consulted in AI initiatives. They must secure a permanent seat - and an influential voice - in AI governance steering committees or take the lead in establishing such forums if they don’t exist.

This post was updated on the 26th of March 2025 to refresh the images using OpenAI's latest 4o Image Generation model.

(All opinions expressed in this article are solely my own)

Alex Psarras - good work here! The AI ethics that Claude tries to differentiate with are one part of the jigsaw

Allister Richardson

Co-founder | Risk, Quant & Fintech | Headhunting & Research | Strategic Talent Insights | Recruitment Strategy & Advice

1mo

Thanks for sharing. Poorly governed AI will lead to a whole plethora of financial and non-financial risks. Embedding those principles is key. Easier said than done :)

Jamie Crossman-Smith

Making Innovation Simple | AI & ML, Data, and Internal Audit Expert | Visiting Enterprise Fellow, Manchester Metropolitan | Founder at Bloch.ai

1mo

I like this a lot Alex Psarras

To view or add a comment, sign in

More articles by Alex Psarras

Insights from the community

Others also viewed

Explore topics