How do you generate value from AI, responsibly? Thoughts from AI UK's 2025 conference
I recently had the privilege of attending AI UK, an annual conference hosted by The Alan Turing Institute that gathers thinkers and tinkerers across industry, academia and government to explore the current state, and future vision, of AI in different contexts.
The conversations were broad - from how to plug the growing AI skills gap across UK regions, to AI's wide-ranging applications in healthcare and defence, to how more balanced human-AI interaction might bring about more aligned outcomes that protect against an existential catastrophe(!).
I came with the intention to learn from AI thinkers and tinkerers across the UK, to see what lessons around responsible AI implementation we might take back to our clients.
A lot was covered (you can view the program here, and session recordings will eventually be on YouTube), and I'm still digesting it all.
Here are some of my thoughts.
Striking the right balance
As AI continues to grow in capability (and hype), so does its disruptive potential to organisations across the public and private sectors. Disruption brings change, and change brings discomfort. How we choose to respond to the change AI brings, to paraphrase comments from Jean Innes' opening talk, "will determine our course of history for the next 20-30 years".
How do we want to respond to AI? How do we, as humanity, want our society, our economy, our daily lives, to be?
This is the central question I saw as a theme throughout AI UK. The discussion of AI's impact upon people's lives attracts plenty of reluctant neo-luddite detractors, afraid of the power of technology and its unintended consequences. It also attracts many zealous techno-optimists, who believe AI will solve all our problems and allow us to live a utopian dream, living lives of leisure and abundance.
I think the right approach, as with most things, lies somewhere in the middle.
The reality is that AI is here to stay, and so we must engage with it. As one of my mentors taught me, "we cannot drown the ocean, so we must learn how to swim." But we must do so responsibly to avoid engineering our own downfall. AI can only serve to benefit us if we are willing to look at the entire system within which it operates, and to think through how we implement it in a given context. This is the general premise, and my own loose definition, of Responsible AI.
The UK's approach to Responsible AI
The subject of 'Responsible AI' (or Safe/Trustworthy/Ethical AI, depending on who you ask) was far more prominent at 2024's AI UK conference; it was less obvious this year. One reason for this is that it has become an assumed reality embedded into applied contexts, rather than a topic that needs explicit mention.
The main industries explored at the conference (in line with the Turing Institute's grand challenges) were defence & national security, healthcare, and climate sustainability - all areas where ethics and responsibility, naturally, play a key role. A parallel conference held at the same time, the Global AI Standards Summit, is another clear indicator of the growing interest in ensuring that AI is deployed in line with trustworthy benchmarks.
The geopolitical situation is complex. Whilst the keys of AI are still wielded by American technocrats (for now), the UK has an ambition to be a significant player in the development of AI, harmonising its innovative growth ambition with the thoughtful, responsible values embedded in its historically strong regulatory framework (e.g., GDPR). This is well exemplified in the recent AI Opportunities Action Plan (although it leans more towards techno-optimism).
Overall, I left the conference feeling hopeful that the UK is well-placed to lead the way with Responsible AI.
The business case for responsibility
In most of the talks and in my conversations, there was a mature middle ground: we have to get AI 'right', which means doing it responsibly. There are intelligent people in many industries who recognise that AI is here to stay, and who are thinking through how they can use AI to unlock value whilst minimising harm to people and planet.
This is not just utopian, hippy sentiment. Our experience at Slalom shows that businesses that think through their AI solutions more holistically are able to build beyond a proof-of-concept (which fewer than 10% of organisations manage to do) and deploy to production faster. Organisations with AI ethics embedded into their organisation are 27% more likely to outperform others in revenue growth. Businesses that invest in Responsible AI up-front are 3x more likely to realise a business benefit from their investment. And in a world where consumers are increasingly mistrustful of technology, 82% of consumers prefer brands that reflect their ethical values, and high-trust companies outperform low-trust ones by ~3x.
The reality is that deploying AI responsibly is not just the right thing to do, it's the logical thing. Being responsible makes business sense.
What does this mean for your business?
So, if you're in an organisation looking to unlock value from your data with AI, what are some tangible take-aways?
Here are three that might resonate.
First, know your why. Why does your business do what it does? How might it do that better? What might AI enable you to do better on that journey? Too many organisations start with the last question first, and so waste time developing tooling to solve a problem that doesn't exist. Ensure your AI strategy is intimately tied to your business outcomes so that you get from AI to ROI. (Many of us at Slalom are big fans of Simon Sinek, whose famous model, 'Start with Why', explains this well.)
Second, know your context. AI's capability to positively transform your business is more likely to become a reality if it is done thoughtfully, with deep knowledge of the applied context. Using AI as a sledgehammer because you've jumped on the hype-train is a waste of your investment and time. This is one of the key reasons why many AI solutions don't move past a proof-of-concept. Take the time to step back, understand the lay of your land, and develop a thoughtful strategy.
To know where you are going, you must know where you are.
Third, know your people. Your high-level AI strategy will only take shape if the people within your organisation are willing and able to get on board with you. To make them willing, you need to meet them where they are, understand their world, and tell a compelling story to convince them to follow you. To make them able, you need to equip them with the tools, training and time to explore. This is where the top-down meets the grassroots. Fail to do this, and your strategy won't move beyond your fancy slide deck. My lovely colleague Tim Bass recently wrote a great article related to this.
The organisations we see getting the most value from their Data & AI investments are those that think about the big picture from the outset. They align their top-level AI strategy with bottom-line business value. They take the time to engage and support executive leaders and front-line workers across the spectrum. They have a clear mapping from their corporate values, to their operating model, to their policies and standards, to their lines of code.
This kind of alignment takes time and thought. In an arena of hype, speed and 'progress', it can be easy to get caught up in the noise and not prioritise such an up-front investment. But organisations that have the courage to step back, take a breath, and ask why before ploughing on, are more likely to develop more sophisticated and robust AI solutions that deliver sustainable business value.
Bonus: things at the conference that caught my eye
Too many to list. Here are my top 5:
- Innovate UK has an interesting program, Bridge AI, focused on driving the adoption of responsible AI by 'bridging the gap' between innovation and implementation in high-growth industries.
- PRISM - The Partnership for Research Into Sentient Machines, a new organisation founded to think through how we understand 'conscious' machine intelligence - if such a thing can exist. I like their Open Letter.
- Prolific, a company that provides human data for AI research, gave an interesting talk on how diverse, representative human involvement with AI systems creates fairer, more aligned outcomes. The value of their work is evidenced by the PRISM alignment project.
- The AI Standards Hub held a parallel conference during AI UK. They have a great database of over 300 AI standards that's worth exploring.
- Responsible AI UK, a non-profit research organisation in the UK, are doing lots of interesting things in the industry-academia crossover. I picked up a deck of cards on 'Responsible Innovation', designed to facilitate client workshops on how we innovate ethically. Reach out to Virginia Portillo, who designed them, for more details.
In the end, it's about being human
One of my most memorable AI UK moments was when a senior national security official was asked about their favourite James Bond-esque "gadget". They responded (paraphrased):
"The human mind. The ability to think deeply and apply their knowledge in multi-modal context, whilst being guided by the heart."
Having been privileged enough to work with many intelligent people over the years, I could not agree more.
The work that unlocks real value from AI is essentially creative, empathic, intuitive and strategic. It's work that is, fundamentally, human. As AI becomes more 'intelligent', the need for such skills will only increase.
In the end, it's about being human. I don't see machines replacing this.
Slalom is a fiercely human, modern technology consultancy. We thoughtfully partner with our clients to journey towards better tomorrows, together.
If you'd like to know more about how we can support you on your AI transformation journey, get in touch.