Action on AI beyond the AI Action Summit
Last week, I led Microsoft’s delegation to the AI Action Summit in Paris. The Summit had a much broader focus than prior summits, covering the AI opportunity agenda as well as the future of work, culture and innovation, and the governance of AI. It was well attended by governments, civil society, industry, and academia, and we were pleased to engage with colleagues from every continent and across many different sectors.
In the spirit of the Summit’s commitment to action, rather than review the week’s activities, I want to focus on what needs to be done next. In my view, summits are defined as much by the momentum they generate afterward as by the conversations that take place on the summit days themselves. Here are three areas that warrant focus and attention in the weeks and months ahead:
1. Deepening scientific understanding and marshaling resources
Just ahead of the Summit, an independent group of experts published the world’s first scientific report on the capabilities and risks of advanced AI systems, following the release of an interim report in May 2024. The International AI Safety Report deserves close review by all stakeholders who wish to harness the power of advanced AI. It should compel us to invest further in research and development to close identified evidence gaps and develop more effective risk mitigations. We must also build a tight feedback loop between the synthesis of research that the International AI Safety Report represents, the applied learnings that will come from the efforts of the International Network of AI Safety Institutes, and the work of leading AI labs applying their own internal governance frameworks, such as Microsoft’s Frontier Governance Framework. Finally, we must build on the foundational investment that the UK Government, Report Chair Professor Yoshua Bengio, and the esteemed scientific contributors have made in the International AI Safety Report. We continue to support the UN as it advances its process for standing up the International Scientific Panel on AI in accordance with the Global Digital Compact.
2. Working toward streamlined reporting across borders
Over the past couple of years, we’ve seen the rapid development of global governance norms in the realm of AI. Yet a shared understanding of the practices by which we implement those norms has not kept pace, leaving stakeholders with critical questions about what best practice looks like. The Hiroshima AI Process (HAIP) Reporting Framework, released ahead of the AI Action Summit, provides the world’s first global framework for companies to voluntarily report on their efforts to promote safe, secure, and trustworthy AI by reference to the 11 actions of the HAIP Code of Conduct. Leading AI companies, including Anthropic, Google DeepMind, Microsoft, and OpenAI, have all signed up to provide inaugural reporting, a great step forward for building a common and centralized body of evidence for the community to draw on. It also represents a positive step toward streamlined expectations for reporting across borders, one that we are proud to be part of.
3. Supporting open-source tools for AI
Our longstanding efforts within Microsoft have taught us that tooling is essential to putting good governance into practice. I was delighted that Microsoft and GitHub joined other leading tech companies and philanthropists in supporting Robust Open Online Safety Tools (ROOST), a nonprofit launched at the AI Action Summit. ROOST will offer free, open-source tools to give organizations of all sizes the building blocks they need to embed good governance practices by design. The startups I engaged with in Paris were excited to learn about ROOST and the aligned trust and safety infrastructure workstream of the Current AI partnership. Because these and other ecosystem efforts need to evolve alongside advances in technology and societal expectations, we must all lean into their success by contributing knowledge, expertise, and resources and by making sure that tools get into the hands of innovators globally. One of the ways that we at Microsoft intend to do so is by working with our partners at G42 and MBZUAI to establish the Responsible AI Future Foundation, a center of excellence for good AI governance with a special focus on the unique needs of the Middle East and the Global South. This effort is designed to advance the shared goals of the Current AI and ROOST initiatives.
Continuing the momentum on these three lines of work will be key to the legacy of the AI Action Summit. At Microsoft, we look forward to doing our part to enable AI innovation and adoption through good governance.