Ten countries, along with the European Union, have agreed to establish the first international network of AI Safety Institutes, with the aim of accelerating the science of AI safety.
This agreement, formalised at the AI Seoul Summit, reflects a commitment to forge a common understanding of AI safety and align efforts on research, standards, and testing.
The “Seoul Statement of Intent toward International Cooperation on AI Safety Science” will unite publicly backed institutions, such as the UK’s AI Safety Institute, to strengthen the complementarity and interoperability of their AI safety work. The initiative aims to promote the secure and trustworthy development of AI by sharing information about models, monitoring AI-related incidents, and advancing the global understanding of AI safety science.
Leadership in AI Safety
The UK has played a pioneering role in AI safety, hosting the world’s first AI Safety Summit and establishing the first publicly backed AI Safety Institute. This leadership has spurred other nations, including the US, Japan, and Singapore, to establish their own AI Safety Institutes. The agreement to launch a global network of AI Safety Institutes builds upon this foundation, facilitating international progress in AI safety.
Prime Minister Rishi Sunak and Technology Secretary Michelle Donelan both welcomed the establishment of the AI Safety Institutes network.
“AI presents immense opportunities to transform our economy and solve our greatest challenges – but I have always been clear that this full potential can only be unlocked if we are able to grip the risks posed by this rapidly evolving, complex technology,” said Donelan.
“Ever since we convened the world at Bletchley last year, the UK has spearheaded the global movement on AI safety and when I announced the world’s first AI Safety Institute, other nations followed this call to arms by establishing their own. Capitalising on this leadership, collaboration with our overseas counterparts through a global network will be fundamental to making sure innovation in AI can continue with safety, security and trust at its core.”
Collaborative Efforts for AI Innovation
The agreement underscores the interconnected goals of safety, innovation, and inclusivity in AI development, and advocates embracing socio-cultural and linguistic diversity in AI models as part of inclusive AI innovation. Additionally, the “Frontier AI Safety Commitments” made by leading AI technology companies reinforce this cooperation: the companies will set thresholds for managing severe AI risks, with input from governments and AI Safety Institutes.
The establishment of the international network of AI Safety Institutes marks a significant step towards promoting responsible AI development globally. By fostering cooperation and sharing expertise, the network aims to ensure that AI advances human wellbeing and addresses the world’s greatest challenges in a trustworthy and responsible manner.