The UK will host a conference in San Francisco this November, gathering AI developers and experts to discuss how to implement commitments made at the AI Seoul Summit earlier this year.
The conference, set for 21-22 November, will focus on AI safety and risk management in the lead-up to the AI Action Summit, which will be held in France in February 2025.
The San Francisco event is designed to encourage AI developers to share insights and strategies for building robust AI safety frameworks. It follows agreements made at the AI Seoul Summit by 16 companies from around the world, including firms from the US, EU, Republic of Korea, China, and the UAE. These companies committed to publishing their AI safety plans and addressing the most severe risks associated with AI, including potential misuse by malicious actors.
Focus on AI Safety and Risk Mitigation
A key focus of the conference will be the development and refinement of AI safety frameworks by the signatory companies. These frameworks are intended to provide guidelines on how to manage the risks posed by advanced AI technologies, particularly in cases where the technology could be exploited or where the risks cannot be sufficiently controlled.
The event will provide a platform for researchers, developers, and policymakers to collaborate on refining these frameworks. This will include discussions on how companies plan to tackle safety risks and prevent the deployment of models that pose significant threats.
Science, Innovation, and Technology Secretary Peter Kyle emphasised the importance of the event, stating, “The conference is a clear sign of the UK’s ambition to further the shared global mission to design practical and effective approaches to AI safety. We’re just months away from the AI Action Summit, and the discussions in San Francisco will give companies a clear focus on where and how they can bolster their AI safety plans, building on the commitments they made in Seoul.”
Collaboration Ahead of the AI Action Summit
The upcoming conference will be co-hosted by the UK’s AI Safety Institute (AISI) and the Centre for the Governance of AI. It will offer participants the opportunity to engage in workshops and targeted discussions focused on improving AI safety practices. Attendees will also be invited to propose topics for discussion, such as developers’ safety plans, safety evaluations of AI models, transparency, and the setting of risk thresholds.
The UK’s AI Safety Institute, which was established in November 2023, is the world’s first state-backed organisation dedicated to AI safety. It has since played a leading role in the global network of AI Safety Institutes, collaborating with countries like the US to advance international cooperation on AI safety standards.
Building Global Collaboration on AI Safety
The conference will be held alongside a meeting of the International Network of AI Safety Institutes, hosted by the US on 20-21 November in San Francisco. That gathering will bring together technical experts from AI safety institutes around the world to align on priorities and foster greater global collaboration.
The US-UK collaboration has been a driving force behind the creation of a global network of AI safety bodies, with the UK’s AI Safety Institute serving as a model for other nations. The International Network aims to accelerate progress in AI safety by encouraging knowledge sharing and coordinating efforts to address the most pressing challenges posed by artificial intelligence.
The November events in San Francisco will mark a significant step forward in global efforts to manage the risks associated with AI and to ensure that developers are equipped to implement effective safety frameworks ahead of the AI Action Summit in 2025.
As AI continues to evolve, the discussions held at these meetings will be crucial in shaping the future of AI safety, helping to mitigate risks while fostering innovation in this rapidly advancing field. The UK remains at the forefront of these efforts, working with international partners to create a safer and more secure AI landscape.