The UK government has launched a new research initiative aimed at ensuring the safe development and deployment of artificial intelligence (AI) technologies, while driving economic growth and improving public services.
Announced on 15 October 2024, the programme provides grants to researchers working to protect society from the potential risks posed by AI, such as deepfakes, misinformation, and cyber-attacks.
The initiative is delivered in partnership with the Engineering and Physical Sciences Research Council (EPSRC) and Innovate UK, both part of UK Research and Innovation (UKRI). It focuses on building resilience to AI-related threats and ensuring that the UK remains at the forefront of responsible AI development.
The programme seeks to strengthen public confidence in AI, which is seen as crucial to unlocking its potential for long-term economic growth. Ensuring trust in the technology is central to the government’s broader strategy of harnessing AI to increase productivity and modernise public services.
Boosting Public Confidence in AI
One of the programme's main aims is to ensure AI systems are safe and trustworthy at the point of delivery, especially as AI becomes more integrated into everyday life.
The Secretary of State for Science, Innovation and Technology, Peter Kyle, emphasised the importance of the initiative: “My focus is on speeding up the adoption of AI across the country so that we can kickstart growth and improve public services. Central to that plan though is boosting public trust in the innovations which are already delivering real change.”
Kyle highlighted how the grants programme will support research from industry and academia to ensure that AI systems are safe as they are rolled out across the economy. The research will focus on risks associated with AI technologies, such as deepfakes and unexpected system failures in critical sectors like finance, healthcare, and energy.
£4 Million in Initial Funding for AI Safety Projects
The programme, run by the UK’s AI Safety Institute, has launched its first phase of funding worth £4 million. Up to 20 projects will receive grants of up to £200,000 each. The fund, originally announced at the AI Seoul Summit in May 2024, has a total budget of £8.5 million, with more funding expected to be made available in future phases.
Applicants for the grants must submit proposals by 26 November 2024. The programme will evaluate projects based on their potential to address critical risks in AI deployment and their contribution to the safety of AI systems. Successful applicants will be confirmed by the end of January 2025, with the first round of grants to be awarded in February 2025.
Addressing AI Risks in Key Sectors
The AI Safety Institute’s focus is on systemic AI safety, particularly in the infrastructure and systems where AI is being deployed. The programme seeks to identify and mitigate the risks of using AI in critical sectors such as healthcare, energy, and finance, and to translate research findings into long-term solutions for tackling those risks.
Ian Hogarth, Chair of the AI Safety Institute, highlighted the importance of addressing the societal risks of AI: “This grants programme allows us to advance broader understanding on the emerging topic of systemic AI safety. It will focus on identifying and mitigating risks associated with AI deployment in specific sectors which could impact society, whether that’s in areas like deepfakes or the potential for AI systems to fail unexpectedly.”
Hogarth noted that the initiative will bring together researchers from a range of disciplines to build a deeper understanding of AI risks and develop evidence-based approaches to AI safety for the global public good.
Collaborative Approach to AI Safety
The AI Safety Institute aims to foster collaboration between UK-based organisations and international partners. The programme is open to UK applicants, with the possibility of including global partners in research projects. This collaborative approach is intended to strengthen international cooperation in AI safety and ensure a shared global understanding of the risks posed by AI.
The initiative is part of the UK government’s broader plan to introduce targeted legislation for companies developing powerful AI models. Rather than imposing blanket rules, the government aims to create proportionate regulation that keeps AI development both innovative and safe.
As AI becomes an integral part of critical sectors and public services, the research funded by this programme will play a key role in ensuring the technology’s safety and public trust. Through these efforts, the UK aims to lead the way in developing responsible AI that supports economic growth while safeguarding society from potential risks.