Top AI researchers are sounding the alarm for immediate action on AI risks, warning that progress has been insufficient since the inaugural AI Safety Summit at Bletchley Park six months ago. Despite pledges by global leaders at that summit to govern AI responsibly, twenty-five leading AI scientists argue, ahead of the AI Safety Summit in Seoul (21-22 May), that not enough has been done to mitigate the technology's risks. In a consensus paper published in Science, they set out urgent policy priorities and urge global leaders to move from broad promises to specific, actionable commitments. Professor Philip Torr of the University of Oxford, a co-author, stressed the need to shift from vague proposals to concrete commitments, and the paper outlines key recommendations for both companies and governments.
The authors stress the urgency for global leaders to take seriously the possibility that highly advanced generalist AI systems, surpassing human abilities in many critical domains, could be developed within the next decade. Governments have begun discussing frontier AI and have introduced initial guidelines, but these fall short of the potentially transformative progress experts anticipate. AI safety research also remains marginal, with only an estimated 1-3% of AI publications focusing on safety, and there are no adequate mechanisms or institutions in place to prevent misuse or reckless use of AI, including autonomous systems capable of acting and pursuing goals independently.
An esteemed group of AI pioneers, including Geoffrey Hinton, Andrew Yao, Dawn Song, and the late Daniel Kahneman, has issued an urgent call to action. The group, drawn from the US, China, the EU, and the UK and including recipients of the Turing Award and the Nobel Prize, has for the first time reached consensus on global policy priorities for managing AI risk. They recommend establishing fast-acting, expert institutions for AI oversight, funded far more generously than current policy plans provide, pointing to the stark budget disparity between the US AI Safety Institute and the US Food and Drug Administration.
The consensus also advocates for rigorous, enforceable risk assessments and requires AI companies to prioritize safety by demonstrating that their systems cannot cause harm through "safety cases," similar to those used in aviation and other safety-critical industries. These measures place the burden of proving safety squarely on AI developers. The authors also call for adaptive policies that scale with the pace of AI development: tightening regulations if AI capabilities advance rapidly and relaxing them if progress stalls.
The paper underscores the need for governments to be prepared to lead the regulation of exceptionally capable future AI systems. This would include licensing AI development, restricting AI autonomy in key societal functions, halting development if worrying capabilities emerge, mandating access controls, and requiring security measures robust enough to withstand state-level cyber threats, until adequate protections are in place. Without stringent regulation, the unchecked advancement of AI could lead to catastrophic outcomes, including large-scale loss of life and significant environmental damage.
Stuart Russell OBE, a leading AI academic, emphasizes that the call for strict government regulation is not about stifling innovation but about ensuring safety in the face of rapid AI development. He criticizes how lightly AI companies are regulated compared with other industries, noting the absurdity that AI firms face fewer rules than sandwich shops. This consensus among leading experts marks a critical juncture in AI governance, underscoring the need for decisive and immediate action to guard against the profound risks posed by advanced AI technologies.
More information: Yoshua Bengio et al., Managing extreme AI risks amid rapid progress, Science (2024). DOI: 10.1126/science.adn0117
Journal information: Science
Provided by University of Oxford