Tech CEOs, AI Experts Warn of Existential Risk
AI experts and tech CEOs — including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and "AI Godfather" Geoffrey Hinton — issued a new warning on Tuesday about the severe risks AI poses to humanity, in a statement released by the Center for AI Safety.
Facts
- In a statement released by the Center for AI Safety on Tuesday, artificial intelligence (AI) experts and tech CEOs — including OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and "AI Godfather" Geoffrey Hinton — issued a new warning about the severe risks posed by AI to humanity, including extinction.1
- Signed by AI experts and public figures, the one-sentence Statement on AI Risk asserts, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."2
- This follows an open letter in April, signed by thousands of AI experts, calling for a six-month pause on developing systems more powerful than OpenAI's GPT-4 — citing potential risks to society and humanity — and for the establishment of a robust auditing and certification ecosystem.3
- Last week, Altman proposed setting up an international regulatory body for AI similar to the International Atomic Energy Agency (IAEA), noting it’s time to mitigate the risks of today’s technology and “start thinking about the governance of superintelligence.”4
- This comes after Altman, in Senate testimony two weeks ago, called on lawmakers to protect against AI's worst consequences, warning that if the technology trends in a negative direction, it could cause "significant harm to the world."5
- According to Stanford University’s Artificial Intelligence Index Report released last month, at least 36% of AI researchers agree that AI decisions could cause a catastrophe as severe as nuclear war this century.6
Sources: 1TechCrunch, 2Safe, 3Future of Life Institute, 4OpenAI, 5ABC News, and 6New Scientist.
Narratives
- Narrative A, as provided by Verge. These warnings should not be dismissed, as once AI systems reach a certain level of sophistication, it may become impossible to control their actions. By likening the threat posed by AI to that of nuclear war, these renowned AI experts want policymakers to focus on the technology's safety, which remains neglected. It is vital to pressure industry leaders and policymakers to establish guidelines and regulations for the responsible deployment of AI.
- Narrative B, as provided by Reuters. AI is the future, and trying to set back its development won't solve any problems. AI offers a revolutionary means to address some of the world's biggest challenges, including inequity and even climate change, and it must be kept on its current track. Rather than trying to rein it in, the tricky areas of the technology simply need to be identified so that work can be done to improve them while AI continues to develop at its current pace.
- Technoskeptic narrative, as provided by National Review. Artificial intelligence experts continue to overstate the technology's risks and engage in unrealistic fear-mongering. Although current technology is impressive, artificial general intelligence — the true concern — is still a long way off, if attainable at all. By consistently releasing warnings about far-fetched consequences, tech experts miss the mark and distract the public from debating the existing or near-term harms and economic realities of AI.