Survey: 36% of Researchers Fear ‘Nuclear Level’ AI Catastrophe
In a survey conducted by Stanford University, 36% of researchers said they believe Artificial Intelligence (AI) could lead to a “nuclear-level catastrophe,” underscoring concerns in the sector about the risks posed by rapidly advancing technology.
Facts
- In a survey conducted by Stanford University, 36% of researchers said they believe Artificial Intelligence (AI) could lead to a “nuclear-level catastrophe,” underscoring concerns in the sector about the risks posed by rapidly advancing technology.1
- Stanford’s 2023 Artificial Intelligence Index Report, which was conducted by researchers from three different universities, asked participants to agree or disagree with the statement, “It is possible that decisions made by AI or machine learning systems could cause a catastrophe this century that is at least as bad as an all-out nuclear war.”2
- In addition, the report found that 73% of researchers in natural language processing, the branch of AI focused on enabling computers to understand and generate human language, believe the technology could soon spark “revolutionary societal change.” Although an overwhelming majority of researchers believe AI's future net impact will be positive, concerns remain that its capabilities will advance faster than humans can manage them.3
- According to the nonprofit AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) database, controversial incidents involving AI have increased 26-fold since 2012, including 2022 deepfake videos of Ukrainian President Volodymyr Zelenskyy surrendering and US prisons using call-monitoring technology on inmates.4
- According to 57% of researchers surveyed, AI is developing quickly, and research is advancing from generative AI toward creating Artificial General Intelligence (AGI). AGI refers to an AI system that can match or even outperform the capabilities of the human brain; there is little consensus on if and when AGI could be achieved.3
- Last month, SpaceX and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak were among more than one thousand signatories of an open letter from the Future of Life Institute calling for a six-month pause on training AI systems beyond the level of OpenAI’s chatbot GPT-4. The letter said, “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”5
Sources: 1. Al Jazeera, 2. FOX News, 3. Fortune, 4. Futurism, and 5. Future of Life Institute.
Narratives
- Narrative A, as provided by Futurism. Although only 41% of researchers believe AI should be regulated, the rest of this study’s results make clear that something has to be done. If one-third of researchers warn that AI could lead to a major catastrophe, and nearly three-quarters believe it could soon bring revolutionary societal change, then it’s time to take a pause and figure out how to avoid the dangerous outcomes AI could produce.
- Narrative B, as provided by Reuters. AI is the future, and pausing or trying to set back its development won't solve any problems. AI offers a revolutionary means to address some of the world's biggest challenges, including inequity and even climate change, and it must be kept on its current track. Rather than trying to rein it in, the tricky areas of the technology simply need to be identified so they can be improved while AI continues to develop at its current pace.