UN Secretary-General Voices Support for Global AI Watchdog

Image credit: Wikimedia Commons

by Improve the News Foundation

Facts

  • On Monday, UN Secretary-General António Guterres stated he was 'favourable [sic] to the idea' that a global watchdog, similar to the International Atomic Energy Agency (IAEA), should be founded to monitor artificial intelligence (AI) development, a proposal put forward by AI industry leaders.[1]
  • The remarks came as Guterres spoke at the launch of a new UN disinformation policy, where he noted the potential risks AI poses to democracy and human rights.[1]
  • Noting that AI concerns 'are loudest from the developers who designed it,' Guterres also announced the formation of an advisory board on AI to offer recommendations on aligning AI with human rights, the rule of law, and the common good.[2]
  • This echoes sentiments from OpenAI CEO Sam Altman, who earlier in June stated that an international regulatory body was needed to mitigate the 'existential risk' AI poses. An agency such as the IAEA could only be created by member states, not unilaterally by the UN.[3]
  • Fearful of potential abuses by despotic regimes and by social media companies that value engagement 'before any other consideration,' Guterres hopes the forthcoming UN Code of Conduct for Information Integrity on Digital Platforms will establish basic principles for others to follow.[4]
  • Along with transparency from companies on their social media algorithms and government protections for dissent, the code of conduct will require 'safe, secure, responsible and ethical' uses of AI. Elsewhere, the EU is finalizing its landmark AI legislation, and the UK has planned an AI safety summit for this autumn.[4]

Sources: [1] Al Jazeera, [2] Reuters, [3] Associated Press, and [4] Fortune.

Narratives

  • Narrative A, as provided by Foreign Policy. The IAEA is not the model AI luminaries should follow if they are serious about the risks of artificial intelligence. Multilateral cooperation is a slow process and would not be able to respond effectively to technology moving at such breakneck speed. Indeed, nuclear armament increased dramatically in the first decade of the IAEA's existence. The onus for AI safety is on the developers themselves, who cannot shrug this burden off onto others. AI developers must work with each other and with governments to protect humanity from AI risks.
  • Narrative B, as provided by CNN. While international organizations are far from perfect, they are our best chance to get ahead of the worst consequences of unchecked AI development. With such fierce global competition in the digital world, a patchwork, country-by-country approach simply would not suffice. The risks of AI are comparable to those of nuclear war and infectious disease and could shake our world to its core. Countries around the world are treating this issue with the gravity it deserves as they get the ball rolling on international guidelines.
