'AI Safety Clock' Launches
Facts
- Last month, the IMD Business School launched its AI Safety Clock with the goal of 'mak[ing] clear that the dangers of uncontrolled AGI [artificial general intelligence] are real and present,' according to its creator, IMD professor Michael Wade.[1]
- The clock, which currently sits at 29 minutes to midnight, purports to track three key factors: AI sophistication, autonomy, and physical integration. It draws on information from over 1K websites and nearly 3.5K news feeds.[2][3]
- IMD said the clock is 'inspired' by the 'Doomsday Clock,' which was first established by the Bulletin of the Atomic Scientists in 1947 to 'convey threats to humanity and the planet.' As of January 2024, the Doomsday Clock stands at 90 seconds to midnight.[4][5]
- Wade claims that while AI is currently 'largely under human control,' it is also 'already making gains,' including in autonomous military drones and social media bots. He's calling for a 'global approach to AI regulation.'[1]
- Earlier this year, the US, UK, and EU, among others, signed the first-ever legally binding international AI treaty. This followed other AI commitments since 2023, including the EU AI Act, the Bletchley Declaration, and a Group of Seven (G7) AI agreement.[6]
Sources: [1]Time, [2]Perplexity, [3]Good Men Project, [4]Bulletin of the Atomic Scientists, [5]IMD and [6]Verity.
Narratives
- Narrative A, as provided by Medium. The AI Safety Clock is a symbolic attempt to highlight the urgent need for domestic and international actors alike to wake up and acknowledge the real risks the new technology poses to the world. Nations must act now to ensure AI is aligned with our goals so that we can reap the multitude of benefits and avoid disaster.
- Narrative B, as provided by Gizmodo. This initiative is yet another example of sensationalist AI doomerism. Unlike the Doomsday Clock, IMD's AI Safety Clock has little scientific merit. Such fearmongering over a hypothetical future AI catastrophe only serves to distract from real, current AI challenges like resource consumption, labor exploitation, and algorithmic bias.