UK AI Summit: Big Tech to Allow Government Vetting of AI Products
Facts
- As part of the UK's first two-day Artificial Intelligence (AI) safety summit, Prime Minister Rishi Sunak announced Thursday that a number of technology companies had signed a voluntary document alongside 10 countries and the EU, allowing governments to safety-test next-generation AI models.1
- Eight companies — Amazon, Google, OpenAI, Meta, Microsoft, Inflection AI, Mistral AI, and Anthropic — signed the document, alongside the US, the UK, Canada, Australia, France, Germany, Italy, Japan, Korea, Singapore, and the EU.2
- According to Sunak, AI models will be tested by the UK's AI Safety Institute — a continuation of entrepreneur Ian Hogarth's Frontier AI Taskforce.3
- The UK's AI Safety Institute — chaired by Hogarth — is also set to work with the Alan Turing Institute, the US AI Safety Institute, and the government of Singapore.4
- Adherence to the deal is optional, with Sunak arguing that 'binding requirements,' while potentially necessary in the future, aren't currently needed, and that the priority for now is ensuring tech firms are not 'marking their own homework.'5
- The agreement followed the world's first international declaration on AI, titled The Bletchley Declaration, signed by 28 countries — including the US and China — warning of the 'potential for serious' and 'catastrophic' consequences from the technology and agreeing to build 'respective risk-based policies.'6
Sources: 1BBC News, 2Politico, 3UKTN, 4BusinessCloud, 5Independent, and 6Euronews.
Narratives
- Narrative A, as provided by Politico. Despite facing skepticism ahead of the event, the UK's AI summit was a success. Simply mediating conversations between the likes of the US and China is itself worthy of praise, let alone the signing of joint communications and declarations. The AI safety agreement is a landmark moment, giving governments the means to vet the latest technology in this fast-growing sector for safety.
- Narrative B, as provided by UKTN. The outcome of the UK's AI summit was nothing more than empty rhetoric, voluntary declarations, and self-promotion. Beyond placing far too little emphasis on AI's potential for good, the summit ignored the urgent need for international legislation. Despite the plaudits it will likely receive, the UK missed an opportunity to genuinely lead on regulation of the AI sector.
- Narrative C, as provided by City AM. The UK's summit leaned heavily on existential questions about doomsday AI predictions rather than the practical impact AI may have on everyday workplaces and labor. The UK has demonstrated its strength as a leading force in the sector; however, such discourse between political, academic, and commercial elites is currently too narrow and rests too heavily on the abstract threat of creating god-like intelligence.