US, UK, 16 Other Nations Ink Guidelines to Make AI 'Secure by Design'

Facts

  • 18 countries have signed a non-binding agreement advocating for artificial intelligence (AI) companies to create systems that are 'secure by design,' with the intention of preventing misuse that could harm public safety.1
  • The document was officially launched at the UK's National Cyber Security Centre (NCSC), with panelists from the Alan Turing Institute, Microsoft, and other cybersecurity organizations in attendance. The guidelines argue that potential AI threats must be considered 'holistically' alongside cybersecurity.2
  • The creation of the 'Guidelines for Secure AI System Development,' led by the NCSC as well as the US’ Cybersecurity and Infrastructure Security Agency, has been described by US Homeland Security Secretary Alejandro Mayorkas as 'provid[ing] a common sense path' to ensuring AI safety with cybersecurity as the main focus.3
  • The agreement advises signatories 'to raise the cyber security levels' within AI so that the technology can be 'designed, developed, and deployed securely.' Signatory countries beyond the US and UK include Australia, Canada, Germany, France, Italy, Japan, Norway, South Korea, the Czech Republic, Estonia, Poland, Chile, Israel, Nigeria, and Singapore.4
  • The 20-page document includes recommendations such as monitoring AI systems for cases of abuse, as well as protecting personal data from hackers by releasing AI models only after sufficient security testing has been completed.5
  • Lindy Cameron, chief executive of the UK's National Cyber Security Centre, stated that the new guidelines were a 'significant step' toward creating a 'truly global common understanding' of AI risk, ensuring security was not a 'postscript to development' but rather a 'core requirement.'6

Sources: 1Reuters, 2Verdict, 3Forbes, 4Silicon UK, 5Independent and 6Tech Monitor.

Narratives

  • Pro-establishment narrative, as provided by IBC. Governments and the international community have finally woken up to the dangers of AI and have set the wheels in motion to implement meaningful legislation. While many may worry that such action has come too late, it's imperative that the world acts before AI malpractice has a chance to unethically influence a series of potentially globe-changing political events in 2024. As the US, UK, Germany, and others move toward general elections, we must ensure that AI is safe and can only be used for good.
  • Establishment-critical narrative, as provided by Just Security. Current AI regulation proposals contain a host of problems that must be addressed. So far, there is no consensus on what should be designated a 'high' security risk, while the lack of binding legislation with enforcement mechanisms means that great trust is placed in company transparency. As different continents continue to diverge in how they approach AI risk, with no single regulator, the question of legislative limits on AI remains an unsolved global problem of extreme concern.

Predictions