Report: OpenAI Data Stolen in 2023 Hack


Facts

  • A report from the New York Times alleges that hackers accessed OpenAI's internal messaging platform in 2023. According to anonymous sources, the maker of ChatGPT determined that customer and employee data hadn't been compromised, so the breach wasn't disclosed to the public or to law enforcement.1
  • Sources say that the attackers stole details about the design of OpenAI's artificial intelligence (AI) technologies but weren't able to access the systems where its products are built or stored.2
  • OpenAI told employees that it did not believe foreign state actors were behind the breach. Leopold Aschenbrenner, a technical program manager at OpenAI, sent a memo to the board of directors, arguing that the company's defenses against espionage were lacking.3
  • AI products from companies such as OpenAI, Google, and Meta are released to the public with guardrails in place to prevent malicious use, such as producing disinformation. Daniela Amodei, co-founder of AI firm Anthropic, says that if its models were stolen, they could 'maybe' end up helping bad actors.1
  • In May, OpenAI said it disrupted five influence operations that were using its tools for 'deceptive activity.' Reuters reports that the Biden administration has preliminary plans to mandate stronger national security-focused guardrails in advanced AI models.2

Sources: 1New York Times, 2Reuters, and 3Fortune.

Narratives

  • Narrative A, as provided by wsj.com. AI companies are not taking the threat of espionage seriously enough. Foreign actors, China in particular, have stepped up their efforts to steal American AI technology for their own purposes, and these companies may have heeded the call too late. That this breach was not even reported to the authorities is a shocking reflection of how careless OpenAI can be in the face of a concerted effort to undermine the US.
  • Narrative B, as provided by OpenAI. Far from being lax about the security concerns AI raises, OpenAI has been proactive in the face of foreign governments trying to use its tools for malicious purposes. Disrupting an international influence network was only one piece of that effort, and it is wrong to stir up xenophobic paranoia over an issue that is already being addressed. It is unlikely that the exposure of model details will aid enemies of the US any time soon.

Predictions