OpenAI Says It Removed Russian, Chinese Disinformation Campaigns

Facts

  • In a report released on Thursday, OpenAI said it had taken down Russian, Chinese, Iranian, and Israeli influence campaigns that allegedly used its artificial intelligence (AI) tools to manipulate public opinion.1
  • The report claimed OpenAI's researchers banned accounts linked to five covert disinformation operations that used its generative AI models to spread multilingual propaganda on social media platforms, adding that none of them gained traction.2
  • Ben Nimmo, an OpenAI investigator, said online deception campaigns often used OpenAI's technology to share political content, adding that they didn't have an impact and 'still struggle to build an audience.'3
  • OpenAI said the deceptive content focused on multiple issues, including the Russia-Ukraine war, the Israel-Hamas conflict, and Chinese dissidents and foreign governments critical of China's government.4
  • Meta also reported removing a covert influence operation that it said likely used AI-generated comments praising Israel's actions in Gaza, posted on Facebook and Instagram in reply to posts by US news organizations and lawmakers.5
  • AI firms like OpenAI and Google are developing deepfake-detection technology, but its effectiveness remains unproven.6

Sources: 1NPR Online News, 2Guardian, 3New York Times, 4ft.com, 5Digital Watch Observatory and 6Washington Post.

Narratives

  • Narrative A, as provided by The Equation. AI enables the rapid, large-scale dissemination of false content, undermining trust in democratic institutions. Despite some state actions and federal efforts, there are no comprehensive laws to counter these threats. Policymakers must enforce regulations to label AI-generated content, protect voters, and ensure public involvement in AI policy decisions to safeguard democracy from these emerging risks.
  • Narrative B, as provided by ft.com. The world must avoid being overly restrictive in formulating AI regulations, as this could stifle innovation. A balanced, dynamic approach to assessing AI risks is key, examining new high-risk technology as needed. AI's potential for manipulating public opinion and violating copyright must of course be curbed, but nations must also harness its advantages and maximize its benefits for their people.

Predictions