
OpenAI Says it Removed Russian, Chinese Disinformation Campaigns


by Improve the News Foundation

Facts

  • In a report released on Thursday, OpenAI said it had taken down Russian, Chinese, Iranian, and Israeli influence campaigns that allegedly used its artificial intelligence (AI) tools to manipulate public opinion.[1]
  • The report claimed OpenAI's researchers banned accounts linked to five covert disinformation operations that were using its generative AI models to spread multilingual propaganda on social media platforms, adding that none of them gained traction.[2]
  • Ben Nimmo, an OpenAI investigator, said online deception campaigns often used OpenAI's technology to share political content, adding that they didn't have an impact and 'still struggle to build an audience.'[3]
  • OpenAI said the deceptive content focused on multiple issues, including the Russia-Ukraine war, the Israel-Hamas conflict, and Chinese dissidents and foreign governments critical of China's government.[4]
  • Meta also reported nixing a covert influence operation that it claimed likely used AI-generated comments praising Israel's actions in Gaza on US news outlets' and lawmakers' posts on Facebook and Instagram.[5]
  • AI firms such as OpenAI and Google are developing deepfake-detection technology, but its effectiveness remains unproven.[6]

Sources: [1] NPR Online News, [2] Guardian, [3] New York Times, [4] ft.com, [5] Digital Watch Observatory, and [6] Washington Post.

Narratives

  • Narrative A, as provided by The Equation. AI enables the rapid, large-scale spread of false content, undermining trust in democratic institutions. Despite some state actions and federal efforts, there are no comprehensive laws to counteract these threats. Policymakers must enforce regulations to label AI-generated content, protect voters, and ensure public involvement in AI policy decisions to safeguard democracy from these emerging risks.
  • Narrative B, as provided by ft.com. The world must avoid being overly restrictive in formulating AI regulations, as this could stifle innovation. A balanced, dynamic approach to assessing AI risks is key, one that evaluates new high-risk technologies as they emerge. AI's potential for manipulating public opinion and violating copyright must certainly be curbed, but nations must also tap its advantages and maximize its benefits for their people.

Predictions

