Watchdog: AI Being Used to Spread Child Sexual Abuse Material
Image credit: Bill Hinton/Contributor/Moment Mobile ED via Getty Images

by Improve the News Foundation

Facts

  • The Internet Watch Foundation (IWF), a child safety watchdog, says pedophiles are using advances in AI to spread an increasing volume of child sexual abuse material (CSAM).1
  • The UK-based group released an update Monday building on its initial investigative report from October 2023, when it analyzed more than 20K AI-generated images posted to a dark-web CSAM forum in a one-month period.2
  • IWF's update found that AI was used to create over 3.5K CSAM images and videos that were posted in a 30-day span on the same forum, a 17% increase from last fall. IWF also reported that the nature of the content has become more explicit and extreme.3
  • Since October, AI has been used to generate deepfake CSAM videos, which take real abuse footage and superimpose the faces of real-life abuse victims and famous children. There has also been a progression toward fully synthetic videos.4
  • IWF made multiple policy recommendations to regulators and tech companies around the world, which have been working to crack down on the proliferation of AI-generated CSAM. This year, the US Dept. of Justice has charged at least one man with using AI to generate CSAM.3
  • The report concludes that the rapid progression of AI has increased the severity of CSAM content, instead of limiting its proliferation and protecting children. It adds that AI is being used to 'nudify' non-explicit images of children, which are then posted on social media.4

Sources: 1Guardian, 2IWF (a), 3NBC and 4IWF (b).

Narratives

  • Narrative A, as provided by Fast Company. The alarming rate at which AI can generate CSAM is yet another reason to be concerned about the technology. The efforts of government regulators and tech companies to stop the proliferation of this material have been insufficient so far. More must be done to stop this turbocharged crisis.
  • Narrative B, as provided by Forbes. When wielded with the right intentions, AI can be a great asset for child safety. With the power of AI, various websites and platforms can more easily identify material that may be inappropriate for minors and remove it immediately. In terms of CSAM, AI is being used to identify victims of child trafficking and abuse online and in real life. A nuanced perspective on AI shows it can also be used for good on a horrific issue like this.

Predictions


