
Study: Explicit Photos of Children Being Used to Train AI

by Improve the News Foundation
Image credit: Unsplash

Facts

  • A study from the Stanford Internet Observatory (SIO) has found over 3.2K images of suspected child sexual abuse in the artificial intelligence (AI) database LAION, which is used to train the leading AI image generator, Stable Diffusion. The photos have reportedly been used to create realistic images of fake children and even transform pictures of fully clothed real teens into nude photos.1
  • The study's findings counter previous claims that AI software creates child sex abuse imagery only by merging adult pornography with photos of real children. According to the study's authors, 'having possession of a LAION-5B data set populated even in late 2023 implies the possession of thousands of illegal images.'2
  • LAION said it 'has a zero-tolerance policy for illegal content' and has 'taken down the LAION datasets to ensure they are safe before republishing them.'3
  • While LAION said it temporarily removed its datasets and Stability AI — the maker of Stable Diffusion — said it's enforcing stricter filters, the report claimed an older version of Stable Diffusion, version 1.5, is still 'the most popular model for generating explicit imagery.' The report suggested 1.5 models 'should be deprecated and distribution ceased where feasible' if not safety-checked.2
  • Meanwhile, OpenAI, which makes DALL-E and ChatGPT, says it doesn't use LAION and blocks requests for sexual content involving children. Google was using LAION to create an image generator before it discovered 'pornography,' 'racist slurs,' and 'harmful social stereotypes.'4
  • This follows reports from Spain earlier this year that girls as young as 11 had their real faces used to create deepfake nude images. This has raised legal questions about when and how to prohibit the use of a person's face, even when the nudity itself is AI-generated.5

Sources: 1Stanford Digital Repository, 2Fast Company, 3PBS NewsHour, 4Associated Press and 5Euronews.

Narratives

  • Narrative A, as provided by Cointelegraph. While there is a long way to go, AI companies, including Stability AI, are working with governments and law enforcement across the world to rid the internet of harmful child abuse imagery. As AI grows in use and capability, the companies behind the technology have the tools and the ambition to keep their products safe while also offering their positive qualities for the public to use appropriately.
  • Narrative B, as provided by Vice. While private tech giants like Meta, OpenAI, and Google claim they've steered clear of Stability AI and its child abuse-plagued datasets, the fact is that LAION is an open-source software that allowed the public to catch flaws in its system. If the amount of child abuse found in LAION disturbs you, just imagine what's behind the closed-door datasets of these private companies.
  • Narrative C, as provided by BNN. This is just one example of the danger posed by the current AI race. AI products like the LAION dataset are being rushed to market in an attempt to fend off the competition, and the result in this case is that an internet-wide scrape of images was open-sourced without due diligence. This is just the tip of the iceberg unless more is done to regulate AI and cool down the race.
