Google to Require Disclosure of AI-Generated Political Ads


Facts

  • Beginning in November, roughly a year before the 2024 US presidential election, Google will require a prominent disclosure for any political ads on its services — such as YouTube — that include audio or images synthetically altered or created by artificial intelligence (AI).1
  • The rule, which will apply to 'all verified election advertisers,' will demand a 'clear and conspicuous' notice 'in a location where it is likely to be noticed by users,' Google said in its announcement on Wednesday.2
  • Disclosures won't be required for AI edits — such as cropping, resizing, or background adjustments — that don't create a realistic depiction of actual events. Ads that do require a disclosure but lack one will be blocked from running, or removed later if they initially evade detection.3
  • AI-generated ads have already appeared online, including one depicting a fake former Pres. Donald Trump resisting arrest and another showing a fake version of his wife, Melania, yelling at police. This June, the Ron DeSantis campaign posted a fake image of Trump hugging Anthony Fauci.2
  • The Republican National Committee in April also issued an AI-generated montage of photos meant to represent the future of the US under a re-elected Pres. Joe Biden, which showed boarded-up storefronts, armored military patrols in the streets, and waves of immigrants creating panic.1
  • The new Google policy updates its existing election ads rules, which already apply in regions outside the US, including Europe, India, and Brazil. Facebook, meanwhile, bans deepfakes and other manipulated media in non-advertising videos, though it doesn't require disclosures.3

Sources: 1ABC News, 2New York Post and 3Politico.

Narratives

  • Narrative A, as provided by Northwestern Now. Not only should tech companies be reviewing AI-generated images and videos, but the government should enact laws to prevent this insidious content from polarizing society even further. Until that happens, however, everyone must learn to carefully analyze the images they see online before sharing them.
  • Narrative B, as provided by the Institute for Free Speech. While Google isn't calling for an outright ban on deepfake images, requiring disclaimers is a slippery slope that could lead to forced labels on other content, such as satire and simpler forms of editing. Lying about what people said, and even doctoring images to fit a narrative, existed long before the advent of AI, so the mere arrival of a new technology doesn't mean we should lose our right to use it as we please.