Google Search Tweaks AI Overviews After Misleading Responses

Liz Reid, the head of Google's search team, confirmed Thursday in a blog post that the company would be improving its new AI Overviews feature after it gave misleading answers to some users...

by Improve the News Foundation

Facts

  • Liz Reid, the head of Google's search team, confirmed Thursday in a blog post that the company would be improving its new AI Overviews feature after it gave misleading answers to some users. [1]
  • The AI Overviews feature, which was rolled out to US users this month, places an AI-generated answer at the top of search results. Screenshots of the feature telling users to add glue to pizza and to eat one rock a day were shared on social media. [2]
  • In one instance, AI Overviews falsely claimed that former US Pres. Barack Obama is Muslim while citing an academic source that did not make that claim. Google has said that the answers provided are true the 'majority' of the time. [2]
  • In her post, Reid wrote that AI Overviews is 'highly effective' and as accurate as Google's snippets feature. She added that many of the screenshots circulating have been fake, and that only one in 7M searches produced a harmful result. [3]
  • According to Reid, search queries that were unique and lacked obvious answers produced strange results due to a 'data void,' with AI Overviews often turning to satirical or sarcastic sources for an answer. [3]
  • Following the improvements, AI Overviews won't be shown for sensitive search topics, including health, and will better detect 'nonsensical' queries. The software will also rely less on user-generated content when presenting answers. [4]

Sources: [1] Washington Post, [2] Euronews, [3] Google, and [4] Wired.

Narratives

  • Narrative A, as provided by MIT Technology Review. The technology behind AI Overviews has intrinsic flaws that make it unsuitable for wide release. While the technology is able to produce fluent language, it's less effective at producing accurate information. As long as large language models use probability to produce their text, they'll always run the risk of being misinformation machines.
  • Narrative B, as provided by Google. Sadly, Google was the victim, not the originator, of misinformation in this case. Satirical and fake screenshots claiming that AI Overviews endorsed dangerous and offensive claims were shared widely, tarnishing Google's reputation. Internal testing has shown that only small tweaks are needed to keep it safe, effective, and accurate.
