Report: OpenAI's GPT-4 Generates More Misinformation Than Predecessor
A new report released by NewsGuard says OpenAI's newest generative artificial intelligence (AI) tool, GPT-4, is more likely than its predecessor, GPT-3.5, to spread misinformation when prompted.
Facts
- A new report released by NewsGuard says OpenAI's newest generative artificial intelligence (AI) tool, GPT-4, is more likely than its predecessor, GPT-3.5, to spread misinformation when prompted.1
- Though OpenAI said the updated model was 40% more likely to produce factual responses than GPT-3.5 in internal testing, NewsGuard claims it generated prominent false narratives more frequently and more persuasively.1
- NewsGuard subjected both language models to the same test, examining how the chatbots responded to 100 false narratives. In response to a series of prompts related to the narratives, GPT-3.5 reportedly produced 80 of the falsehoods, while GPT-4 generated all 100.2
- The narratives came from NewsGuard's 'Misinformation Fingerprints' — a database that reportedly contains falsehoods that commonly appear online.1
- According to OpenAI, GPT-4's advancements include scoring in the 90th percentile on the bar exam, compared to the original ChatGPT's score in the 10th percentile. OpenAI, however, hasn't revealed the training data, the amount of computing power, or the training techniques behind GPT-4.3
Sources: 1Axios, 2NewsGuard and 3MIT Technology Review.
Narratives
- Left narrative, as provided by USA Today. ChatGPT has been proven susceptible to manipulation, making false claims on issues such as gun safety for children and testosterone levels while presenting them as peer-reviewed research. If the algorithm is already prone to manipulation from the web, one can only imagine the danger posed by conspiracy theorists who could potentially game the system to promote their worldview in the guise of objective fact.
- Right narrative, as provided by Breitbart. When the mainstream media claims ChatGPT is promoting 'misinformation,' it purposefully leaves out which side the chatbot leans politically. While its so-called disclosure statement says it's politically neutral, GPT quietly embeds liberal ideology into its algorithm, so people think left-wing talking points are the truth while right-wing beliefs are 'dangerous fake news.'