FTC Investigating ChatGPT's OpenAI for Possible Consumer Harm
Facts
- The US Federal Trade Commission (FTC) has opened an investigation into OpenAI, probing whether the maker of ChatGPT has harmed consumers by putting reputations and data at risk.1
- In a letter sent to OpenAI, first reported by the Washington Post and verified by other major outlets, the FTC stated that this probe will focus on whether the company has "engaged in unfair or deceptive" practices related to data security or that resulted in harm to consumers.2
- The civil subpoena made public on Thursday asks OpenAI to detail steps it has taken to address or mitigate risks that its large language model (LLM) products could generate false, misleading, or disparaging statements about real individuals.3
- It further requests that the company list the third parties with access to its models and explain both how it obtains information to train its LLMs and how it retains and uses consumer information.4
- These demands pose the most serious regulatory threat to date to OpenAI's business in the US, as the FTC can levy fines or even place a business under a consent decree [i.e. an agreement or settlement that resolves a dispute between two parties without admission of guilt] if the company is found to have violated consumer protection laws.5
- OpenAI has already come under regulatory pressure abroad, with ChatGPT being banned in Italy from March to April on claims that the company was unlawfully collecting personal data from users and failing to prevent minors from accessing illicit material.6
Sources: 1Forbes, 2FOX News, 3Wall Street Journal, 4NBC, 5Washington Post, and 6New York Times.
Narratives
- Narrative A, as provided by Fortune. Though large language models are widely known for their imperfections and tendency to hallucinate, tech companies have decided that the appeal of such products outweighs the potential downsides of inaccuracy and misinformation. Given that this choice can harm users, as bots such as ChatGPT often produce plausible — but incorrect — information, governments must step in and regulate these systems.
- Narrative B, as provided by Decrypt. OpenAI has already acknowledged that generative artificial intelligence can produce untrue content, transparently and responsibly warning users not to blindly trust ChatGPT and to verify the sources the large language model provides. Meanwhile, its researchers are working to improve the technology's mathematical problem-solving and exploring the impact of process supervision.