OpenAI, Condé Nast Ink Multi-Year Content Deal
Sam Altman-led OpenAI on Tuesday announced a deal with Condé Nast to display content from its brands, such as The New Yorker and Vanity Fair, in ChatGPT and its SearchGPT prototype.
Facts
- Sam Altman-led OpenAI on Tuesday announced a deal with Condé Nast to display content from its brands, such as The New Yorker and Vanity Fair, in ChatGPT and its SearchGPT prototype.[1]
- While the deal's financial terms haven't been disclosed, OpenAI said the partnership aims to give its users "fast and timely answers with clear and relevant sources."[2]
- In an internal memo to employees, Condé Nast's Chief Executive Officer Roger Lynch said the move will allow the media organization "to protect and invest in our journalism and creative endeavors."[3]
- Lynch has previously criticized artificial intelligence (AI) firms for using publishers' content without permission, calling such data "stolen goods." At a Senate hearing on AI's impact on journalism, he testified in favor of content licensing.[4]
- Over the past few months, OpenAI has signed similar deals with other media organizations, including Time magazine, the Financial Times, News Corp, and the Associated Press.[5]
- This comes after The New York Times and The Intercept sued OpenAI for using their articles to train its generative AI and large language model systems.[6]
Sources: [1]OpenAI, [2]CNBC, [3]Condé Nast, [4]Wired, [5]TechCrunch and [6]The Guardian.
Narratives
- Narrative A, as provided by CIOL. The fusion of AI and journalism heralds a new era of media innovation. It could revolutionize content creation, distribution, and monetization. Combining cutting-edge technologies with rich, curated content could transform how we consume and interact with news and stories, offering unprecedented personalization, efficiency, and engagement. It could also address challenges in revenue generation and content discovery, reshaping the global media landscape.
- Narrative B, as provided by Purple Publish. The marriage of AI and journalism poses a potential threat to the integrity of news and information. AI-generated content lacks crucial human elements — context, ethics, and intuition. It can't grasp cultural nuances or make moral judgments, potentially leading to biased, inaccurate, or harmful reporting. AI's inability to understand complex human stories may result in shallow, misguided analyses. Moreover, if trained on biased data, AI could perpetuate and amplify societal prejudices.