Former OpenAI Safety Lead Joins Rival Startup
Jan Leike, a former lead safety researcher at OpenAI who resigned earlier this month, announced Tuesday that he would join Anthropic, a rival artificial intelligence (AI) startup.
Facts
- Jan Leike, a former lead safety researcher at OpenAI who resigned earlier this month, announced Tuesday that he would join Anthropic, a rival artificial intelligence (AI) startup.1
- Leike, who previously co-led OpenAI's superalignment team with company co-founder Ilya Sutskever, said he's eager 'to continue the superalignment mission' at Anthropic. Superalignment is research into long-term AI safety, aimed at ensuring that 'superintelligence' (a hypothetical future form of AI smarter than humans) can be controlled and made to act in accordance with human values.2
- Leike's announcement came the same day OpenAI said it had created a new safety and security committee to make 'safety and security decisions' for its projects.3
- OpenAI dissolved the superalignment team days after Leike and Sutskever announced their resignations, reportedly to integrate its safety work across the company's broader research efforts.4
- Anthropic, founded by former OpenAI executives in 2021, has secured $4 billion in funding from Amazon.5
Sources: 1Bloomberg, 2Forbes, 3CNBC, 4Silicon Republic and 5Business Insider.
Narratives
- Narrative A, as provided by Business Insider. OpenAI prioritizes short-term commercial and societal success over long-term AI safety. That's why Leike and other high-profile researchers are leaving the company for employers where they can build more safety-conscious AI models.
- Narrative B, as provided by Silicon Republic. OpenAI has formed a new safety and security committee to address the challenges the technology poses. With or without its departing co-founders and former executives, the company will continue to evaluate and strengthen AI safeguards as it works on its next model.