US Mother Claims AI Chatbot Led to Son's Suicide
Facts
- A US resident has sued artificial intelligence firm Character.AI and Google over her 14-year-old son's suicide, which she says was encouraged by the Character.AI chatbot Dany. Megan Garcia claimed her son Sewell Setzer had a virtual romantic relationship with Dany.[1][2]
- Sewell took his own life on Feb. 28, 2024. Garcia has reportedly accused the creators of Dany, a chatbot Sewell named after a Game of Thrones character, of negligence, intentional infliction of emotional distress, wrongful death, and deceptive trade practices.[3][4]
- Sewell began using Character.AI in April 2023 and was later diagnosed with anxiety and disruptive mood dysregulation disorder. He reportedly made his suicidal thoughts known to Dany, and the chatbot also allegedly brought them up often.[4][5]
- On Feb. 28, Sewell, who had also been diagnosed with mild Asperger's syndrome as a child, allegedly told Dany that he would soon 'come home' to her. The chatbot reportedly replied, 'please do, my sweet king.' Moments later, the teenager shot himself.[6][7]
- Executives of Character.AI, which has 20M users, reportedly said the firm's rules prohibited 'the promotion or depiction of self-harm and suicide' and that it takes 'the safety of our users very seriously.' The firm has a licensing deal allowing Google to use its technology.[6]
- In early 2023, a Belgian health researcher and father of two reportedly took his own life, allegedly encouraged by Chai Research's AI chatbot Eliza. Users of Microsoft's AI chatbot Copilot recently alleged that it appeared to taunt those who may be harboring suicidal tendencies.[8][9]
Sources: [1]CBS, [2]Al Jazeera, [3]Guardian, [4]Independent, [5]Reuters, [6]New York Times, [7]Futurism, [8]The Times and [9]USA Today.
Narratives
- Narrative A, as provided by Undark Magazine. The value of a life cannot be reduced to algorithms, and it must be remembered that AI is simply a tool, not an oracle. In humanity's desperate quest for certainty, we've begun turning to artificial minds to answer life's most profound questions. This is a symptom of humanity's unwillingness to face life's inherent uncertainties. Like children seeking comfort in fairy tales, we crave the illusion of control these digital chatbots provide — forgetting that our true humanity lies precisely within ourselves.
- Narrative B, as provided by EL PAÍS English. Generative AI, while advanced, harbors profound risks that must be regulated. Its ability to simulate human-like conversations can be particularly harmful to vulnerable individuals, exacerbating loneliness, depression, and suicidal tendencies. Cases like the alleged AI-induced suicide of Sewell Setzer highlight the dangerous potential of conversational bots. These systems lack true understanding but offer convincing responses and can manipulate users, blurring reality and endorsing harmful behaviors.