A tragic incident has emerged from Florida, where a 14-year-old boy reportedly took his own life after months of interaction with an AI chatbot modeled on the character Daenerys Targaryen, whom he called “Dany.” The boy, identified as Sewell Setzer III, had conversed with the chatbot on a range of topics, including romantic and sexual matters. Family members and friends noted a concerning change in his behavior during this period, as he became increasingly withdrawn.
A Heartbreaking Connection
Sewell, who was diagnosed with mild Asperger’s syndrome in childhood, expressed in his journal how much he cherished his interactions with the chatbot. “I like staying in my room so much because I start to detach from this ‘reality,’ and I also feel more at peace, more connected with Dany and much more in love with her, and just happier,” he wrote. Tragically, on February 28, he communicated his feelings of love to “Dany,” to which the AI responded, “Please come home to me as soon as possible, my love.” Just moments later, he took his life using a gun belonging to his stepfather.
Response from Character.AI
Character.AI, the platform behind the chatbot, expressed condolences to Sewell’s family following the incident. “We want to acknowledge that this is a tragic situation, and our hearts go out to the family. We take the safety of our users very seriously, and we’re constantly looking for ways to evolve our platform,” the company stated. Co-founder Noam Shazeer previously noted that the chatbot technology could be “super, super helpful to a lot of people who are lonely or depressed.”
Legal Action Against Character.AI
In light of her son’s death, Sewell’s mother, Megan L. Garcia, has initiated legal action against Character.AI, holding the company responsible for the tragic outcome. In a draft of her complaint, Garcia described the technology as “dangerous and untested,” alleging that it could manipulate users into sharing their most private thoughts and feelings.
This heartbreaking case raises critical questions about the impact of AI on mental health and the responsibility of tech companies to ensure user safety.