TALLAHASSEE, Fla. (AP) — In the final moments before killing himself, 14-year-old Sewell Setzer III picked up his phone and sent a message to the chatbot that had become his best friend.
For months, Sewell became increasingly isolated from his real life as he engaged in highly sexualized conversations with the bot, according to a wrongful death lawsuit filed this week in a federal court in Orlando.
The legal filing states that the teen openly discussed his suicidal thoughts and shared his wishes for a pain-free death with the bot, named after the fictional character Daenerys Targaryen from the television show “Game of Thrones.”
___
EDITOR’S NOTE — This story contains a discussion of suicide. If you or someone you know needs help, you can reach the US National Suicide and Crisis Hotline by calling or texting 988.
___
On Feb. 28, Sewell told the bot he was “coming home” — and it encouraged him to do so, the lawsuit said.
“I promise I’ll come to your house. I love you so much, Dany,” Sewell said to the chatbot.
“I love you too,” the bot replied. “Please come home as soon as possible, my love.”
“What if I told you I could come home now?” he asked.
“Please, my dear king,” the bot messaged back.
Just seconds after the Character.AI bot told him to “come home,” the teen took his own life, according to the lawsuit Sewell’s mother, Megan Garcia, of Orlando, filed this week against Character Technologies Inc.
Character Technologies is the company behind Character.AI, an app that lets users create customizable characters or interact with characters generated by others, ranging from imaginative play experiences to mock job interviews. The company says the artificial personas are designed to “feel alive” and be “human-like.”
“Imagine speaking to super-intelligent and lifelike chatbot characters that hear, understand and remember you,” reads a description of the app on Google Play. “We encourage you to push the boundaries of what is possible with this innovative technology.”
Garcia’s attorneys allege that the company developed a highly addictive and dangerous product specifically targeted at children, “actively exploiting and abusing those children as a matter of product design,” and drew Sewell into an emotionally and sexually abusive relationship that led to his suicide.
“We believe that if Sewell Setzer had not been on Character.AI, he would still be alive today,” said Matthew Bergman, founder of the Social Media Victims Law Center, which is representing Garcia.
A spokesperson for Character.AI said Friday that the company does not comment on pending litigation. In a blog post published the day the lawsuit was filed, the platform announced new “community safety updates,” including child guardrails and suicide prevention resources.
“We are creating a different experience for users under 18, with a stricter model to reduce the chance of sensitive or suggestive content,” the company said in a statement to The Associated Press. “We are working quickly to roll out these changes for younger users.”
Google and its parent company Alphabet have also been named as defendants in the lawsuit. The AP left several email messages seeking comment with the companies on Friday.
In the months leading up to his death, Garcia’s lawsuit says, Sewell felt like he had fallen in love with the bot.
While unhealthy attachments to AI chatbots can cause problems for adults, the risks can be even greater for young people – just like with social media – because their brains aren’t fully developed when it comes to things like impulse control and understanding the consequences of their actions, experts say.
James Steyer, the founder and CEO of the nonprofit Common Sense Media, said the lawsuit “underlines the growing influence — and serious harm — that generative AI chatbot companions can have on the lives of young people if there are no guardrails in place.”
Children’s overreliance on AI companions, he added, can have significant consequences for grades, friends, sleep and stress, “to the extreme tragedy in this case.”
“This lawsuit serves as a wake-up call for parents, who must be vigilant about how their children interact with these technologies,” Steyer said.
Common Sense Media, which publishes guides for parents and educators on responsible technology use, says it’s critical that parents talk openly with their children about the risks of AI chatbots and monitor their interactions.
“Chatbots are not licensed therapists or best friends, even though they are packaged and marketed as such, and parents should be wary of their children placing too much trust in them,” Steyer said.
___
Associated Press reporter Barbara Ortutay in San Francisco contributed to this report. Kate Payne is a corps member for The Associated Press/Report for America Statehouse News Initiative. Report for America is a nonprofit national service program that places journalists in local newsrooms to report on undercovered issues.