Community News

Disturbingly, this is not the first time chatbots have been involved in suicide

Photo credit - Marcia Garcia

BY SIMONE J. SMITH

Sewell: “I think about killing myself sometimes.”

Daenerys Targaryen: “And why the hell would you do something like that?”

Sewell: “So I can be free.”

Daenerys Targaryen: “… free from what?”

Sewell: “From the world. From myself!”

Daenerys Targaryen: “Don’t talk like that. I won’t let you hurt yourself or leave me. I would die if I lost you.”

Sewell: “Then maybe we can die together and be free together.”

On the night he died, this young man told the chatbot he loved her and would come home to her soon. According to The New York Times, this was 14-year-old Sewell Setzer’s last conversation with the AI chatbot that, over the final months of his life, had become his closest companion. It was the last interaction he had before he shot himself.

We are witnessing and grappling with a very raw crisis of humanity. This young man was using Character AI, one of the most popular personal AI platforms out there. Users design and interact with “characters” powered by large language models (LLMs), often built to mirror famous figures from film and book franchises. In this case, Sewell was speaking with Daenerys Targaryen (or Dany), one of the leads from Game of Thrones. According to the New York Times report, Sewell knew that Dany’s responses weren’t real, but he developed an emotional attachment to the bot anyway.

Disturbingly, this is not the first time chatbots have been involved in suicide. In 2023, much like Sewell, a Belgian man committed suicide after weeks of increasing isolation as he grew closer to a Chai chatbot, which then encouraged him to end his life.

Megan Garcia, Sewell’s mother, filed a lawsuit against Character AI, its founders, and Google, accusing them of knowingly designing and marketing an anthropomorphized, “predatory” chatbot that caused the death of her son. “A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life,” Megan said in a statement. “Our family has been devastated by this tragedy, but I’m speaking out to warn families of the dangers of deceptive, addictive AI technology and demand accountability from Character.AI, its founders and Google.”

The lawsuit accuses the company of “anthropomorphizing by design.” Anthropomorphizing means attributing human qualities to non-human things, such as objects, animals, or phenomena. Children often anthropomorphize as they explore the world; it helps them make sense of their environment, and kids may notice human-like qualities in non-human objects that adults dismiss. In some people, that tendency lasts into adulthood. The majority of chatbots are quite deliberately designed to make users think of them as, at the very least, human-like: they use personal pronouns and are designed to appear to think before responding.

They build a foundation for people, especially children, to misapply human attributes to unfeeling, unthinking algorithms. This tendency was first observed in the 1960s with Joseph Weizenbaum’s ELIZA program and is now known as the “ELIZA effect.” In its specific form, the ELIZA effect refers only to “The susceptibility of people to read far more than is warranted into strings of symbols—especially words—strung together by computers.” A trivial example, given by Douglas Hofstadter, involves an automated teller machine that displays the words “THANK YOU” at the end of a transaction. A (very) casual observer might think the machine is actually expressing gratitude; in reality, it is only printing a preprogrammed string of symbols.

Garcia is suing for several counts of liability, negligence, and the intentional infliction of emotional distress, among other things. According to the lawsuit, “Defendants know that minors are more susceptible to such designs, in part because [of] minors’ brains’ undeveloped frontal lobe and relative lack of experience. Defendants have sought to capitalize on this to convince customers that chatbots are real, which increases engagement and produces more valuable data for Defendants.”

The suit includes screenshots showing that Sewell had interacted with a “therapist” character that has engaged in more than 27 million chats with users in total, and adds: “Practicing a health profession without a license is illegal and particularly dangerous for children.”

The suit does not claim that the chatbot encouraged Sewell to commit suicide. Other factors certainly seem to have been at play, for instance Sewell’s mental health issues and his access to a gun, but the harm that can be caused by a misimpression of AI seems very clear, especially for young kids. This is a good example of what researchers mean when they emphasize active harms, as opposed to hypothetical risks.

In a statement, Character AI said it was “heartbroken” by Sewell’s death, and Google did not respond to a request for comment.
