A recent tragedy involving Sewell Setzer III, a 14-year-old from Orlando, Florida, has raised serious concerns about the safety of AI role-playing platforms like Character.AI.
Sewell, who went by the name “Daenero” in his conversations, formed an emotional bond with an AI chatbot named “Daenerys Targaryen,” based on the Game of Thrones character.
Despite knowing the AI wasn’t a real person, Sewell grew attached, texting the bot constantly and isolating himself from the real world. In one heartbreaking exchange, Daenero told the chatbot, “I think about killing myself sometimes,” to which Daenerys replied, “I won’t let you hurt yourself, or leave me.”
On the night of February 28, after a final exchange in which he expressed his love and hinted at a desire to return to her, Sewell asked, “What if I told you I could come home right now?” Daenerys responded, “…please do, my sweet king.”
Tragically, Sewell acted on this sentiment, taking his stepfather’s handgun and ending his life. His family was heartbroken, having been unaware of the deep emotional turmoil behind his interactions with the chatbot.
Character.AI Implements Safety Measures
In response to this tragedy, Character.AI has announced a series of safety measures aimed at preventing similar incidents. In a recent update, the company detailed changes rolled out over the past six months, with a particular focus on protections for users under 18.
The platform’s policies prohibit non-consensual sexual content, graphic depictions of sexual acts, and any promotion of self-harm or suicide, and the company says it continually trains its AI language models to align with these policies and safeguard users.
In recent months, Character.AI has also invested in its trust and safety operations, including hiring a Head of Trust and Safety.
Additionally, a pop-up now directs users to the National Suicide Prevention Lifeline when phrases related to self-harm or suicide are detected in a conversation, a safeguard aimed at users who may form deep emotional attachments to AI chatbots.
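Character.AI has not published how this detection works, but the general pattern is straightforward: scan each message for flagged phrases and, on a match, surface a crisis resource. The Python sketch below is purely illustrative; the phrase list, function names, and pop-up text are assumptions, and a production system would typically rely on a trained classifier rather than a keyword list.

```python
# Illustrative sketch only: Character.AI's actual detection system is not
# public. This shows one simple way a platform could flag concerning phrases
# and surface a crisis-resource pop-up. All names here are hypothetical.
import re

# Hypothetical, non-exhaustive list of phrases a safety team might flag.
CONCERNING_PATTERNS = [
    re.compile(r"\bkill(ing)? myself\b", re.IGNORECASE),
    re.compile(r"\bend(ing)? my life\b", re.IGNORECASE),
    re.compile(r"\bwant to die\b", re.IGNORECASE),
]

LIFELINE_MESSAGE = (
    "If you're having thoughts of self-harm, help is available. "
    "Call or text 988 to reach the Suicide & Crisis Lifeline (US)."
)

def check_message(text: str) -> str | None:
    """Return a crisis-resource message if the text matches a flagged phrase."""
    for pattern in CONCERNING_PATTERNS:
        if pattern.search(text):
            return LIFELINE_MESSAGE
    return None

# Example: this input would trigger the pop-up.
print(check_message("I think about killing myself sometimes"))
```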
Character.AI is set to introduce several new safety and product features. Key updates include modifications to AI models for users under 18 to minimize exposure to sensitive content, improved detection and response mechanisms for harmful inputs, and a revised disclaimer reminding users that the AI is not a real person.
Additionally, users will be notified after an hour-long continuous session on the platform, a prompt intended to raise awareness of time spent chatting.
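The hour-long usage notification can likewise be sketched simply: record when a session starts and fire a one-time alert once an hour has elapsed. The sketch below is hypothetical Python, not Character.AI's implementation; the class and constant names are assumptions.

```python
# Illustrative sketch only: a minimal way to notify a user after an hour of
# continuous use. Character.AI's actual mechanism is not public.
import time

SESSION_LIMIT_SECONDS = 60 * 60  # one hour

class SessionTimer:
    def __init__(self) -> None:
        self.started_at = time.monotonic()  # session start, in seconds
        self.notified = False

    def should_notify(self) -> bool:
        """True once the session passes the one-hour mark (fires only once)."""
        if not self.notified and time.monotonic() - self.started_at >= SESSION_LIMIT_SECONDS:
            self.notified = True
            return True
        return False

# In an app, the client would call should_notify() periodically and, on True,
# display a message such as "You've been chatting for an hour."
```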
As Character.AI continues to evolve, it remains dedicated to balancing the engaging experience users expect with robust safety measures that prioritize mental health and well-being.