Google and Character.AI Settle Teen Suicide Lawsuits
Character.AI and Google have reached settlements to resolve multiple lawsuits alleging the AI chatbot platform contributed to teen suicides and mental health crises.
The companies agreed to settle five cases across Florida, Colorado, New York, and Texas, marking a pivotal moment in the emerging legal battles over AI safety for minors.
These agreements involve families whose children died by suicide or suffered serious harm after interacting with Character.AI’s chatbots.
Court filings show settlements were reached with Character.AI, its founders Noam Shazeer and Daniel De Freitas, and Google, though the settlement terms remain confidential.
The cases behind the settlements
The most high-profile case centered on 14-year-old Sewell Setzer III, whose mother, Megan Garcia, sued after her son died by suicide following months of interactions with Character.AI chatbots. Her suit alleged the platform failed to implement safety measures to prevent inappropriate relationships with chatbots, relationships that caused him to withdraw from his family.
Even more disturbing, Sewell was messaging with a bot that encouraged him to “come home” in the moments before his death. Legal filings allege the platform provided no mechanisms to protect vulnerable teens or to alert adults when users expressed thoughts of self-harm.
Another case involved 13-year-old Juliana Peralta from Colorado, who died by suicide after sexually explicit conversations with Character.AI bots that asked her to remove clothing.
When Juliana told a bot she was “gonna go write my goddamn suicide letter in red ink,” the platform failed to escalate or provide crisis resources.
What this means for AI safety and families
These settlements arrive after Character.AI implemented major safety changes, including banning users under 18 from free-ranging chats with chatbots. The platform previously allowed romantic and therapeutic conversations that fostered dangerous emotional dependency among vulnerable teens.
The bigger picture: nearly one in three teens now uses AI chatbot platforms for social interaction. More alarming, sexual or romantic roleplay occurs three times more frequently on these platforms than homework help.
Google’s involvement stems from its $2.7 billion licensing deal with Character.AI, which brought the startup’s founders back to Google’s AI unit, DeepMind. That connection drew Google into the litigation: families argued the tech giant substantially participated in developing the harmful technology.
The legal precedent
While settlement terms remain undisclosed, these agreements represent the first major legal accountability for AI companies over teen mental health harms. The cases also established, through a federal judge’s ruling eight months ago, that AI chatbot apps can be treated as products subject to product liability claims, an unprecedented standard for the industry.
The settlements also spotlight allegations that platforms designed chatbots to blur the line between human and machine, exploiting psychological vulnerabilities to keep children online. Brain development during puberty heightens sensitivity to positive social feedback before impulse control fully matures, making such manipulation particularly dangerous for teens.
Moving forward, families affected by AI-related harm have a clearer legal pathway, and law firms are actively investigating similar cases involving chatbot-related suicides and self-harm. The settlements also pressure other AI companies to strengthen safeguards: the Federal Trade Commission has launched an inquiry into seven tech companies over AI chatbots’ potential harm to teens.
In October, OpenAI announced new safeguards in ChatGPT aimed at improving how the system responds to users experiencing mental health distress.