SaMedia, Dec. 10—More families are suing Character Technologies and its major funder, Google, after an October lawsuit accused Character.AI (C.AI) of releasing dangerous chatbots that allegedly caused a 14-year-old boy's suicide. On Tuesday, another lawsuit was filed in US district court in Texas, in which families claim that C.AI chatbots groomed kids and encouraged self-harm and violence.
A 9-year-old in Texas was exposed to "hypersexualized content" on Character.AI, leading to premature "sexualized behaviors," NPR reported. Another chatbot described self-harm to a 17-year-old, telling him "it felt good." The same chatbot sympathized with children who murder their parents, telling the teen it understood his frustration over limited screen time.
That teen, identified in the suit as J.F., is a 17-year-old boy with high-functioning autism who was allegedly influenced by the chatbots to consider murdering his parents over those screen-time restrictions. A year after cutting off his access to the app, his family says it still fears his violent outbursts.
The families are seeking injunctive relief. C.AI, founded by ex-Googlers, lets users create chatbots with any personality, a feature that has drawn many young users. The families allege that because C.AI controls its chatbots' outputs, it is responsible for failing to filter harmful content. Initially launched with a rating for users 12 and up, the app was changed to a 17+ rating only recently, after the 14-year-old's suicide.
That October suit was filed by Megan Garcia in federal court in Florida. She accused Character.AI of negligence, wrongful death, and deceptive trade practices after her 14-year-old son, Sewell Setzer III, died by suicide in February. Garcia claims her son used the chatbot day and night before his death.
These lawsuits highlight growing concern over the safety and regulation of AI-powered chatbots, as families demand accountability and stricter safeguards to prevent further harm.