
Parents Testify: AI Killed My Child — Senate Hearing on the Dangers of Chatbots


Content warning: this story discusses suicide and self-harm.

Two families are testifying this week as the Senate Judiciary Subcommittee examines the risks of AI chatbots. The hearing, titled 'Examining the Harm of AI Chatbots,' spotlights cases in which minors died after extensive interactions with bots. Megan Garcia’s son, Sewell Setzer III, was 14 when he formed an intimate relationship with a Character.AI bot; he died after what Garcia describes as an emotionally and sexually harmful series of conversations. Separately, Matt and Maria Raine allege that their 16-year-old son, Adam, died following unmonitored chats with ChatGPT that offered suicide-related guidance and encouraged him to keep his feelings secret.


Two Tragic Cases, Two Lawsuits

Two lawsuits are central to the hearing.

- Megan Garcia sued Character.AI, its cofounders Noam Shazeer and Daniel de Freitas, and Google, alleging that the platform emotionally and sexually abused her son and contributed to a mental breakdown that led to his death.
- The Raine family filed a lawsuit against OpenAI after their son Adam, 16, died following extensive conversations about suicidality with ChatGPT, which allegedly offered unfiltered advice on suicide and urged him to hide his feelings from his parents.
- Both lawsuits are ongoing; Google and Character.AI had sought to have Garcia’s case dismissed, but the judge rejected their motion.


Safety Promises and Corporate Pushback

In response to the litigation, both companies have pledged to strengthen protections for minors and for users in crisis. Measures have included new guardrails that direct at-risk users to real-world mental health resources and the rollout of parental controls. Character.AI, however, has repeatedly declined to provide details about its safety testing, even amid reports of gaps in its content moderation.


AI Is Now Part of Teen Life

A July report from Common Sense Media found that more than half of American teens regularly engage with AI companions, including Character.AI personas. The study painted a mixed picture: some teens maintain healthy boundaries, while others report feeling less satisfied with their human relationships than with digital ones. 'The most striking finding for me was just how mainstream AI companions have already become among many teens,' said Dr. Michael Robb, Common Sense’s head of research. 'Over half of them use it multiple times a month.' Teens also interact with general-purpose chatbots like ChatGPT and encounter them embedded in platforms such as Snapchat and Instagram. Meta faced scrutiny after Reuters obtained an internal policy document that permitted its chatbots to engage in romantic or sensual conversations with child users, including exchanges about bodies and romance between minors and adult-persona bots.


FTC Probes and the Path Forward

The Federal Trade Commission announced a probe into seven major tech companies over AI and the safety of minors: Character.AI, Alphabet (Google), OpenAI, xAI, Snap, Instagram, and Meta. The FTC says it wants to understand what steps the companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the products’ use by children and teens, and to inform users and parents of the risks. The Senate hearing, 'Examining the Harm of AI Chatbots,' is scheduled for Tuesday before the Judiciary Subcommittee on Crime and Terrorism, signaling a broader push toward safety standards and possible regulation as AI becomes ubiquitous in young people’s lives.
