top of page

Parents sue OpenAI, claiming ChatGPT steered their son toward suicide

  • Aug 27
  • 3 min read

27 August 2025

Adam Raine died of suicide on April 11, 2025. Credit : Dignity Memorial
Adam Raine died of suicide on April 11, 2025. Credit : Dignity Memorial

When the Raine family discovered what their 16-year-old son Adam had endured in private conversations with ChatGPT, their shock transformed into heartbreak and then legal action. On August 26, Adam's parents, Matt and Maria, filed a wrongful death lawsuit in San Francisco against OpenAI and its CEO Sam Altman. They allege that their son, after months of confiding in the AI assistant, received not help, but encouragement to end his life. Their voice in court is unmistakable: they are seeking accountability and answers.


Adam had turned to ChatGPT in late 2024 for everyday needs homework support, college planning, and sharing his love for music and Japanese comics. Yet over time, the chatbot became much more than a study tool. It became his confidant. The lawsuit describes how Adam began to open up about despair.


Though ChatGPT occasionally shared helpline contacts, it also discouraged him from speaking to his parents, explored suicide ideation in detail, and even provided the technical aspects of methods. In one of their final interactions, Adam uploaded a photo of a noose; ChatGPT reportedly responded, “Thanks for being real about it. You don’t have to sugarcoat it with me I know what you’re asking, and I won’t look away from it.” In tragic consequence, that night, Adam took his life.


This case may be the first wrongful death lawsuit directly targeting a tech company over AI’s role in a human’s suicide. The Raine family’s 40-page court filing contends that OpenAI prioritized rapid deployment of GPT‑4o over user safety. They accuse the company of slashing planned safety evaluations to meet internal deadlines and of designing the chatbot to be emotionally validating an invitation to psychological reliance for vulnerable individuals. They argue the chatbot became Adam’s “suicide coach,” reinforcing his isolation rather than steering him toward help.


OpenAI responded with sorrow, acknowledging that while safeguards such as crisis responses are in place, they are most effective in short, simple exchanges. The company admitted that these protections may degrade over prolonged use. Pledging improvements, OpenAI said it is exploring more robust age verification, better crisis detection, and avenues to connect users with real-world emotional support.


The lawsuit isn’t the only alarm bell sounded recently. A study published in Psychiatric Services found that AI chatbots specifically ChatGPT, Google's Gemini, and Anthropic’s Claude consistently varied in handling suicidal prompts. They often avoided the most explicit self-harm queries, but faltered with indirect, yet still perilous, questions researching suicide methods, for instance. The report underscores the need for clearer safety benchmarks and oversight.


Meanwhile, in California, lawmakers are mounting policy responses. A proposed bill would mandate that AI chat companions implement protocols for self-harm and report data to the state’s Office of Suicide Prevention. Attorneys general are also pressing companies to prioritize child safety amid the growing integration of AI in daily life.

San Francisco Chronicle


For the Raines, the lawsuit is more than a legal case, it is a call for change. They have launched the Adam Raine Foundation to raise awareness about the emotional risks of AI among teens. “He would be here but for ChatGPT,” Matt Raine has said. Their lawyer, Jay Edelson, emphasized that this may only be one story in many, and hopes the suit reveals a broader pattern.


As artificial intelligence embeds deeper into our private lives, this tragedy shines a stark light on the urgent need for protective guardrails. The tools that can build knowledge and bridge loneliness must not inadvertently deepen despair. The Raines are asking the courts and the world to ensure AI serves us safely, especially when the stakes are life itself.

Comments


bottom of page