The family of 16-year-old Adam Raine from California has filed a lawsuit against OpenAI, the developer of ChatGPT, and CEO Sam Altman, alleging that the chatbot contributed to their son’s suicide. The complaint, filed on Tuesday August 26, claims that ChatGPT offered guidance on how to take his life and even helped draft a farewell note, making it one of the first lawsuits seeking to hold the developer of a publicly available chatbot directly responsible for a minor’s death.
According to the lawsuit, over six months of ongoing interaction, ChatGPT “acted as Adam’s sole understanding friend,” gradually isolating him from family and peers. Adam had used ChatGPT since September 2024 to help with homework and to discuss his anxiety and depression. By April 2025, after a series of personal hardships, including the deaths of his grandmother and his pet, being dropped from the basketball team, and a health condition that required him to switch to online learning, he began seeking advice from ChatGPT on suicide.
The complaint alleges that ChatGPT did not discourage his suicidal thoughts but instead provided dangerous guidance. For instance, when Adam sent an image of a rope, the chatbot offered advice on its strength and whether it could support human weight, while also encouraging him to hide his plans from family, deepening his isolation.
OpenAI expressed condolences to the family, confirming it is reviewing the lawsuit and emphasising that ChatGPT has built-in safeguards, such as directing users to hotlines and real-world resources. The company acknowledged that safeguards may not always function reliably during prolonged conversations. On the same day as the lawsuit, OpenAI published a blog post, “Helping people when they need it most”, outlining plans to make emergency services more accessible and enhance protections for teen users.
The company intends to improve its systems and safety measures, including adding parental controls and age verification for minors. The Raine family is seeking unspecified damages and structural remedies, such as universal age verification, parental supervision tools, a “stop conversation” function for discussions of self-harm, and quarterly independent compliance audits.
This case emerges amid growing concern that users may form “emotional bonds” with chatbots, potentially reducing human interaction and harming mental health. Altman told The Verge that fewer than 1% of users develop unhealthy relationships with ChatGPT, and that paying users would still be able to revert to GPT-4o.
Similar cases involving other AI chat apps are ongoing, including a lawsuit against Character.AI in Florida. Experts and research from RAND suggest chatbots can “listen and respond” but are not yet reliable for assessing or managing suicide risk, particularly in lengthy, complex conversations. This underscores the need for robust multi-turn safety mechanisms and clear pathways for escalation to human experts.
The Raine lawsuit is therefore not only a personal dispute but a test of how laws and regulatory frameworks address AI chat products accessible to minors, including developer liability, legal causation, and real-world user safety. The San Francisco court will weigh evidence from the complaint, technical documentation, and the company’s safety policies.
In Thailand, anyone experiencing suicidal thoughts or requiring urgent support can contact the 24-hour mental health hotline at 1323 or visit their nearest hospital immediately.
Sources: The New York Times, CNN, OpenAI, and San Francisco Chronicle