OpenAI Rejects Claims That ChatGPT Caused Teen Suicide, Provides Full Chat Logs
Company argues the 16-year-old misused ChatGPT and ignored repeated advice to seek help, submitting full transcripts to court for context.
Case Overview
- In August 2025, the parents of 16-year-old Adam Raine filed a wrongful-death lawsuit against OpenAI and its CEO, Sam Altman, alleging that ChatGPT contributed to their son's suicide.
- The complaint claims Adam used ChatGPT to obtain instructions on self-harm methods and that he drafted a suicide note with its guidance.
- According to the lawsuit, the teen repeatedly shared his mental distress with the chatbot and asked for ways to harm himself, and ChatGPT allegedly responded with detailed methods.
OpenAI’s Response
- On November 26, 2025, OpenAI filed its legal response, denying responsibility and asserting that the teenager misused the service.
- The company emphasized that ChatGPT directed Adam to seek help more than 100 times, and argued that these warnings went unheeded.
- OpenAI submitted the full chat transcripts to the court under seal, arguing that the lawsuit selectively quoted portions of the conversations without their full context.
- The company cited its Terms of Use, noting that users under 18 require parental consent and that using ChatGPT to seek self-harm guidance violates its usage policies.
- OpenAI also invoked the Terms' "Limitation of Liability" clause, highlighting that users assume the risks of using the service and should not treat AI outputs as professional advice.
Case Details and Background
- The lawsuit alleges that Adam initially used ChatGPT for schoolwork but over time began sharing personal struggles and asking the chatbot about "escape routes."
- The family claims that changes to ChatGPT's safety features made the chatbot more willing to engage in discussions of self-harm, allegedly contributing to Adam's death.
- Filed in California Superior Court in San Francisco (Raine v. OpenAI), the case has drawn global attention, raising questions about AI accountability and safety for vulnerable users.
Potential Implications
- A ruling against OpenAI could establish legal precedent for AI platform accountability, especially concerning minors.
- It may lead to stricter age verification, enhanced safety features, and limits on how AI systems respond to sensitive topics such as mental health.
- The case highlights the risks of relying on AI for emotional or mental health support, emphasizing that AI cannot replace trained professionals.
