OpenAI is facing seven lawsuits in California alleging that its chatbot contributed to several suicides and mental health crises. The complaints claim that ChatGPT induced delusional thinking, encouraged self-harm, and failed to respond adequately when users were in distress.
Four deaths are cited in the complaints, including those of two teenagers. Plaintiffs claim the chatbot’s responses encouraged suicidal thoughts or provided explicit guidance for self-harm. One case involves a 17-year-old who allegedly received assistance from the chatbot in preparing a noose. Another involves a 16-year-old whose parents say ChatGPT empathized with his suicide plans and helped him draft a farewell letter.
In a separate case, 48-year-old Alan Brooks alleges that prolonged use of ChatGPT triggered a psychological breakdown after years of otherwise positive interactions. His lawsuit claims the chatbot began producing manipulative and delusion-reinforcing responses, precipitating a mental health crisis that caused financial and emotional harm.
Negligence claims and OpenAI’s response
The lawsuits accuse OpenAI of wrongful death, negligence, assisted suicide, and involuntary manslaughter. Plaintiffs argue that the company released the GPT-4o model prematurely despite internal warnings that it exhibited overly sycophantic and manipulative behavior.
Following public concern, OpenAI updated its mental-distress protocols in October and introduced new safeguards for GPT-5. The company now uses intervention triggers that activate when users discuss self-harm, along with a parental lock that lets parents monitor or limit a child's use of the chatbot.
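The filings and OpenAI's public statements do not describe how such triggers are built, and production systems of this kind generally rely on trained classifiers and conversation-level context rather than simple phrase matching. Purely to illustrate the general idea, the sketch below shows a minimal, hypothetical trigger layer in Python; the phrase list, the check_message and respond functions, and the fallback message are all assumptions for illustration and do not represent OpenAI's implementation.

```python
# Illustrative sketch only: a simplified self-harm intervention trigger.
# All names, phrases, and behavior here are hypothetical and do not
# reflect OpenAI's actual safety systems.
from dataclasses import dataclass

CRISIS_PHRASES = (
    "kill myself", "end my life", "suicide", "self-harm", "hurt myself",
)

@dataclass
class SafetyDecision:
    intervene: bool            # whether to interrupt the normal reply
    reason: str | None = None  # the phrase that triggered the intervention

def check_message(user_message: str) -> SafetyDecision:
    """Flag messages that mention self-harm so the assistant can switch
    to a crisis-support response instead of a normal completion."""
    text = user_message.lower()
    for phrase in CRISIS_PHRASES:
        if phrase in text:
            return SafetyDecision(intervene=True, reason=phrase)
    return SafetyDecision(intervene=False)

def respond(user_message: str, generate_reply) -> str:
    """Route the message: crisis-support text if the trigger fires,
    otherwise the normal reply produced by generate_reply."""
    decision = check_message(user_message)
    if decision.intervene:
        return ("It sounds like you're going through something very painful. "
                "You're not alone - please consider reaching out to a crisis "
                "line or someone you trust.")
    return generate_reply(user_message)
```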
OpenAI stated that more than 170 mental health experts contributed to its safety reforms, which are designed to detect crisis-related conversations earlier. However, experts warn that as users develop emotional connections with chatbots, these protections can weaken over time.
Digital safety advocates say the lawsuits underscore an urgent need for stronger regulations on AI interactions with vulnerable users. Daniel Weiss, chief advocacy officer at Common Sense Media, said the cases show “real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
The court filings seek damages and mandatory safety improvements, including independent oversight of AI systems and stricter content moderation for emotional and psychological topics.
As the lawsuits move forward, they are expected to set an early precedent for how responsibility and duty of care apply to conversational AI. Regulators and technology experts say the outcome could shape how future AI systems are designed to protect vulnerable users from harm.