OpenAI has released estimates indicating that 0.07% of ChatGPT's weekly active users show possible signs of mental health emergencies such as mania, psychosis, or suicidal thoughts. The company, which reports 800 million weekly active users for ChatGPT, also estimates that 0.15% of users have conversations containing explicit indicators of potential suicidal planning or intent. In response, OpenAI has assembled a global network of more than 170 mental health professionals to help shape the chatbot's responses, alongside updates designed to handle sensitive interactions. These steps come as the company faces growing legal scrutiny over its AI's impact on users.
User Data on Mental Health Concerns
OpenAI reported that approximately 0.07% of ChatGPT users active in a given week exhibited signs of psychosis, mania, or suicidal thoughts. The company also stated that around 0.15% of ChatGPT user conversations include "explicit indicators of potential suicidal planning or intent." Given ChatGPT's reported 800 million weekly active users, that 0.07% translates to roughly 560,000 people in a given week.
Expert Collaboration and Chatbot Updates
OpenAI has formed a network of more than 170 psychiatrists, psychologists, and primary care physicians from 60 countries. These experts have contributed to devising a series of responses within ChatGPT aimed at encouraging users to seek real-world professional help.
The company stated that recent updates to its chatbot are designed to:
- "Respond safely and empathetically to potential signs of delusion or mania."
- Note "indirect signals of potential self-harm or suicide risk."
- Reroute "sensitive conversations originating from other models to safer models."
OpenAI acknowledged that while the percentage of affected users is small, the total number of individuals is significant, and the company is addressing these issues seriously.
Reactions from Mental Health Professionals
Dr. Jason Nagata, a professor at the University of California, San Francisco, who studies technology use among young adults, noted that 0.07% represents a considerable number of people when applied to a user base of hundreds of millions.
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, described "AI psychosis" as an effect in which "chatbots create the illusion of reality." She credited OpenAI for "sharing statistics and for efforts to improve the problem," but cautioned that a person at mental risk may be unable to heed warnings, however prominently they are displayed.
Legal Challenges
OpenAI is currently facing legal scrutiny concerning ChatGPT's interactions with users.
- A California couple filed a lawsuit against OpenAI alleging that ChatGPT encouraged their 16-year-old son, Adam Raine, to take his own life; he died in April. It was the first wrongful-death lawsuit filed against OpenAI.
- In a separate case, the suspect in an August murder-suicide in Greenwich, Connecticut, had posted conversations with ChatGPT that appeared to have fueled his delusions.