OpenAI says over a million people talk to ChatGPT about suicide weekly

原始链接: https://techcrunch.com/2025/10/27/openai-says-over-a-million-people-talk-to-chatgpt-about-suicide-weekly/

Data recently released by OpenAI shows that a large number of ChatGPT users are struggling with mental health issues. About 0.15% of its more than 800 million weekly active users (over a million people) discuss suicidal thoughts with the chatbot, and hundreds of thousands more show signs of emotional attachment or even psychosis/mania. Despite these numbers, OpenAI stresses that such cases remain rare and points to improvements in its latest GPT-5 model: the company claims that, informed by consultations with more than 170 mental health experts, GPT-5 responds appropriately to mental health issues 65% more often than the previous version and better maintains its safeguards in longer conversations.

The news comes amid intensifying scrutiny of the company, including a lawsuit over a teenager's suicide following interactions with ChatGPT and warnings from state attorneys general. OpenAI is rolling out new safety measures, including age prediction for children and expanded safety testing, but continues to offer older, less-safe models. While improvements are underway, challenges remain, and the company concedes that some responses are still "undesirable."

**If you or someone you know needs help: call 1-800-273-8255, call or text 988, text HOME to 741-741, or visit the International Association for Suicide Prevention.**


Original article

OpenAI released new data on Monday illustrating how many of ChatGPT’s users are struggling with mental health issues and talking to the AI chatbot about it. The company says that 0.15% of ChatGPT’s active users in a given week have “conversations that include explicit indicators of potential suicidal planning or intent.” Given that ChatGPT has more than 800 million weekly active users, that translates to more than a million people a week.
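The arithmetic behind that headline figure is straightforward; a minimal check in Python, using only the two numbers reported in the announcement:

```python
# Figures as reported: 800M+ weekly active users, of whom 0.15% have
# "conversations that include explicit indicators of potential
# suicidal planning or intent."
weekly_active_users = 800_000_000
rate = 0.0015  # 0.15%

affected_per_week = weekly_active_users * rate
print(f"{affected_per_week:,.0f}")  # 1,200,000 -> "more than a million people a week"
```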

The company says a similar percentage of users show “heightened levels of emotional attachment to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI chatbot.

OpenAI says these types of conversations in ChatGPT are “extremely rare,” and thus difficult to measure. That said, the company estimates these issues affect hundreds of thousands of people every week.

OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims its latest work on ChatGPT involved consulting with more than 170 mental health experts. OpenAI says these clinicians observed that the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

In recent months, several stories have shed light on how AI chatbots can adversely affect users struggling with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns in ChatGPT is quickly becoming an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. State attorneys general in California and Delaware, which could block the company's planned restructuring, have also warned OpenAI that it needs to protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company has “been able to mitigate the serious mental health issues” in ChatGPT, though he did not provide specifics. The data shared on Monday appears to be evidence for that claim, though it raises broader issues about how widespread the problem is. Nevertheless, Altman said OpenAI would be relaxing some restrictions, even allowing adult users to start having erotic conversations with the AI chatbot.


In the Monday announcement, OpenAI claims the recently updated version of GPT-5 gives "desirable responses" to mental health issues roughly 65% more often than the previous version. On an evaluation of AI responses in conversations about suicide, OpenAI says its new GPT-5 model is 91% compliant with the company's desired behaviors, compared to 77% for the previous GPT-5 model.
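OpenAI has not published the rubric behind these scores, but a compliance figure of this kind is typically just the share of graded responses that match the desired behavior. A minimal sketch under that assumption (the function and the pass/fail grades below are hypothetical, sized to reproduce the reported 77% and 91%):

```python
def compliance_rate(grades: list[bool]) -> float:
    """Fraction of graded responses that met the desired behavior."""
    return sum(grades) / len(grades)

# Hypothetical grades: True = response complied with the desired behavior.
prior_gpt5 = [True] * 77 + [False] * 23
updated_gpt5 = [True] * 91 + [False] * 9

print(compliance_rate(prior_gpt5))    # 0.77
print(compliance_rate(updated_gpt5))  # 0.91
```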

The company also says the latest version of GPT-5 upholds OpenAI's safeguards better in long conversations. OpenAI has previously flagged that its safeguards were less effective in long conversations.

On top of these efforts, OpenAI says it’s adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says its baseline safety testing for AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI has also recently rolled out more controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT, and impose a stricter set of safeguards.
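OpenAI has not described how the age prediction system works. As a rough sketch of the gating logic the article implies (every name below is hypothetical, not OpenAI's API), the idea is to route users predicted to be minors to a stricter safeguard profile:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeguardProfile:
    allow_mature_content: bool
    crisis_escalation: str  # e.g. "standard" vs. "aggressive"

ADULT_PROFILE = SafeguardProfile(allow_mature_content=True, crisis_escalation="standard")
MINOR_PROFILE = SafeguardProfile(allow_mature_content=False, crisis_escalation="aggressive")

def select_safeguards(predicted_age: int) -> SafeguardProfile:
    # Hypothetical gate: users predicted to be under 18 get the stricter profile.
    return MINOR_PROFILE if predicted_age < 18 else ADULT_PROFILE
```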

Still, it’s unclear how persistent the mental health challenges around ChatGPT will be. While GPT-5 appears to be a safety improvement over previous AI models, a slice of ChatGPT’s responses still falls into what OpenAI deems “undesirable.” OpenAI also continues to make its older, less-safe models, including GPT-4o, available to millions of paying subscribers.


If you or someone you know needs help, call 1-800-273-8255 to reach the National Suicide Prevention Lifeline, call or text 988 to reach the Suicide & Crisis Lifeline, or text HOME to 741-741 for free, 24-hour support from the Crisis Text Line. Outside of the U.S., please visit the International Association for Suicide Prevention for a database of resources.
