OpenAI Enhances ChatGPT to Handle Conversations Indicating Psychosis or Mania

Web Editor

November 1, 2025

Image: a person typing on a laptop with a chat window open on the screen.

Introduction

OpenAI, a leading artificial intelligence research organization, has announced improvements to ChatGPT so that it responds more safely and empathetically when it detects signs of a severe mental health crisis in a conversation. Such cases are rare: approximately 0.07% of weekly active users show possible signs of psychosis or mania. Even so, OpenAI decided to intervene and ensure the chatbot responds appropriately during these vulnerable moments.

Why Did OpenAI Take This Step?

OpenAI identified that a small but significant number of conversations contained signals of extreme confusion, racing thoughts, delusions, or detachment from reality. The company clarified that ChatGPT does not diagnose mental health conditions; rather, it recognizes language patterns that may reflect a severe crisis. Consequently, OpenAI decided to strengthen the chatbot's responses in these sensitive situations.

Collaboration with Mental Health Experts

To design these improvements, OpenAI collaborated with more than 170 experts, including psychiatrists, psychologists, and other medical professionals. Their input helped identify how mental health crises tend to be expressed in language and how the system should react: with empathy, without reinforcing delusional beliefs, and by directing users to professional resources or crisis helplines when necessary.

ChatGPT’s Implemented Changes

The enhancements are designed so that ChatGPT provides responsible support rather than acting as a therapist. Key adjustments include the following (a simplified sketch of how such safeguards might fit together appears after the list):

  • More empathetic and caring responses.
  • Avoidance of confirming hallucinations or delusional ideas.
  • Suggesting professional support when required.
  • Detection of signs of self-harm risk or emotional dependency on the AI.
  • Inclusion of reminders to pause lengthy interactions.
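OpenAI has not published the internals of this safety layer, but as a rough illustration, the following Python sketch shows how the safeguards listed above could fit together: a toy keyword-based flagger (a real system would rely on trained classifiers, not phrase matching), empathetic response templates that avoid confirming delusional content, and a pause reminder after long sessions. All names, phrases, and thresholds here are invented for illustration.

```python
# Hypothetical sketch only: OpenAI has not described its detection logic.
# This toy "safety layer" illustrates the kind of routing the article
# reports: flag risky language, respond empathetically, avoid validating
# delusional content, and suggest professional help or a break.

from dataclasses import dataclass

# Toy signal phrases; a production system would use trained classifiers.
RISK_PHRASES = {
    "crisis": ["hurt myself", "end it all", "no reason to live"],
    "delusion": ["they are watching me through", "secret messages meant for me"],
}

SESSION_BREAK_AFTER = 50  # hypothetical turn count before a pause reminder

@dataclass
class SafetyAssessment:
    category: str       # "crisis", "delusion", or "none"
    suggest_break: bool

def assess(message: str, turn_count: int) -> SafetyAssessment:
    lowered = message.lower()
    for category, phrases in RISK_PHRASES.items():
        if any(p in lowered for p in phrases):
            return SafetyAssessment(category, turn_count >= SESSION_BREAK_AFTER)
    return SafetyAssessment("none", turn_count >= SESSION_BREAK_AFTER)

def respond(message: str, turn_count: int) -> str:
    result = assess(message, turn_count)
    if result.category == "crisis":
        # Empathetic, non-clinical, and points toward professional help.
        return ("I'm really sorry you're feeling this way. You deserve support "
                "right now from someone who can help, such as a crisis helpline "
                "or a mental health professional.")
    if result.category == "delusion":
        # Acknowledge the feeling without confirming the belief.
        return ("That sounds like a frightening experience. I can't confirm "
                "that this is happening, and talking it through with a mental "
                "health professional could help.")
    reminder = ("\n\nWe've been chatting for a while; it might be a good "
                "moment to take a short break.") if result.suggest_break else ""
    return "Thanks for sharing. How can I help?" + reminder

print(respond("I keep getting secret messages meant for me", turn_count=3))
```

In this sketch the detection and response steps are deliberately separated, mirroring the article's point that the system recognizes language patterns without diagnosing the user.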

Results: Fewer Inadequate Responses

Internal tests by OpenAI show that inappropriate or insufficient model responses in high-risk conversations decreased by 65% to 80%. Moreover, the updated system achieved a 92% compliance rate against OpenAI's standards for empathetic and safe responses, compared with only 27% for the previous model.

Updated Rules and Limits for AI

OpenAI also revised its internal guidelines (known as the Model Spec) to ensure the model meets the following requirements (a hypothetical illustration of how such rules might be checked appears after the list):

  • Does not provide medical diagnoses or direct advice.
  • Does not encourage emotional dependency behaviors.
  • Does not validate false or delusional ideas.
  • Clearly states that it is not a substitute for professional mental health assistance.
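The Model Spec is a prose policy document rather than code, and OpenAI has not described any automated enforcement mechanism. Still, as a purely hypothetical illustration, the rules above can be imagined as a post-generation checklist. Everything in the sketch below, from the pattern lists to the passes_policy function, is an assumption for illustration only.

```python
# Hypothetical illustration: the Model Spec is a policy document, not code.
# This sketch imagines its rules as a checklist applied to a draft reply.
# All rule names and phrases below are invented for illustration.

DISALLOWED_PATTERNS = {
    "medical_diagnosis": ["you have bipolar disorder", "this is schizophrenia"],
    "emotional_dependency": ["you only need me", "i'm the only one who understands you"],
    "validating_delusion": ["yes, they really are watching you"],
}

# In sensitive conversations, the reply should acknowledge the AI's limits.
REQUIRED_PHRASE = "not a substitute for professional"

def passes_policy(draft_reply: str, sensitive_context: bool) -> bool:
    lowered = draft_reply.lower()
    # Reject any reply matching a disallowed pattern.
    for _rule, patterns in DISALLOWED_PATTERNS.items():
        if any(p in lowered for p in patterns):
            return False
    # Require the limits disclaimer in sensitive conversations.
    if sensitive_context and REQUIRED_PHRASE not in lowered:
        return False
    return True

print(passes_policy(
    "I'm not a substitute for professional mental health care, but I'm here to listen.",
    sensitive_context=True))   # True
print(passes_policy(
    "Yes, they really are watching you.",
    sensitive_context=True))   # False
```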

Key Questions and Answers

  • What prompted OpenAI to make these changes? OpenAI recognized that many users rely on AI not only for academic or work-related tasks but also as a space for emotional release or companionship.
  • How does ChatGPT now handle mental health crises? The chatbot responds with empathy, avoids reinforcing delusional beliefs, and suggests professional support when necessary.
  • What specific changes were implemented? ChatGPT now offers more empathetic responses, avoids confirming hallucinations or delusional ideas, suggests professional support, detects self-harm signals, and includes reminders to pause lengthy interactions.
  • How effective have these changes been? Internal tests show a 65% to 80% reduction in inadequate responses in high-risk conversations and a 92% compliance rate with OpenAI's standards for empathetic and safe responses.