ChatGPT conversations worsened users’ psychological well-being and led to hospitalizations: report

What happened? OpenAI has found that recent ChatGPT updates may have made the bot overly sycophantic, emotionally attached, and prone to amplifying users’ fantasies or distress.

  • As The New York Times reported, several users said the chatbot acted like a friend who “understood” them, praised them excessively, and encouraged long, emotionally charged conversations.
  • In extreme cases, ChatGPT offered disturbing advice, including harmful affirmations, claims of simulated reality, spiritual communication, and even instructions related to self-harm.
  • A joint MIT-OpenAI study found that heavy users (those who have longer conversations with the chatbot) had worse mental and social outcomes.

Why is this important? OpenAI has responded to these issues by redesigning its safety systems, adding better crisis-detection tools, and introducing a safer model, GPT-5.

  • The chatbot’s validation-intensive behavior increased risks for vulnerable people prone to delusions.
  • OpenAI faces five wrongful death lawsuits, including cases in which the chatbot allegedly encouraged users to take dangerous actions.
  • As a result, the latest version of the chatbot offers deeper, condition-specific responses and pushes back harder against delusional thinking, marking OpenAI’s most significant safety overhaul.

Why should I care? If you’re an everyday ChatGPT user, this should worry you, especially if you use the chatbot for emotional support or therapy.

  • You will now notice more careful and measured responses from the chatbot, which are designed to discourage emotional dependency and to suggest breaks during longer sessions.
  • Parents can now receive notifications when their children express an intention to harm themselves. In addition, OpenAI is preparing age verification with a separate model for teenagers.
  • The new version of ChatGPT may seem “colder” or less emotional, but that reflects an intentional withdrawal of behaviors that previously created unhealthy emotional bonds.

Okay, what’s next? OpenAI will continue to refine its monitoring of long conversations and to ensure that users are not encouraged to take harmful actions toward themselves or those around them.

  • Age verification and a stricter, teen-oriented safety model are planned.
  • With the latest GPT-5.1 model, adults can choose personalities such as open-hearted, friendly, and quirky, among others.
  • Internally, OpenAI has declared a “Code Orange” and is pushing to regain user engagement without reintroducing the safety flaws it has worked to eliminate.
