OpenAI is beefing up ChatGPT’s mental-health guardrails by training it to better spot signs of emotional distress and share evidence-based resources when users might be struggling. The move, developed with input from experts and advisory groups, comes after reports that earlier versions sometimes fueled people’s delusions—and after the company rolled back an “overly sycophantic” update in April that made the chatbot too agreeable in harmful situations.
On top of that, ChatGPT will now prompt long-session users to “take a break” and, in “high-stakes” personal dilemmas (think “Should I break up with my partner?”), it’ll shy away from definitive answers, instead walking you through your options. These tweaks land just as OpenAI readies its GPT-5 model for release.