OpenAI is expanding ChatGPT’s capabilities with new mental health and wellness features designed to help users thrive in their daily lives. Instead of measuring success by how long users stay engaged, OpenAI says its goal is to ensure that people leave ChatGPT having accomplished what they came for, whether it’s solving a problem, learning something new, or making progress on a personal goal.
The company has begun testing wellness-focused features such as mood tracking, journaling prompts, and routine builders, now available to a limited group of users inside the ChatGPT mobile app. Beyond those tools, OpenAI is working on broader changes to how ChatGPT engages in emotionally sensitive situations. This includes training the model to offer grounded, honest responses during high-stakes conversations and to avoid being overly agreeable or prescriptive.
OpenAI is also introducing break reminders during extended sessions. These prompts are meant to nudge users into healthier usage patterns and reduce dependency. A key area of focus is how ChatGPT handles conversations where people feel emotionally stuck or are experiencing distress. The company acknowledges that GPT-4o sometimes fell short in recognizing signs of delusion or emotional vulnerability. As a result, it is developing new tools to detect and appropriately respond to signs of distress, guiding users toward evidence-based resources rather than trying to act as a substitute for professional help.
OpenAI has provided examples of how ChatGPT should function in real-life scenarios. For instance, it could help a user prepare for a difficult conversation with their boss using practice dialogues and motivational guidance. It might also explain medical test results in simple terms or help someone think through a complex relationship question by weighing options, not giving answers.
Here’s a recap of the new features from OpenAI’s official announcement:
- Supporting you when you’re struggling. ChatGPT is trained to respond with grounded honesty. There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency. While rare, we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.
- Keeping you in control of your time. Starting today, you’ll see gentle reminders during long sessions to encourage breaks. We’ll keep tuning when and how they show up so they feel natural and helpful.
- Helping you solve personal challenges. When you ask something like “Should I break up with my boyfriend?” ChatGPT shouldn’t give you an answer. It should help you think it through—asking questions, weighing pros and cons. New behavior for high-stakes personal decisions is rolling out soon.
These efforts are supported by close collaboration with medical professionals and researchers. OpenAI has consulted with over 90 doctors in more than 30 countries to create rubrics for evaluating sensitive conversations. The company is also working with human-computer interaction experts and mental health specialists to refine how ChatGPT identifies and responds to vulnerable moments. An advisory group focused on mental health and youth development is being formed to guide this ongoing work.
The changes reflect OpenAI’s stated goal of making ChatGPT useful without being addictive. Its vision is not to capture attention, but to be genuinely helpful in ways that make users want to return, whether that’s daily, weekly, or occasionally. The company says its internal benchmark is simple: if someone it cares about turned to ChatGPT for support, would it feel reassured? Every update aims to move that answer closer to yes.
