An update on our mental health work
Mental health is one of the most significant public health challenges today, impacting over one billion people around the world. For many years, Google has been committed to helping people find high-quality information and crisis support in the moments they need it most. Our work on mental health has always been rooted in research and clinical best practices. We realize that AI tools can pose new challenges, but as they improve and more people use them as part of their daily lives, we believe that responsible AI can play a positive role for people’s mental well-being.
Today, we’re sharing an update on our mental health work, including some new changes to better connect people with the right information, resources, and human support at the right time.
1. Providing better access to crisis support
We're updating Gemini to streamline the path to support for those who need it. When a conversation signals that a user may need information about mental health, Gemini will surface a redesigned "Help is available" module — developed with clinical experts — to provide more effective and immediate connections to care.
When Gemini recognizes a conversation that indicates a potential crisis related to suicide or self-harm, we're introducing a new, simplified "one-touch" interface that provides an immediate connection to crisis hotline resources, letting the user chat, call, text, or visit the crisis hotline website. Once the interface is activated, the option to reach out for professional help will remain clearly available throughout the remainder of the conversation.
2. Scaling the impact of crisis support
Today, Google.org is announcing $30 million in funding over the next three years to support crisis hotlines around the world, helping them scale their capacity to provide immediate and safe support for people in crisis.
We are expanding our partnership with ReflexAI to help social sector organizations scale their mental health support services. This initiative includes $4 million in direct funding and the integration of Gemini into ReflexAI’s training suite. Additionally, Google.org Fellows will provide pro bono technical expertise to help evolve Prepare, a customizable platform that uses realistic, AI-powered simulations to train staff and volunteers for critical conversations. Priority partners for this new stage include education organizations like Erika’s Lighthouse and Educators Thriving.
3. Helping Gemini respond in acute mental health situations
People are interacting with Gemini in deeper, more complex ways, looking for information across many different topics (including when they are experiencing mental health crises). Our clinical, engineering, and safety teams are focused on:
- Prioritizing safety and human connection: We want to provide practical help by connecting users with real-world resources and human support.
- Designing better responses: We design responses to encourage help-seeking while avoiding validation of harmful behaviors like urges to self-harm.
- Avoiding confirming false beliefs: We have trained Gemini not to agree with or reinforce false beliefs, and instead gently distinguish subjective experience from objective fact.
Although Gemini can be a useful tool for learning and getting information, it is not a substitute for professional clinical care, therapy, or crisis support for those who need it. This is why we've been training the model to recognize when a conversation might signal that a person is in an acute mental health situation, and to respond appropriately by directing them to real-world help.
4. Protecting younger users
We also have specific protections for minors using Gemini, designed to provide the most helpful responses and avoid harmful topics. For example:
- Persona protections designed to prevent Gemini from acting like a companion, including guardrails preventing it from claiming to be a human or possessing human attributes.
- Protections intended to prevent emotional dependence, avoiding language that simulates intimacy or expresses needs.
- Safeguards against encouraging bullying or other types of harassment.
Our safety efforts continue to evolve and reflect our ongoing commitment to creating a healthy and positive digital environment where young people can explore and learn with confidence.
These updates are part of our long-term commitment to helping people, combining the best of Google's technology with the expertise of our clinicians and safety experts. We're encouraged by the potential of these tools to make support more accessible, compassionate, and effective.