
4 ways we think about health equity and AI

[Illustration: a diverse group of people against a blue background of medical and health icons]

I became a physician because I knew healthcare could be better than what my family experienced while we fought for quality care for my father when I was growing up. That experience drives my work to ensure that everyone can access care provided with dignity and respect.

As Google’s Chief Health Equity Officer, I see firsthand how AI technologies have the potential to identify and address existing biases in healthcare and advance health equity. But if they are not built responsibly, these innovations can also exacerbate inequities. To guard against that, we’ve identified four ways we embed health equity into our work, pushing AI forward in a bold and responsible way to help people live healthier lives.

Taking foundational approaches to equity research

To reflect the experiences of historically marginalized people and communities, we first integrate foundational health equity approaches, like Community-based Participatory Research (CBPR), into our design and evaluation methods. It is equally important to understand the social context of our users, including their cultural, historical and economic circumstances, so we can build solutions that work better for everyone. One example of applying our years of experience building more equitable AI models across products is our work using AI to see more skin tones, which has helped us create camera features that work for everyone. Getting it right takes intention; getting it wrong can easily propagate unfair biases.

Prioritizing diverse representation in data

Historically, clinical trial research has lacked diversity, excluding historically marginalized groups from an important step in finding new ways to prevent, detect and treat disease. That is why we strive to make our data collection and curation processes inclusive and equitable, and think deeply about model development and evaluation, considering what data goes into a large language model and how to measure its performance. There is still no standard for diverse representation in data, which is why we are partnering with the broader AI research community to identify best practices.

One way we are working to understand and better treat disease is through genomic sequencing, but the reference map used for decades is a single genome sequence that does not represent the diversity of humanity. Today, we are working with the National Institutes of Health (NIH) and others on the Pangenome project to expand our view of the code that makes us all uniquely human. The first Pangenome release includes 47 people of diverse ancestries, and we’re working with NIH toward a goal of 100 people next year, with the highest-quality sequences possible.
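To make the idea of auditing representation concrete, here is a minimal sketch of what checking a curated dataset against reference population shares might look like. This is not Google’s curation pipeline; the ancestry column, group labels and reference shares are hypothetical placeholders.

```python
# A minimal, illustrative audit of demographic representation in a dataset.
# Not Google's pipeline: the column name, groups and reference shares below
# are hypothetical placeholders.
import pandas as pd

def representation_report(df: pd.DataFrame, column: str,
                          reference: dict[str, float]) -> pd.DataFrame:
    """Compare observed group shares in `column` against reference shares."""
    observed = df[column].value_counts(normalize=True)
    rows = []
    for group, target in reference.items():
        share = float(observed.get(group, 0.0))
        rows.append({
            "group": group,
            "observed_share": round(share, 3),
            "reference_share": target,
            "gap": round(share - target, 3),  # negative => under-represented
        })
    return pd.DataFrame(rows)

# Toy example: a cohort skewed toward one ancestry group.
cohort = pd.DataFrame({"ancestry": ["A"] * 70 + ["B"] * 20 + ["C"] * 10})
reference = {"A": 0.40, "B": 0.35, "C": 0.25}  # hypothetical population shares
print(representation_report(cohort, "ancestry", reference))
```

A report like this only surfaces gaps; deciding which reference shares are appropriate, and how to close the gaps, is exactly where community partnership and shared best practices come in.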

Considering health equity in real-world use cases

The historical use of incomplete and biased data can exacerbate the risk of harm and bias for historically marginalized populations. To correct this, we need to carefully consider how an AI system will be used in practice. Grounding the evaluation of large language models (LLMs) in specific real-world use cases that reflect the experiences of marginalized populations is an important element in reducing these risks and, we hope, increasing equity. Across Google, we’ve been working to improve fairness, reduce the risk of bias and drive toward equity as we continue to enhance model performance. Some of this work, highlighted in a Nature article, describes how we apply these approaches to our Med-PaLM LLM in the medical domain.
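As one hedged illustration of what use-case-grounded, disaggregated evaluation can mean in practice, the sketch below averages rater scores for model answers per use case and subgroup, and flags combinations that fall below a threshold. The use cases, subgroups, scores and threshold are invented for illustration; this is not the Med-PaLM evaluation protocol.

```python
# A minimal sketch of use-case-grounded subgroup evaluation. Records, rater
# scores and the threshold are hypothetical; the point is disaggregating
# answer quality by use case and subgroup rather than one aggregate score.
from collections import defaultdict
from statistics import mean

# Each record: the use case, the subgroup it reflects, and a rater's
# quality score for the model's answer (1.0 = fully adequate).
evaluations = [
    {"use_case": "symptom triage", "subgroup": "rural patients", "score": 0.9},
    {"use_case": "symptom triage", "subgroup": "non-native speakers", "score": 0.6},
    {"use_case": "medication questions", "subgroup": "rural patients", "score": 0.8},
    {"use_case": "medication questions", "subgroup": "non-native speakers", "score": 0.7},
]

def subgroup_gaps(records, min_acceptable=0.75):
    """Average scores per (use_case, subgroup) and flag those below threshold."""
    buckets = defaultdict(list)
    for r in records:
        buckets[(r["use_case"], r["subgroup"])].append(r["score"])
    for (use_case, subgroup), scores in sorted(buckets.items()):
        avg = mean(scores)
        flag = "NEEDS REVIEW" if avg < min_acceptable else "ok"
        print(f"{use_case} / {subgroup}: {avg:.2f} [{flag}]")

subgroup_gaps(evaluations)
```

An aggregate score over all four records would look acceptable; disaggregating reveals that one subgroup is underserved on one use case, which is precisely the kind of gap this grounding is meant to surface.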

Fostering inclusive collaboration

Where a person lives, works or goes to school can affect their health. To create useful generative AI models, we need to recognize and understand these social drivers, which depends on collaboration with experts across different areas, like social and behavioral science, policy and education. Partnering with Google’s Responsible AI Team and their Equitable AI Research Roundtable (EARR) Program, we take a multidisciplinary approach to understanding the impacts of AI on historically marginalized communities and apply those insights to our work.

Our work at the intersection of AI and health equity is an ongoing journey, one that requires responsibility and accountability. We must intentionally center these efforts on marginalized populations to build solutions that make healthcare more equitable and address historical biases. This work takes time and care. Our aim is not to move fast, but to get it right; the alternative is not an option.
