Using AI to keep Google Search safe
Every day, people come to Google looking for ways to keep themselves and their families safe. From highlighting resources in the wake of a natural disaster to providing time-sensitive health information, we’re constantly working on new features and improvements to help you quickly find what you need. And advancements in AI can power new technologies, like flood forecasting, to help people stay out of harm’s way.
Here’s a look at how our AI systems are helping us connect people to critical information while avoiding potentially shocking or harmful content — so you can stay safe, both online and off.
Finding trustworthy, actionable information when you need it most
We know that people come to Search in the moments that matter most. Today, if you search on Google for information on suicide, sexual assault, substance abuse and domestic violence, you’ll see contact information for national hotlines alongside the most relevant and helpful results.
But people in personal crises search in all kinds of ways, and it’s not always obvious to us that they’re in need. And if we can’t accurately recognize that, we can’t design our systems to show the most helpful search results. That's why using machine learning to understand language is so important.
Now, using our latest AI model, MUM, we can automatically and more accurately detect a wider range of personal crisis searches. MUM can better understand the intent behind people’s questions to detect when a person is in need, which helps us more reliably show trustworthy and actionable information at the right time. We’ll start using MUM to make these improvements in the coming weeks.
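The post doesn’t describe the implementation, but the flow it outlines — classify the intent behind a query, then surface crisis resources only when the model is confident someone is in need — can be sketched roughly. Everything below (the stub classifier, the labels, the confidence threshold, the panel contents) is invented for illustration and is not Google’s actual system; a real system would use a trained language model like MUM, not keyword matching.

```python
# Hypothetical sketch: routing crisis-intent queries to hotline resources.
# The classifier here is a stand-in for a large language model; all names,
# labels, and thresholds are illustrative only.

from dataclasses import dataclass

@dataclass
class IntentPrediction:
    label: str         # e.g. "crisis" or "other"
    confidence: float  # model's probability for that label

def stub_classifier(query: str) -> IntentPrediction:
    # Stand-in for an ML model. A real system infers intent from
    # language understanding, not from a fixed phrase list.
    crisis_phrases = ("want to end it", "hurt myself", "can't go on")
    if any(p in query.lower() for p in crisis_phrases):
        return IntentPrediction("crisis", 0.93)
    return IntentPrediction("other", 0.88)

HOTLINE_PANEL = {"title": "Help is available",
                 "hotline": "988 Suicide & Crisis Lifeline (US)"}

def build_results(query: str, threshold: float = 0.8) -> dict:
    """Attach a hotline panel above organic results only when the
    model is confident the query expresses a personal crisis."""
    pred = stub_classifier(query)
    results = {"query": query, "organic_results": ["..."]}
    if pred.label == "crisis" and pred.confidence >= threshold:
        results["top_panel"] = HOTLINE_PANEL
    return results
```

The confidence threshold is the key knob in a design like this: showing crisis resources on an unrelated query is merely odd, while missing a genuine crisis query is costly, so the two error types are weighted very differently.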
Steering clear of unexpected shocking content
Keeping you safe on Search also means helping you steer clear of unexpected shocking results. This can be challenging, because content creators sometimes use benign terms to label explicit or suggestive content. And the most prevalent content that matches your search may not be what you intended to find. In these cases, even if people aren't directly seeking explicit content, it can show up in their results.
One way we tackle this is with SafeSearch mode, which offers users the option to filter explicit results. This setting is on by default for Google accounts for people under 18. And even when users choose to have SafeSearch off, our systems still reduce unwanted racy results for searches that aren't seeking them out. In fact, every day, our safety algorithms make hundreds of millions of searches safer globally across web, image and video modes.
But there’s still room for improvement, and we’re using advanced AI technologies like BERT to better understand what you’re looking for. BERT has improved our understanding of whether searches are truly seeking out explicit content, helping us vastly reduce your chances of encountering surprising search results.
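Taken together, the two mechanisms described above — a user-controlled filter and a model-based judgment of whether the query itself seeks explicit content — suggest a ranking policy that can be sketched as follows. The scores, the filter cutoff, and the demotion weight below are all invented for illustration; this is not Google’s implementation.

```python
# Hypothetical sketch of SafeSearch-style handling. Each result carries
# an explicitness score from some classifier (not modeled here). When
# SafeSearch is on, explicit results are filtered out entirely; when it's
# off but the query doesn't appear to seek explicit content (a judgment a
# BERT-like model could make), racy results are only demoted, not removed.

def rank_results(results, safesearch_on, query_seeks_explicit):
    """results: list of (doc_id, relevance, explicit_score) tuples,
    where both scores are in [0, 1]. Returns doc_ids in rank order."""
    if safesearch_on:
        # Filter mode: drop anything the classifier scores as explicit.
        results = [r for r in results if r[2] < 0.5]
    # Demote racy results only when the user hasn't opted into them
    # and the query itself isn't seeking them out.
    demotion = 0.3 if (not safesearch_on and not query_seeks_explicit) else 0.0
    ranked = sorted(results,
                    key=lambda r: r[1] - demotion * r[2],
                    reverse=True)
    return [doc_id for doc_id, _, _ in ranked]
```

The design point worth noting is the middle case: filtering nothing while still demoting, which respects the user’s setting but reduces the chance of a surprise when a benignly worded query happens to match explicit content.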
This is a complex challenge we’ve been tackling for a while — but in the last year alone, this BERT improvement has reduced unexpected shocking results by 30%. It’s been especially effective in reducing explicit content for searches related to ethnicity, sexual orientation and gender, which can disproportionately impact women and especially women of color.
Scaling our protections around the world
MUM can transfer knowledge across the 75 languages it’s trained on, which can help us scale safety protections around the world much more efficiently. When we train one MUM model to perform a task — like classifying the nature of a query — it learns to do it in all the languages it knows.
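The reason one trained model can serve many languages is the architecture this paragraph implies: a single classifier sits on top of a shared multilingual encoder, so weights learned from labeled data in one language apply wherever the encoder maps translations to similar representations. The toy below fakes the encoder with a fixed lookup purely to show the pattern; it is not a real model, and the vectors, weights, and queries are all invented.

```python
# Hypothetical sketch of cross-lingual transfer: one classifier head over
# a shared multilingual encoder. Because the encoder maps semantically
# similar queries in different languages to nearby vectors, a head trained
# once (here, hand-set weights) works across languages. The "encoder" is
# a stub lookup standing in for a MUM- or mBERT-style model.

def multilingual_encode(query: str) -> list:
    # Stand-in: translations of the same query get similar vectors.
    shared_meaning = {
        "how to report spam":  [0.90, 0.10],  # English
        "cómo denunciar spam": [0.88, 0.12],  # Spanish, same meaning
        "best hiking trails":  [0.10, 0.90],  # unrelated query
    }
    return shared_meaning.get(query, [0.5, 0.5])

def is_spam_report(query: str, weights=(1.0, -1.0)) -> bool:
    # One classifier head, trained once, applied to every language the
    # encoder covers: the same weights score the Spanish query correctly
    # even though no Spanish training data was used for the head.
    vec = multilingual_encode(query)
    score = sum(w * x for w, x in zip(weights, vec))
    return score > 0.0
```

This is why the paragraph can claim efficiency at scale: the per-language cost is borne once by the encoder, and each new task needs labeled data in only a handful of languages.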
For example, we use AI to reduce unhelpful and sometimes dangerous spam pages in your search results. In the coming months, we’ll use MUM to improve the quality of our spam protections and expand to languages where we have very little training data. We'll also be able to better detect personal crisis queries all over the world, working with trusted local partners to show actionable information in several more countries.
Like any improvement to Search, these changes have gone and will continue to go through rigorous evaluation — with input from our search raters around the world to make sure we’re providing more relevant, helpful results. Whatever you’re searching for, we’re committed to helping you safely find it.