How Google Maps protects against fake content
Over 300 million contributors share their experiences on Google Maps each year, helping people get the latest information for more than 250 million places around the world. With over 20 million reviews, photos, business hour updates and other contributions added to Maps each day, we’re invested in making sure information is accurate and unhelpful content is removed.
We previously shared how we use both automated technologies and our expert teams to catch fake reviews before they’re ever seen. Now we’re sharing three ways we stop policy-violating content from being submitted.
Responding quickly to real-time abuse
Our systems constantly monitor for unusual patterns in contributed content. When we detect suspicious activity, we act quickly and may implement protections to prevent further abuse. This can include everything from taking down policy-violating content to temporarily disabling new contributions. For example, earlier this year we saw a sudden spike in 1-star reviews for a local bar in Missouri. To stop the abuse, we temporarily disabled ratings for the place so that the bar's overall rating would not be further affected. Meanwhile, we also removed policy-violating reviews and investigated the accounts that left them.
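The post doesn't detail how this monitoring works, but the core idea of flagging an unusual burst of 1-star reviews can be sketched as a simple threshold check. This is a minimal sketch, assuming a hypothetical Review record and illustrative thresholds (baseline_per_day, spike_factor, a 24-hour window); none of it reflects Google's actual detection logic:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Review:
    place_id: str
    rating: int            # star rating, 1-5
    submitted_at: datetime


def detect_one_star_spike(
    reviews: list[Review],
    now: datetime,
    window: timedelta = timedelta(hours=24),
    baseline_per_day: float = 2.0,   # assumed normal daily 1-star volume for the place
    spike_factor: float = 5.0,       # assumed multiple of baseline that counts as a spike
) -> bool:
    """Return True if recent 1-star reviews far exceed the place's usual volume."""
    recent_one_stars = sum(
        1 for r in reviews if r.rating == 1 and now - r.submitted_at <= window
    )
    return recent_one_stars >= spike_factor * baseline_per_day


def moderate(place_id: str, reviews: list[Review], now: datetime) -> str:
    """Map a detected spike to a temporary protection, e.g. pausing new ratings."""
    if detect_one_star_spike(reviews, now):
        return f"pause new ratings for {place_id} pending manual review"
    return "no action"
```

A production system would presumably compare against each place's own historical baseline and route flagged places to human reviewers rather than acting on a single static threshold.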
Preventing abuse ahead of sensitive moments
In addition to protecting places after we see signs of abuse, we proactively protect places during times when we anticipate an uptick in off-topic and unhelpful content. For example, around election time in the U.S., polling stations tend to receive contributions unrelated to the actual experience of visiting those locations. As a result, in 2020 we limited people's ability to suggest edits to phone numbers, addresses and other factual information for places like voting sites, to help avoid the spread of election-related misinformation.
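As a rough illustration of this kind of proactive protection, one could imagine a small policy table that locks certain edit types for sensitive place categories during a defined window. The category names, edit types and dates below are hypothetical assumptions, not Maps' real configuration:

```python
from datetime import date

# Hypothetical policy table: which factual-edit types are temporarily locked
# for which place categories (names and dates are illustrative only).
EDIT_LOCKS = {
    "polling_station": {"phone_number", "address", "hours"},
}

SENSITIVE_WINDOW = (date(2020, 10, 1), date(2020, 11, 15))


def edit_allowed(category: str, edit_type: str, today: date) -> bool:
    """Reject suggested edits of locked types for sensitive categories in-window."""
    start, end = SENSITIVE_WINDOW
    in_window = start <= today <= end
    return not (in_window and edit_type in EDIT_LOCKS.get(category, set()))


print(edit_allowed("polling_station", "address", date(2020, 11, 2)))  # False
print(edit_allowed("restaurant", "address", date(2020, 11, 2)))       # True
```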
Instating longer-term protections
Beyond these temporary protections, we also apply longer-term protections to places where we have found user contributions to be consistently unhelpful, harmful or off-topic. This includes places that people don't visit by choice or that are only accessible to people stationed or assigned there, such as police stations and prisons. A set of frameworks helps us evaluate how helpful user input might be for these types of places, and based on the outcome we may apply restrictions ranging from limiting contributions, to blocking a specific type of content, to blocking contributed content altogether.
In these instances, we may inform users when contributions to certain places can't be accepted. For example, if someone tries to write a review for a prison on Google Maps, they may see a notification banner explaining that this functionality is turned off, with a link to learn more about our policies. Even in cases where we impose restrictions, people can still see helpful information about these places, like their address, website and phone number.
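To make the tiered approach concrete, here is a minimal sketch assuming a simple enum of restriction levels, a per-category lookup, and a client-side check that surfaces the kind of notification banner described above. The tier names, category mapping and banner text are illustrative assumptions, not how Maps actually implements these protections:

```python
from enum import Enum
from typing import Optional


class ContributionPolicy(Enum):
    """Illustrative restriction tiers, from least to most restrictive."""
    OPEN = "all contributions allowed"
    LIMITED = "contributions allowed with extra moderation"
    REVIEWS_BLOCKED = "a specific content type (reviews and ratings) is blocked"
    ALL_BLOCKED = "all user contributions are blocked"


# Hypothetical mapping of place categories to a default policy tier.
DEFAULT_POLICY = {
    "restaurant": ContributionPolicy.OPEN,
    "police_station": ContributionPolicy.REVIEWS_BLOCKED,
    "prison": ContributionPolicy.REVIEWS_BLOCKED,
}


def policy_for(category: str) -> ContributionPolicy:
    return DEFAULT_POLICY.get(category, ContributionPolicy.OPEN)


def review_banner(category: str) -> Optional[str]:
    """Message a client could show when reviews are turned off for a place."""
    if policy_for(category) in (
        ContributionPolicy.REVIEWS_BLOCKED,
        ContributionPolicy.ALL_BLOCKED,
    ):
        return "Reviews are turned off for this type of place. Learn more about our policies."
    return None


print(review_banner("prison"))       # banner text is shown
print(review_banner("restaurant"))   # None: reviews stay open
```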
When circumstances warrant protections, our wide range of techniques helps prevent bad content from being contributed. We'll continue to evolve our framework and invest in proactive ways to keep information on Maps helpful and reliable.