
Our latest quality improvements for Search


Search can always be improved. That was true when I started working on Search in 1999, and it's still true today. Back then, the Internet was expanding at an incredible rate. We had to make sense of this explosion of information, organize it, and present it so that people could find what they were looking for right on the Google results page. The work then centered on PageRank, the core algorithm used to measure the importance of webpages so they could be ranked in results. In addition to organizing information, our algorithms have always had to grapple with individuals or systems seeking to "game" our systems in order to appear higher in search results, using low-quality "content farms," hidden text and other deceptive practices. Over the years we've tackled these and other problems by making regular updates to our algorithms and introducing features that prevent people from gaming the system.
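To make the PageRank idea concrete, here is a minimal sketch of the power-iteration formulation described in Brin and Page's original paper. The toy link graph, damping factor and iteration count are illustrative choices for the example, not anything from a production system:

```python
# Minimal PageRank sketch (illustrative only): power iteration over a tiny
# link graph, following the publicly published formulation. The graph,
# damping factor, and iteration count are arbitrary example values.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}  # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / n for page in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
print(pagerank(graph))  # "c" accumulates the most rank in this toy graph
```

The intuition: a page is important if important pages link to it, and the iteration converges to a stable importance score for every page in the graph.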

Today, in a world where tens of thousands of pages come online every minute of every day, there are new ways that people try to game the system. The most high-profile of these issues is the phenomenon of "fake news," where content on the web has contributed to the spread of blatantly misleading, low-quality, offensive or downright false information. While this problem is different from issues in the past, our goal remains the same: to provide people with access to relevant information from the most reliable sources available. And while we may not always get it right, we're making good progress in tackling the problem. But achieving long-term, impactful change requires more structural changes in Search.

With that longer-term effort in mind, today we're taking the next step in surfacing more high-quality content from the web. This includes improvements in Search ranking, easier ways for people to provide direct feedback, and greater transparency around how Search works.

Search ranking

Our algorithms help identify reliable sources from the hundreds of billions of pages in our index. However, it's become very apparent that a small set of queries in our daily traffic (around 0.25 percent) has been returning offensive or clearly misleading content, which is not what people are looking for. To help prevent the spread of such content for this subset of queries, we've improved our evaluation methods and made algorithmic updates to surface more authoritative content.

  • New Search Quality Rater guidelines: Developing changes to Search involves a process of experimentation. As part of that process, we have evaluators (real people who assess the quality of Google's search results) give us feedback on our experiments. These ratings don't determine individual page rankings, but are used to help us gather data on the quality of our results and identify areas where we need to improve. Last month, we updated our Search Quality Rater Guidelines to provide more detailed examples of low-quality webpages for raters to appropriately flag, which can include misleading information, unexpected offensive results, hoaxes and unsupported conspiracy theories. These guidelines will begin to help our algorithms demote such low-quality content and help us make additional improvements over time.
  • Ranking changes: We combine hundreds of signals to determine which results we show for a given query, from the freshness of the content to the number of times your search terms appear on the page. We've adjusted our signals to help surface more authoritative pages and demote low-quality content, so that issues similar to the Holocaust denial results we saw back in December are less likely to appear. (A simplified sketch of how weighted signals can combine into a ranking score follows this list.)
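Google does not publish its actual signals or how they are weighted, so the following is purely a hypothetical sketch of the general technique the bullet describes: scoring each candidate page as a weighted combination of signals, with enough weight on authority to demote low-quality pages. Every signal name, weight and number here is invented for illustration:

```python
# Hypothetical sketch of combining ranking signals into a single score.
# The signal names, weights, and linear form are assumptions for this
# example; Google's real signals and their combination are not public.

from dataclasses import dataclass

@dataclass
class PageSignals:
    freshness: float      # e.g. recency of the content, scaled to [0, 1]
    term_matches: float   # e.g. normalized count of search terms on the page
    authority: float      # e.g. a link-based importance score like PageRank

# Invented weights; a demotion-style adjustment could be modeled as raising
# the weight on authority relative to raw term matching.
WEIGHTS = {"freshness": 0.2, "term_matches": 0.3, "authority": 0.5}

def score(signals: PageSignals) -> float:
    return (WEIGHTS["freshness"] * signals.freshness
            + WEIGHTS["term_matches"] * signals.term_matches
            + WEIGHTS["authority"] * signals.authority)

candidates = {
    "reliable-source.example": PageSignals(0.4, 0.6, 0.9),
    "keyword-stuffed.example": PageSignals(0.8, 1.0, 0.1),
}
ranked = sorted(candidates, key=lambda url: score(candidates[url]), reverse=True)
print(ranked)  # the authoritative page outranks the keyword-stuffed one
```

The design point the sketch captures: if authority carries more weight than raw term matching, a page that stuffs the query terms everywhere can still lose to a more reliable source.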

Direct feedback tools

When you visit Google, we aim to speed up your experience with features like Autocomplete, which helps predict the searches you might be typing so you can quickly get to the info you need, and Featured Snippets, which shows a highlight of information relevant to what you're looking for at the top of your search results. The content that appears in these features is generated algorithmically and is a reflection of what people are searching for and what's available on the web. This can sometimes lead to results that are unexpected, inaccurate or offensive. Starting today, we're making it much easier for people to directly flag content that appears in both Autocomplete predictions and Featured Snippets. These new feedback mechanisms include clearly labeled categories so you can tell us directly if you find sensitive or unhelpful content. We plan to use this feedback to help improve our algorithms. (A minimal sketch of the prefix-completion idea behind autocomplete-style features appears after the screenshots below.)
New feedback link for Autocomplete

Updated feedback link for Featured Snippets
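The production Autocomplete system is far more sophisticated than this, but the core prefix-completion idea can be sketched in a few lines: suggest popular past queries that extend what the user has typed so far. The query log and counts below are invented for the example:

```python
# Minimal sketch of prefix-based completion, the general idea behind an
# autocomplete feature: rank logged queries that start with the typed
# prefix by popularity. The query log and counts here are invented.

from collections import Counter

query_log = Counter({
    "weather today": 120,
    "weather tomorrow": 45,
    "web search tips": 30,
})

def complete(prefix: str, k: int = 3):
    """Return up to k logged queries starting with prefix, most popular first."""
    matches = [(q, c) for q, c in query_log.items() if q.startswith(prefix)]
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:k]]

print(complete("wea"))  # ['weather today', 'weather tomorrow']
```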

Greater transparency about our products

Over the last few months, we've been asked tough questions about why shocking or offensive predictions were appearing in Autocomplete. Based on this, we evaluated where we could improve our content policies and updated them accordingly. Now we're publishing this policy to the Help Center so anyone can learn more about Autocomplete and our approach to removals.

For those looking to delve a little deeper, we recently updated our How Search Works site to provide more information to users and website owners about the technology behind Search. The site includes a description of how Google's ranking systems sort through hundreds of billions of pages to return your results, as well as an overview of our user testing process.

There are trillions of searches on Google every year. In fact, 15 percent of searches we see every day are new—which means there’s always more work for us to do to present people with the best answers to their queries from a wide variety of legitimate sources. While our search results will never be perfect, we’re as committed as always to preserving your trust and to ensuring our products continue to be useful for everyone.
