
Our ongoing work to fight misinformation online


At Google, we aim to balance access to information with protecting users and society. For information to be helpful, it also has to be safe.

We take seriously our responsibility to provide access to trustworthy information and content: protecting users from harm, delivering reliable information, and partnering with experts and organisations to create a safer internet.

Our product, policy and enforcement decisions are guided by principles that value openness and accessibility, personal choice and the diversity of our users. We prioritize preserving the freedom of expression that the internet so powerfully affords, while curbing the spread of content that is damaging to users and society.

Protecting users from harm and abuse

We’re constantly evolving the tools, policies and techniques we use to find content abuse.

AI is showing tremendous promise for scaling abuse detection across our platforms.

For instance, we’ve built a prototype that leverages recent advances in Large Language Models, or LLMs, to help identify abusive content at scale. LLMs are a type of artificial intelligence that can generate and understand human language.

Using LLMs, our aim is to be able to rapidly build and train a model in a matter of days, instead of weeks or months, to find specific kinds of abuse on our products. This is especially valuable for new and emerging abuse areas: we can quickly prototype a model that’s an expert in finding a specific type of abuse and automatically route the content it flags to our teams for enforcement. We’re still testing these new techniques, but the prototypes have demonstrated impressive results so far and show promise for a major advance in our effort to proactively protect users, especially from new and emerging risks.
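
For readers curious about the mechanics, here is a minimal sketch of the general pattern: prompting an LLM to act as a classifier for a single, specific policy, then routing only the flagged items to a human enforcement queue. The prompt wording, the call_llm stand-in and the triage helper are illustrative assumptions, not Google’s actual prototype.

```python
# A minimal, illustrative sketch of LLM-assisted abuse detection.
# Everything here (prompt wording, the call_llm stand-in, the triage
# helper) is a hypothetical example, not Google's actual system.

from dataclasses import dataclass

PROMPT = (
    "You are a content policy classifier.\n"
    "Policy: {policy}\n"
    "Content: {content}\n"
    "Answer with exactly one word: VIOLATING or OK."
)

@dataclass
class Verdict:
    content: str
    violating: bool

def call_llm(prompt: str) -> str:
    # Stand-in for a real model call; a deployed system would query a
    # hosted LLM here. This toy version just flags an obvious keyword.
    return "VIOLATING" if "miracle cure" in prompt.lower() else "OK"

def classify(content: str, policy: str) -> Verdict:
    """Ask the model whether one piece of content violates one policy."""
    answer = call_llm(PROMPT.format(policy=policy, content=content))
    return Verdict(content, answer.strip().upper() == "VIOLATING")

def triage(items: list[str], policy: str) -> list[Verdict]:
    """Classify a batch and keep only flagged items for human review."""
    return [v for v in (classify(c, policy) for c in items) if v.violating]

if __name__ == "__main__":
    policy = "No deceptive medical claims."
    queue = triage(
        ["This miracle cure reverses aging overnight!", "Museum opens Saturday."],
        policy,
    )
    for v in queue:
        print("Flagged for enforcement:", v.content)
```

The design point worth noting is that the policy lives in the prompt rather than in the model weights, which is what makes it fast to retarget the classifier at a new abuse area: updating a paragraph of policy text is much quicker than collecting labels and retraining a model.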

Helping users evaluate the information they find online

Today you’ll hear about some of the steps we’re taking to reduce the threat of misinformation and to promote trustworthy information in generative AI products, ranging from launching new tools to adapting our policies, many with the shared goal of ensuring users have additional context about what they’re seeing online.

Partnering to create a safer web

We also recognize that the scale of this problem requires that we partner with others. Investing in close partnerships to strengthen fact checking, media literacy and research on disinformation has been critical in the fight for quality information.

Just yesterday, I attended an event with 40 individuals representing some of the media literacy and digital responsibility organisations supported by Google.org, which connects nonprofits to funding and additional resources. Amongst them, Bibliothèques Sans Frontières, an NGO based here in Brussels, will use their grant to support teens in asking critical questions about what they read online.

This builds upon last year’s announcement of the Global Fact Checking Fund and a $10 million commitment to fight misinformation around the war in Ukraine, including new partnerships with think tanks and civil society organisations and cash grants for fact-checking networks and non-profits.

I’m really looking forward to hearing from some of the leading voices in fact checking, content moderation, policy and journalism today, and to seeing what we can each achieve when we work together.
