Supporting the Elections for European Parliament in 2024
This year, a number of key elections are taking place around the world. On June 6-9, 2024, voters across the 27 Member States of the European Union will head to the polls to elect Members of the European Parliament (MEPs). We are committed to supporting this democratic process by surfacing high-quality information to voters, safeguarding our platforms from abuse and equipping campaigns with best-in-class security tools and training. Across our efforts, we’ll have an increased focus on the role of artificial intelligence (AI) and the part it can play in the misinformation landscape — while also leveraging AI models to augment our abuse-fighting efforts.
Informing voters by surfacing high-quality information
In the build-up to elections, people need useful, relevant and timely information to help them navigate the electoral process. Here are some of the ways we make it easy for people to find what they need:
- Voting details on Google Search: In the coming months, when people search for topics like “how to vote,” they will find details about how they can vote — such as ID requirements, registration, voting deadlines, voting abroad and guidance for different means of voting, like in person or by mail. We’re collaborating with the European Parliament, which aggregates information from the Electoral Commissions and authorities of the 27 EU Member States.
- Authoritative information on YouTube: For news and information related to elections, our systems prominently surface content from authoritative sources on the YouTube homepage, in search results and in the “Up Next” panel. YouTube also displays information panels at the top of search results and below videos to provide additional context from authoritative sources. For example, YouTube may surface various election information panels above search results or on videos related to election candidates, parties or voting.
- Ongoing transparency on Election Ads: All advertisers who wish to run election ads in the EU on our platforms are required to go through a verification process and have an in-ad disclosure that clearly shows who paid for the ad. These ads are published in our Political Ads Transparency Report, where anyone can look up information such as how much was spent and where it was shown. We also limit how advertisers can target election ads.
Safeguarding our platforms and disrupting the spread of misinformation
To better secure our products and prevent abuse, we continue to enhance our enforcement systems and to invest in Trust & Safety operations — including at our Google Safety Engineering Center (GSEC) for Content Responsibility in Dublin, dedicated to online safety in Europe and around the world. We also continue to partner with the wider ecosystem to combat misinformation.
- Enforcing our policies and using AI models to fight abuse at scale: We have long-standing policies that inform how we approach areas like manipulated media, hate and harassment, and incitement to violence — along with policies around demonstrably false claims that could undermine democratic processes, for example in YouTube’s Community Guidelines and our political content policies for advertisers. To help enforce our policies, our AI models are enhancing our abuse-fighting efforts. With recent advances in our Large Language Models (LLMs), we’re building faster and more adaptable enforcement systems that enable us to remain nimble and take action even more quickly when new threats emerge.
- Working with the wider ecosystem on countering misinformation: Since our inaugural contribution of €25 million to help launch the European Media & Information Fund, an effort designed to strengthen media literacy and fight misinformation across Europe, 70 projects have been funded across 24 countries so far. These cover topics ranging from fact-checking during elections and critical events to improving the media literacy of populations who are typically harder to reach. We also support the Global Fact Check Fund, as well as numerous civil society, research and media literacy efforts from partners, including Google.org grantee TechSoup Europe, the Civic Resilience Initiative, the Baltic Centre for Media Excellence, CEDMO and more.
- Prebunking to preempt manipulation online: Google and Jigsaw recently announced a prebunking campaign ahead of the European Parliamentary elections. The campaign — which uses short video ads on social media to teach audiences how to spot common manipulation techniques before they encounter them — kicks off this spring in France, Germany, Italy, Belgium and Poland. The videos will also be translated and made available in all EU languages.
Helping people navigate AI-generated content
Like any emerging technology, AI presents new opportunities as well as challenges. For example, generative AI makes it easier than ever to create new content, but it can also raise questions about the trustworthiness of information, as we see with “deepfakes.” We have policies across our products and services that address mis- and disinformation in the context of AI. Here are some of the ways we help people navigate AI-generated content:
- Ads disclosures: We’ve expanded our political content policies to require advertisers to disclose when their election ads include synthetic content that inauthentically depicts real or realistic-looking people or events. Our ads policies already prohibit the use of manipulated media to mislead people, such as deepfakes or doctored content.
- Content labels on YouTube: YouTube’s misinformation policies prohibit technically manipulated content that misleads users and could pose a serious risk of egregious harm. Over the coming months, YouTube will require creators to disclose when they’ve created realistic altered or synthetic content, and will display a label indicating to viewers when the content they’re watching is synthetic.
- A responsible approach to generative AI products: In line with our principled and responsible approach to generative AI products like Gemini, we’ve prioritized testing across safety risks ranging from cybersecurity vulnerabilities to misinformation and fairness. Out of an abundance of caution on such an important topic, we will soon restrict the types of election-related queries for which Gemini will return responses.
- Providing users with additional context:
- About this image in Search helps people assess the credibility and context of images found online.
- Our double-check feature in Gemini, which enables people to evaluate whether there’s content across the web to substantiate Gemini’s response, is now rolling out in countries across the EU.
- Digital watermarking and more transparency:
- SynthID, a tool from Google DeepMind, directly embeds a digital watermark into AI-generated images and audio.
- We recently joined the C2PA coalition and standard, a cross-industry effort to help provide more transparency and context for people on AI-generated content.
Equipping campaigns and candidates with best-in-class security features and training
As elections come with increased cybersecurity risks, we are working hard to help high-risk users, such as campaigns and election officials, improve their security in light of existing and emerging threats, and to educate them on how to use our products and services.
- Security tools for campaign and election teams: We offer free services like our Advanced Protection Program — our strongest set of cyber protections — and Project Shield, which provides unlimited protection against Distributed Denial of Service (DDoS) attacks. We also partner with PUBLIC, the International Foundation for Electoral Systems (IFES) and Deutschland sicher im Netz (DSiN) to scale account security training and to provide security tools including Titan Security Keys, which defend against phishing attacks and prevent bad actors from accessing users’ Google Accounts.
- Tackling coordinated influence operations: Our Threat Analysis Group (TAG) and the team at Mandiant Intelligence help identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. We report on actions taken in our quarterly TAG bulletin, and meet regularly with government officials and others in the industry to share information on threats and suspected election interference. Mandiant also helps organizations build holistic election security programs and harden their defenses with comprehensive solutions, services and tools, including proactive exposure management, intelligence-led threat hunts, cyber crisis communication services and threat intelligence tracking of information operations.
- Helpful resources at euelections.withgoogle: We’re launching an EU-specific hub at euelections.withgoogle with resources and upcoming trainings to help campaigns connect with voters and manage their security and digital presence. Ahead of the European Parliamentary elections in 2019, we conducted in-person and online security training for more than 2,500 campaign and election officials, and in 2024 we aim to build on these numbers.
This all builds on work we do around elections in other countries and regions. We’re committed to working with government, industry and civil society to protect the integrity of elections in the European Union — building on our commitments made in the EU Code of Practice on Disinformation. Over the coming months, you’ll hear more from us on how we’re helping inform voters, equip campaigns and protect our platforms in the face of evolving threats, including at our Fighting Misinformation Online event in Brussels on March 21.