
Our approach to protecting users from the risks of AI-generated media

For more than two decades, Google has worked with machine learning and AI to make our products more helpful. In India, AI has enabled language translation at scale, more precise flood forecasting, and improved agricultural productivity.

AI will be the biggest technological shift in our lifetime. It will create immense opportunities and transform every walk of life, and we’re excited to see the Indian government’s vision to use this technology to benefit its people: bridging linguistic divides, transforming agriculture, enhancing citizen and health services, empowering individuals through skill development, and more.

We’re pleased to have the opportunity to partner with the government on some of these programs, and to continue the dialogue, including through our upcoming engagement at the Global Partnership on Artificial Intelligence (GPAI) Summit. As we continue to incorporate AI, and more recently generative AI, into more Google experiences, we know it’s imperative to be bold and responsible together.

As with any transformative technology, there will be challenges that we need to address. Advancing AI responsibly means striking a balance between maximizing its positive impact and addressing its potential risks. While this may seem like a delicate dance, it is essential to embrace this tension in order to achieve long-term success. Only by prioritizing responsibility from the outset can we truly harness the transformative power of AI without compromising societal well-being.

An example of how we’re approaching this is anticipating and testing for a wide range of safety and security risks, including the rise of new forms of AI-generated, photo-realistic, synthetic audio or video content known as “synthetic media”. This technology has useful applications, such as opening new possibilities for people affected by speech or reading impairments, or new creative ground for artists and movie studios around the world. But it raises serious concerns when it is used for deep fakes, disinformation campaigns and other malicious purposes, where manipulated content can spread false narratives at scale.

Providing additional context for generative AI outputs

We are looking to help address these potential risks in multiple ways. One important consideration is helping users identify AI-generated content and empowering people with knowledge of when they’re interacting with AI-generated media. This is why we've added “About this result” to generative AI in Google Search, to help people evaluate the information they find in the experience. We also introduced new ways to help people double-check the responses they see in Google Bard by grounding them in Google Search.

Equally, context is important with images. We’re committed to finding ways to make sure every image generated through our products carries metadata labeling and embedded watermarking with SynthID, which is currently being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that creates photorealistic images from input text. We’re also making progress on tools to detect synthetic audio — in our AudioLM work, we trained a classifier that can detect synthetic audio produced by our own AudioLM model with nearly 99% accuracy.
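
To make the idea of metadata labeling concrete, here is a minimal Python sketch that attaches a simple provenance field to an image and reads it back. It is purely illustrative: SynthID embeds an imperceptible watermark directly into image pixels rather than a metadata tag, and the field names below ("ai_generated", "generator") are made-up examples, not part of any Google product or API.

```python
# A minimal, illustrative sketch of metadata labeling for a generated image.
# Note: this is NOT SynthID, which embeds an imperceptible watermark directly
# into image pixels; the "ai_generated" and "generator" fields are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def save_with_provenance_label(image: Image.Image, out_path: str) -> None:
    """Save a PNG with a simple, human-readable provenance label in its metadata."""
    metadata = PngInfo()
    metadata.add_text("ai_generated", "true")           # hypothetical field
    metadata.add_text("generator", "example-model-v1")  # hypothetical field
    image.save(out_path, pnginfo=metadata)


def read_provenance_label(path: str) -> dict:
    """Return any provenance-related text metadata found in the PNG."""
    text = getattr(Image.open(path), "text", {})
    return {k: v for k, v in text.items() if k in ("ai_generated", "generator")}


if __name__ == "__main__":
    # Stand-in for a model output: a blank 64x64 image.
    generated = Image.new("RGB", (64, 64), "white")
    save_with_provenance_label(generated, "generated_labeled.png")
    print(read_provenance_label("generated_labeled.png"))
    # -> {'ai_generated': 'true', 'generator': 'example-model-v1'}
```

A plain metadata tag like this can be stripped by editing or re-encoding, which is one reason a pixel-level watermark such as SynthID provides a more durable signal.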

In the coming months, YouTube will require creators to disclose realistic altered or synthetic content, including content made using AI tools, and we'll inform viewers about such content through labels in the description panel and video player. We're committed to working with creators before this rolls out to make sure they understand the new requirements.

Implementing guardrails and safeguards to address AI misuse

We’ve heard continuous feedback from creators, viewers, and artists about the ways in which emerging technologies could affect them. This is especially true in cases where someone’s face or voice could be digitally generated without their permission, or used to misrepresent their points of view. In the coming months, we’ll make it possible on YouTube to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, using our privacy request process.

We have a prohibited use policy for new AI releases outlining the harmful, inappropriate, misleading or illegal content we do not allow, based on early identification of harms during the research, development, and ethics review process for our products. This principle is applied across our product policies to address generative AI content.

We’re also thinking about how this issue may affect critical moments such as elections. That’s why we recently updated our election advertising policies to require advertisers to disclose when their election ads include material that’s been digitally altered or generated. This will help provide additional context to people seeing election advertising on our platforms.

And of course, we have long-standing policies across our products and services that are applicable to content created by generative AI. For instance, as part of our misrepresentation policy for Google Ads, we prohibit the use of manipulated media, deep fakes and other forms of doctored content meant to deceive, defraud, or mislead users. Our policies for Search features like Knowledge Panels or Featured Snippets prohibit audio, video, or image content that's been manipulated to deceive, defraud, or mislead. And on Google Play, apps that generate content using AI have always had to comply with all Google Play Developer Policies, including prohibiting and preventing the generation of restricted content and content that enables deceptive behavior.

Combating deep fakes and AI-generated misinformation

There is no silver bullet to combat deep fakes and AI-generated misinformation. It requires a collaborative effort, one that involves open communication, rigorous risk assessment, and proactive mitigation strategies. For example, on YouTube, we use a combination of people and machine learning technologies to enforce our Community Guidelines, with reviewers across Google operating around the world. In our systems, AI classifiers help detect potentially violative content at scale, and reviewers work to confirm whether content has actually crossed policy lines. AI is helping to continuously increase both the speed and accuracy of our content moderation systems.
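
As a rough illustration of that “classifiers flag, people confirm” flow, here is a short Python sketch. The threshold and the stand-in functions are hypothetical placeholders for this post, not a description of YouTube’s actual enforcement systems.

```python
# Illustrative sketch of a "classifier flags, humans confirm" moderation flow.
# The threshold and the stand-in functions below are hypothetical assumptions,
# not a description of YouTube's actual enforcement systems.
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.7  # hypothetical score above which a human takes a look


@dataclass
class ModerationDecision:
    video_id: str
    action: str  # "allow" or "remove_after_review"


def classifier_score(video_id: str) -> float:
    """Stand-in for an ML classifier scoring how likely content is violative (0-1)."""
    return 0.85  # dummy value for demonstration


def human_review_confirms_violation(video_id: str) -> bool:
    """Stand-in for a trained reviewer confirming whether policy was crossed."""
    return False  # dummy value for demonstration


def moderate(video_id: str) -> ModerationDecision:
    score = classifier_score(video_id)
    if score < REVIEW_THRESHOLD:
        # The classifier sees no likely violation, so nothing is escalated.
        return ModerationDecision(video_id, "allow")
    # The classifier only flags at scale; a human reviewer makes the final call.
    if human_review_confirms_violation(video_id):
        return ModerationDecision(video_id, "remove_after_review")
    return ModerationDecision(video_id, "allow")


if __name__ == "__main__":
    print(moderate("example-video-id"))
```

The key design point the sketch tries to capture is that automated scoring handles scale, while the final policy judgment rests with human reviewers.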

We also actively engage with policymakers, researchers, and experts to develop effective solutions. We have invested $1 million (USD) in grants to the Indian Institute of Technology, Madras, to establish a first-of-its-kind multidisciplinary center for Responsible AI. The center will foster a collective effort, involving not just researchers but also domain experts, developers, community members, policymakers and more, in getting AI right and localizing it to the Indian context.

Our collaboration with the Indian government on a multi-stakeholder discussion aligns with our commitment to addressing this challenge together and ensuring a responsible approach to AI. By embracing a multi-stakeholder approach and fostering responsible AI development, we can ensure that AI's transformative potential continues to serve as a force for good in the world.

The lessons of history, from automobiles to the Y2K scare, have demonstrated that while technological advancements carry inherent uncertainties, they often hold the potential to unlock transformative benefits for society. As we venture into the newer territory of AI innovation, it is crucial to strike a balance between mitigating potential risks and seizing the opportunities AI creates.