
Acting on our commitment to safe and secure AI


Cyberthreats evolve quickly, and some of the biggest vulnerabilities aren’t discovered by companies or product manufacturers but by outside security researchers. That’s why we have a long history of supporting collective security through our Vulnerability Rewards Program (VRP), Project Zero, and our work in open source software security. It’s also why we joined other leading AI companies at the White House earlier this year to commit to advancing the discovery of vulnerabilities in AI systems.

Today, we’re expanding our VRP to reward research on attack scenarios specific to generative AI. We believe this will incentivize research into AI safety and security and bring potential issues to light that will ultimately make AI safer for everyone. We’re also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.

New technology requires new vulnerability reporting guidelines

As part of expanding the VRP for AI, we’re taking a fresh look at how bugs should be categorized and reported. Generative AI raises concerns that are new and different from those of traditional digital security, such as the potential for unfair bias, model manipulation or misinterpretation of data (hallucinations). As we continue to integrate generative AI into more products and features, our Trust and Safety teams are drawing on decades of experience and taking a comprehensive approach to better anticipate and test for these potential risks. But we understand that outside security researchers can help us find and address novel vulnerabilities that will in turn make our generative AI products even safer and more secure.

In August, we joined the White House and industry peers to enable thousands of third-party security researchers to find potential issues at DEF CON’s largest-ever public Generative AI Red Team event. Now that we’re expanding the bug bounty program to cover generative AI, we’re also publishing guidelines for what we’d like security researchers to hunt, so anyone can see what’s “in scope.” We expect this will spur security researchers to submit more bugs and accelerate the goal of safer, more secure generative AI.

Two new ways to strengthen the AI supply chain

We introduced our Secure AI Framework (SAIF) to support the industry in creating trustworthy AI applications, and we’ve encouraged its implementation through AI red teaming. The first principle of SAIF is to ensure that the AI ecosystem has strong security foundations, which means securing the critical supply chain components that enable machine learning (ML) against threats like model tampering, data poisoning and the production of harmful content. A minimal illustration of the model-tampering half of that problem follows below.
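To make "model tampering" concrete, here is a deliberately minimal Python sketch (not part of SAIF or any Google tooling; the file path and digest are hypothetical placeholders) that pins the expected SHA-256 digest of a model artifact and refuses to load it if the bytes have changed:

```python
import hashlib
import sys
from pathlib import Path

# Hypothetical values: in practice the expected digest would come from a
# trusted, signed source (e.g. a verified manifest), not a hard-coded constant.
MODEL_PATH = Path("models/weights.bin")
EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"


def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large model files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def load_model_if_untampered(path: Path, expected: str) -> bytes:
    actual = sha256_of(path)
    if actual != expected:
        # A mismatch means this is not the artifact we pinned: stop before loading it.
        sys.exit(f"model digest mismatch: expected {expected}, got {actual}")
    return path.read_bytes()


if __name__ == "__main__":
    weights = load_model_if_untampered(MODEL_PATH, EXPECTED_SHA256)
    print(f"loaded {len(weights)} bytes of verified model weights")
```

Pinning a raw digest only helps if the digest itself reaches the consumer through a trusted channel, which is exactly the gap that signing with Sigstore, described next, is meant to close.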

Today, to further protect against machine learning supply chain attacks, we’re expanding our open source security work and building on our prior collaboration with the Open Source Security Foundation. The Google Open Source Security Team (GOSST) is leveraging SLSA and Sigstore to protect the overall integrity of AI supply chains. SLSA (Supply-chain Levels for Software Artifacts) is a set of standards and controls that improve resiliency in supply chains, while Sigstore helps verify that software in the supply chain is what it claims to be. To get started, we’re announcing the availability of the first prototypes for model signing with Sigstore and attestation verification with SLSA.
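The post doesn’t include code for those prototypes, so the sketch below is only an illustration of the general Sigstore workflow, assuming the cosign CLI is installed and using hypothetical file names and a hypothetical signer identity: it keyless-signs a model file, then verifies both the signature bundle and the identity that produced it.

```python
import subprocess

# Hypothetical paths and identity; the cosign CLI (github.com/sigstore/cosign) must be installed.
MODEL = "models/weights.bin"
BUNDLE = "models/weights.bin.sigstore.bundle"
SIGNER_IDENTITY = "release-bot@example.com"    # identity expected to have signed the model
OIDC_ISSUER = "https://accounts.google.com"    # identity provider that vouched for the signer


def sign_model() -> None:
    """Keyless-sign the model artifact; cosign runs an OIDC flow and writes a verification bundle."""
    subprocess.run(
        ["cosign", "sign-blob", "--yes", "--bundle", BUNDLE, MODEL],
        check=True,
    )


def verify_model() -> None:
    """Verify the bundle and check that the signing identity and issuer match what we expect."""
    subprocess.run(
        [
            "cosign", "verify-blob",
            "--bundle", BUNDLE,
            "--certificate-identity", SIGNER_IDENTITY,
            "--certificate-oidc-issuer", OIDC_ISSUER,
            MODEL,
        ],
        check=True,
    )


if __name__ == "__main__":
    sign_model()
    verify_model()
    print("model signature verified")
```

Unlike the digest-pinning sketch above, the consumer here doesn’t need a pre-shared hash; it only needs to know which identity is allowed to publish models, and Sigstore’s transparency log keeps a public record of every signature.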

These are early steps toward ensuring the safe and secure development of generative AI — and we know the work is just getting started. Our hope is that by incentivizing more security research while applying supply chain security to AI, we’ll spark even more collaboration with the open source security community and others in industry, and ultimately help make AI safer for everyone.
