Skip to main content
The Keyword

Safety & Security

Introducing Google’s Secure AI Framework



The potential of AI, especially generative AI, is immense. However, in the pursuit of progress within these new frontiers of innovation, there needs to be clear industry security standards for building and deploying this technology in a responsible manner. That’s why today we are excited to introduce the Secure AI Framework (SAIF), a conceptual framework for secure AI systems.

  • For a summary of SAIF, click this PDF.
  • For examples of how practitioners can implement SAIF, click this PDF.

Why we’re introducing SAIF now

SAIF is inspired by the security best practices — like reviewing, testing and controlling the supply chain — that we’ve applied to software development, while incorporating our understanding of security mega-trends and risks specific to AI systems.

A framework across the public and private sectors is essential for making sure that responsible actors safeguard the technology that supports AI advancements, so that when AI models are implemented, they’re secure-by-default. Today marks an important first step.

Over the years at Google, we’ve embraced an open and collaborative approach to cybersecurity. This includes combining frontline intelligence, expertise, and innovation with a commitment to share threat information with others to help respond to — and prevent — cyber attacks. Building on that approach, SAIF is designed to help mitigate risks specific to AI systems like stealing the model, data poisoning of the training data, injecting malicious inputs through prompt injection, and extracting confidential information in the training data. As AI capabilities become increasingly integrated into products across the world, adhering to a bold and responsible framework will be even more critical.

And with that, let’s take a look at SAIF and its six core elements:

illustration of green checks marks and mailboxes

1. Expand strong security foundations to the AI ecosystem

This includes leveraging secure-by-default infrastructure protections and expertise built over the last two decades to protect AI systems, applications and users. At the same time, develop organizational expertise to keep pace with advances in AI and start to scale and adapt infrastructure protections in the context of AI and evolving threat models. For example, injection techniques like SQL injection have existed for some time, and organizations can adapt mitigations, such as input sanitization and limiting, to help better defend against prompt injection style attacks.

illustration of a red hat and black glasses and target crosshairs

2. Extend detection and response to bring AI into an organization’s threat universe

Timeliness is critical in detecting and responding to AI-related cyber incidents, and extending threat intelligence and other capabilities to an organization improves both. For organizations, this includes monitoring inputs and outputs of generative AI systems to detect anomalies and using threat intelligence to anticipate attacks. This effort typically requires collaboration with trust and safety, threat intelligence, and counter abuse teams.

illustration of a skull and shield and sword

3. Automate defenses to keep pace with existing and new threats

The latest AI innovations can improve the scale and speed of response efforts to security incidents. Adversaries will likely use AI to scale their impact, so it is important to use AI and its current and emerging capabilities to stay nimble and cost effective in protecting against them.

illustration of a castle tower as a shield

4. Harmonize platform level controls to ensure consistent security across the organization

Consistency across control frameworks can support AI risk mitigation and scale protections across different platforms and tools to ensure that the best protections are available to all AI applications in a scalable and cost efficient manner. At Google, this includes extending secure-by-default protections to AI platforms like Vertex AI and Security AI Workbench, and building controls and protections into the software development lifecycle. Capabilities that address general use cases, like Perspective API, can help the entire organization benefit from state of the art protections.

illustration of a light bulb

5. Adapt controls to adjust mitigations and create faster feedback loops for AI deployment

Constant testing of implementations through continuous learning can ensure detection and protection capabilities address the changing threat environment. This includes techniques like reinforcement learning based on incidents and user feedback and involves steps such as updating training data sets, fine-tuning models to respond strategically to attacks and allowing the software that is used to build models to embed further security in context (e.g. detecting anomalous behavior). Organizations can also conduct regular red team exercises to improve safety assurance for AI-powered products and capabilities.

illustration of a magnifying glass

6. Contextualize AI system risks in surrounding business processes

Lastly, conducting end-to-end risk assessments related to how organizations will deploy AI can help inform decisions. This includes an assessment of the end-to-end business risk, such as data lineage, validation and operational behavior monitoring for certain types of applications. In addition, organizations should construct automated checks to validate AI performance.

Why we support a secure AI community for everyone

We’ve long advocated for, and often developed, industry frameworks to raise the security bar and reduce overall risk. We’ve collaborated with others to launch the Supply-chain Levels for Software Artifacts (SLSA) framework to improve software supply chain integrity, and our pioneering work on our BeyondCorp access model led to the zero trust principles which are industry standard today. What we learned from these and other efforts is that to succeed in the long term, you have to build a community to support and advance the work. That’s why we’re excited to announce the first steps in our journey to build a SAIF community for everyone.

How Google is putting SAIF into action

We’re already taking five steps to support and advance a framework that works for all.

  1. Fostering industry support for SAIF with the announcement of key partners and contributors in the coming months and continued industry engagement to help develop the NIST AI Risk Management Framework and ISO/IEC 42001 AI Management System Standard (the industry's first AI certification standard). These standards rely heavily on the security tenets in the NIST Cybersecurity Framework and ISO/IEC 27001 Security Management System — which Google will be participating in to ensure planned updates are applicable to emerging technology like AI — and are consistent with SAIF elements.
  2. Working directly with organizations, including customers and governments to help them understand how to assess AI security risks and mitigate them. This includes conducting workshops with practitioners and continuing to publish best practices for deploying AI systems securely.
  3. Sharing insights from Google’s leading threat intelligence teams like Mandiant and TAG on cyber activity involving AI systems. To learn more about some of the ways Google practitioners are leveraging generative AI to identify threats faster, eliminate toil, and better solve for security talent gaps, see here.
  4. Expanding our bug hunters programs (including our Vulnerability Rewards Program) to reward and incentivize research around AI safety and security.
  5. Continuing to deliver secure AI offerings with partners like GitLab and Cohesity, and further develop new capabilities to help customers build secure systems. That includes our commitment to the open source community and we will soon publish several open source tools to help put SAIF elements into practice for AI security.

As we advance SAIF, we’ll continue to share research and explore methods that help to utilize AI in a secure way. We’re committed to working with governments, industry and academia to share insights and achieve common goals to ensure that this profoundly helpful technology works for everyone, and that we as a society get it right.

Let’s stay in touch. Get the latest news from Google in your inbox.

Subscribe