
SAIF Risk Assessment: A new tool to help secure AI systems across industry


Last year, we announced our Secure AI Framework (SAIF) to help others safely and responsibly deploy AI models. It not only shares our best practices, but also offers a framework for the industry, frontline developers and security professionals to ensure that when AI models are implemented, they are secure by design. To drive the adoption of critical AI security measures, we used SAIF principles to help form the Coalition for Secure AI (CoSAI) with industry partners. Today, we’re sharing a new tool that can help others assess their security posture, apply these best practices and put SAIF principles into action.

The SAIF Risk Assessment, available to use today on our new website SAIF.Google, is a questionnaire-based tool that instantly generates a tailored checklist to guide practitioners in securing their AI systems. We believe this easily accessible tool fills a critical gap to move the AI ecosystem toward a more secure future.

New SAIF Risk Assessment

The SAIF Risk Assessment helps turn SAIF from a conceptual framework into an actionable checklist for practitioners responsible for securing their AI systems. Practitioners can find the tool on the menu bar of the new SAIF.Google homepage.

Screenshot of the SAIF.Google website.

The assessment starts with questions designed to gather information about the submitter’s AI system security posture. Questions cover topics like training, tuning and evaluation; access controls to models and data sets; preventing attacks and adversarial inputs; secure designs and coding frameworks for generative AI; and generative AI-powered agents.

Screenshot of the Risk Assessment questions.
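To make the shape of the questionnaire concrete, here is a minimal, hypothetical Python sketch of how topic-tagged questions could be modeled. The topic names come from the list above; the `Question` structure and the sample question text are illustrative assumptions, not the actual SAIF Risk Assessment implementation.

```python
# Hypothetical sketch of the questionnaire's topic areas as data.
# Structure and wording are assumptions for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Question:
    topic: str                     # topic area from the list above
    text: str                      # question shown to the practitioner
    answer: Optional[bool] = None  # filled in as the practitioner responds

QUESTIONS = [
    Question("Training, tuning and evaluation",
             "Do you track the provenance of all training and tuning data?"),
    Question("Access controls",
             "Are model weights and data sets restricted to authorized roles?"),
    Question("Adversarial inputs",
             "Are user inputs validated before they reach the model?"),
    Question("Generative AI agents",
             "Are agent tool invocations limited to an approved allowlist?"),
]
```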

How the tool works

Once the questions have been answered, the tool immediately generates a report highlighting specific risks to the submitter’s AI systems, along with suggested mitigations, based on the responses provided. These risks include Data Poisoning, Prompt Injection, Model Source Tampering and more. For each risk the tool identifies, the report explains why it was assigned, details the underlying technical risk and lists the controls that mitigate it. To learn more, visitors can explore an interactive SAIF Risk Map that shows how different security risks are introduced, exploited and mitigated throughout the AI development process.

  • The SAIF Risk Map shows how different risks are introduced, exploited and mitigated throughout the AI development process.

  • Example of the report compiled instantly from a submitter’s responses to the questionnaire.

  • Example of identified risks, such as Prompt Injection and Sensitive Data Disclosure, with recommended remediation steps.
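The mapping from questionnaire answers to the report described above can be pictured as a simple rule table: each missing control triggers one or more named risks, each carrying the reason it was assigned and a suggested mitigation. Below is a minimal, hypothetical Python sketch of that idea. The risk names come from this post, but the question identifiers, rules and mitigation text are assumptions for illustration, not the tool’s actual logic.

```python
# Hypothetical rule table: question_id -> (risk, reason, suggested mitigation).
# Risk names come from the post; everything else is illustrative.
RISK_RULES = {
    "data_provenance_tracked": (
        "Data Poisoning",
        "Untracked training data can be tampered with unnoticed.",
        "Track provenance and integrity-check all training and tuning data.",
    ),
    "inputs_sanitized": (
        "Prompt Injection",
        "Unfiltered user input can override system instructions.",
        "Validate and sanitize inputs; isolate untrusted content from prompts.",
    ),
    "model_supply_chain_verified": (
        "Model Source Tampering",
        "Unverified model artifacts may be modified in transit or at rest.",
        "Sign and verify model artifacts throughout the supply chain.",
    ),
}

def generate_report(answers: dict[str, bool]) -> list[dict[str, str]]:
    """Return a report entry for every control the submitter lacks."""
    report = []
    for question_id, has_control in answers.items():
        if not has_control and question_id in RISK_RULES:
            risk, reason, mitigation = RISK_RULES[question_id]
            report.append({"risk": risk, "reason": reason,
                           "mitigation": mitigation})
    return report

# Example: a submitter who tracks data provenance but lacks the other controls
# would see Prompt Injection and Model Source Tampering entries in the report.
print(generate_report({
    "data_provenance_tracked": True,
    "inputs_sanitized": False,
    "model_supply_chain_verified": False,
}))
```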

An update on CoSAI

We’ve also been making progress with the Coalition for Secure AI (CoSAI), and with 35 industry partners we recently launched three technical workstreams: Software Supply Chain Security for AI Systems, Preparing Defenders for a Changing Cybersecurity Landscape and AI Risk Governance. CoSAI working groups will create AI security solutions based on these initial focus areas. The SAIF Risk Assessment Report capability aligns specifically with CoSAI’s AI Risk Governance workstream, helping to create a more secure AI ecosystem across the industry.

We’re excited for practitioners to take advantage of the SAIF Risk Assessment and apply the SAIF principles to secure their AI systems. Visit SAIF.Google for all the latest updates on our AI security work.
