
Responsible AI: Our 2024 report and ongoing work



Two years ago, we published our vision for advancing AI to serve society and propel innovation. A couple of weeks ago, we published the 2024 roundup of our tremendous progress towards that vision, from new state-of-the-art models empowering creativity to AI-enabled breakthroughs in biology, health research, and neuroscience.

Being bold on AI also means being responsible from the start. That’s why our approach to AI has consistently been grounded in understanding and accounting for its broad implications for people. We were among the first organizations to publish AI principles in 2018, we have published an annual transparency report since 2019, and we regularly review our policies, practices and frameworks, updating them when the need arises.

The 2024 Responsible AI Progress Report

Our 6th annual Responsible AI Progress Report details how we govern, map, measure and manage AI risk throughout the AI development lifecycle. The report highlights the progress we have made over the past year building out governance structures for our AI product launches.

We are investing more than ever in both AI research and products that benefit people and society, and in AI safety and efforts to identify and address potential risks.

The report includes highlights from some of the more than 300 research papers our teams published on responsibility and safety topics over the past year, along with updates to our responsible AI policies, principles and frameworks, and key lessons from red teaming and evaluations against safety, privacy, and security benchmarks. It also describes the progress we’ve made on risk mitigation techniques across different gen AI launches, including better safety tuning and filters, security and privacy controls, the use of provenance technology in our products, and broad AI literacy education. Throughout 2024, we also supported the broader AI ecosystem through funding, tools, and standards development, as detailed in the report.

An update to our Frontier Safety Framework

As AI development progresses, new capabilities may present new risks. That’s why we introduced the first iteration of our Frontier Safety Framework last year: a set of protocols to help us stay ahead of possible risks from powerful frontier AI models. Since then, we've collaborated with experts in industry, academia and government to deepen our understanding of the risks, the empirical evaluations to test for them, and the mitigations we can apply.

We have also implemented the Framework in our Google DeepMind safety and governance processes for evaluating frontier models such as Gemini 2.0. Today we’re publishing an updated Frontier Safety Framework, which includes:

  • Recommendations for Heightened Security: helping to identify where the strongest efforts to curb exfiltration risk are needed.
  • Deployment Mitigations Procedure: focusing on preventing the misuse of critical capabilities in the systems we deploy.
  • Deceptive Alignment Risk: addressing the risk of an autonomous system deliberately undermining human control.

You can read more on the Google DeepMind blog.

Updating AI Principles

Since we first published our AI Principles in 2018, the technology has evolved rapidly. Billions of people are using AI in their everyday lives. AI has become a general-purpose technology and a platform that countless organizations and individuals use to build applications. It has moved from a niche research topic in the lab to a technology that is becoming as pervasive as mobile phones and the internet itself, one with numerous beneficial uses for society and people around the world, supported by a vibrant ecosystem of developers.

Common baseline principles are an important part of this evolution. Beyond the frameworks published by AI companies and academic institutions, we’re encouraged by the progress we’ve seen on AI principles globally. The G7 and the International Organization for Standardization, as well as individual democratic nations, have all published frameworks to guide the safe development and use of AI. Increasingly, organizations and governments can look to these common standards as they consider how best to build, regulate, and deploy this evolving technology; our Responsible AI Progress Report, for example, is now based on the United States’ NIST AI Risk Management Framework. Our experience and research over recent years, along with the threat intelligence, expertise, and best practices we’ve shared with other AI companies, have deepened our understanding of AI’s potential and risks.

There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security.

With that backdrop, we’re updating our own AI Principles to focus on three core tenets:

  • Bold Innovation: We develop AI to assist, empower, and inspire people in almost every field of human endeavor, drive economic progress and improve lives, enable scientific breakthroughs, and help address humanity's biggest challenges.
  • Responsible Development and Deployment: Because we understand that AI, as a still-emerging transformative technology, poses new complexities and risks, we consider it an imperative to pursue AI responsibly throughout the development and deployment lifecycle — from design to testing to deployment to iteration — learning as AI advances and uses evolve.
  • Collaborative Progress, Together: We learn from others and build technology that empowers others to harness AI positively.

You can read our full AI Principles on AI.google.

Guided by our AI Principles, we will continue to focus on AI research and applications that align with our mission, our scientific focus, and our areas of expertise, and that stay consistent with widely accepted principles of international law and human rights. We will always evaluate specific work by carefully assessing whether the benefits substantially outweigh potential risks. We’ll also take into account whether our engagements require bespoke research and development or rely on general-purpose, widely available technology. These assessments are particularly important as AI is increasingly being developed by numerous organizations and governments for uses in fields like healthcare, science, robotics, cybersecurity, transportation, national security, energy, climate, and more.

Of course, in addition to the Principles, we continue to maintain specific product policies and clear terms of use that prohibit, for example, illegal use of our services.

The opportunity ahead

We recognize how quickly the underlying technology, and the debate around AI’s advancement, deployment, and uses, will continue to evolve, and we will adapt and refine our approach as we all learn over time.

As we see AGI, in particular, coming into sharper focus, the societal implications become incredibly profound. This isn't just about developing powerful AI; it's about building the most transformative technology in human history, using it to solve humanity’s biggest challenges, and ensuring that the right safeguards and governance are in place, for the benefit of the world. We’ll share our progress and findings about this journey, and expect to continue to evolve our thinking, as we move closer to AGI.

As we move forward, we believe that the improvements we’ve made over the last year to our governance and other processes, our new Frontier Safety Framework, and our AI Principles position us well for the next phase of AI transformation. The opportunity of AI to assist and improve the lives of people around the world is what ultimately drives us in this work, and we will continue to pursue our bold, responsible, and collaborative approach to AI.
