A policy agenda for responsible AI progress: Opportunity, Responsibility, Security

Today Google is publishing a white paper outlining a policy agenda for responsible AI progress.

As Sundar said at this month’s Google I/O, the growth of AI is as big a technology shift as we’ve seen. The advancements in today’s AI models are not just creating new ways to engage with information, find the right words, or discover new places; they’re helping people break entirely new scientific and technological ground.

We stand on the cusp of a new era, one that lets us reimagine how we can significantly improve the lives of billions of people, help businesses thrive and grow, and support society in answering our toughest questions. At the same time, we all must be clear-eyed that AI will come with risks and challenges.

Against this backdrop, we’re committed to moving forward boldly, responsibly, and in partnership with others.

Calls for a halt to technological advances are unlikely to succeed, and would risk forgoing AI’s substantial benefits while falling behind those who embrace its potential. Instead, we need broad-based efforts — across government, companies, universities, and more — to help translate technological breakthroughs into widespread benefits, while mitigating risks.

When I outlined the need for a Shared Agenda for Responsible AI Progress a few weeks ago, I said individual practices, shared industry standards, and sound government policies would be essential to getting AI right. Today we’re releasing a white paper with policy recommendations for AI in which we encourage governments to focus on three key areas — unlocking opportunity, promoting responsibility, and enhancing security:

1. Unlocking opportunity by maximizing AI’s economic promise

Economies that embrace AI will see significant growth, outcompeting rivals that are slower on the uptake. AI will help many different industries produce more complex and valuable products and services, and will help increase productivity despite growing demographic challenges. AI also promises a boost both to small businesses, which can use AI-powered products and services to innovate and grow, and to workers, who can focus on the non-routine, more rewarding elements of their jobs.

What it will take to get this right: To unlock the economic opportunity that AI offers, and to minimize workforce disruptions, policymakers should invest in innovation and competitiveness, promote legal frameworks that support responsible AI innovation, and prepare workforces for AI-driven job transitions. For example, governments should support foundational AI research through national labs and research institutions, adopt policies that support responsible AI development (including privacy laws that protect personal information and enable trusted data flows across national borders), and promote continuing education, upskilling programs, the movement of key talent across borders, and research on the evolving future of work.

2. Promoting responsibility while reducing risks of misuse

AI is already helping the world take on challenges from disease to climate change, and can be a powerful force for progress. But if not developed and deployed responsibly, AI systems could also amplify current societal issues, such as misinformation, discrimination, and misuse of tools. And without trust and confidence in AI systems, businesses and consumers will be hesitant to adopt AI, limiting their opportunity to capture AI’s benefits.

What it will take to get this right: Tackling these challenges will require a multi-stakeholder approach to governance. Learning from the experience of the internet, stakeholders should come to the table with a healthy grasp of both the potential benefits and the challenges. Some challenges will require fundamental research to better understand AI’s benefits and risks and how to manage them, along with the development and deployment of new technical innovations in areas like interpretability and watermarking. Others will be best addressed through common standards, shared best practices, and proportionate, risk-based regulation that ensures AI technologies are developed and deployed responsibly. And others may require new organizations and institutions. For example, leading companies could come together to form a Global Forum on AI (GFAI), building on previous examples like the Global Internet Forum to Counter Terrorism (GIFCT). International alignment will also be essential to develop common policy approaches that reflect democratic values and avoid fragmentation.
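To make one of those technical innovations concrete, consider watermarking. One published family of text-watermarking schemes (sometimes described as “green list” watermarking) has the generator pseudorandomly favor a subset of the vocabulary at each step, so a detector can later test whether that subset appears more often than chance would predict. The sketch below is a minimal illustration of the detection side under that assumption; it is not a description of Google’s systems, and the function names and the 0.5 green fraction are placeholders chosen for the example.

```python
import hashlib

# Hypothetical sketch of statistical watermark *detection* for generated text,
# in the spirit of published "green list" schemes. All names and the 0.5
# green fraction are illustrative assumptions, not any production system.

GREEN_FRACTION = 0.5  # fraction of the vocabulary treated as "green" per step


def is_green(prev_token: str, token: str) -> bool:
    """Pseudorandomly assign `token` to the green list, seeded by `prev_token`."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION


def green_z_score(tokens: list[str]) -> float:
    """Z-score of the observed green-token count against the no-watermark baseline."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    expected = GREEN_FRACTION * n
    variance = n * GREEN_FRACTION * (1 - GREEN_FRACTION)
    return (hits - expected) / variance ** 0.5


# Unwatermarked text should score near 0; text from a watermarked generator,
# which was biased toward green tokens, should score well above 2.
print(green_z_score("the quick brown fox jumps over the lazy dog".split()))
```

Because the signal accumulates over many tokens, a detector can flag watermarked text with statistical confidence even though each individual word looks natural, which is what makes watermarking attractive as a provenance tool.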

3. Enhancing global security while preventing malicious actors from exploiting this technology

AI has important implications for global security and stability. Generative AI can help create (but also identify and track) misinformation, disinformation, and manipulated media. AI-based security research is driving a new generation of cyber defenses through advanced security operations and threat intelligence, while AI-generated exploits may also enable more sophisticated cyberattacks by adversaries.

What it will take to get this right: The first step is to put technical and commercial guardrails in place to prevent malicious use of AI and to work collectively to address bad actors, while maximizing AI’s potential benefits. For example, governments should explore next-generation trade control policies for specific applications of AI-powered software that are deemed security risks, and for specific entities that support AI-related research and development in ways that could threaten global security. Governments, academia, civil society, and companies also need a better understanding of the implications of increasingly powerful AI systems, and of how to align sophisticated and complex AI with human values. At the end of the day, security is a team sport, and progress in this space will require cooperation in the form of joint research, the adoption of best-in-class data governance, public-private forums to share information on AI security vulnerabilities, and more.

Final thoughts

With a full recognition of the potential challenges, we’re confident a policy agenda centered on the key pillars of opportunity, responsibility, and security can unlock the benefits of AI and ensure that those benefits are shared by all.

As we’ve said before, AI is too important not to regulate, and too important not to regulate well. From Singapore’s AI Verify framework to the UK’s pro-innovation approach to AI regulation to the U.S. National Institute of Standards and Technology’s AI Risk Management Framework, we’re encouraged to see governments around the world seriously developing the right policy frameworks for these new technologies, and we look forward to supporting their efforts.
