The Keyword

Public Policy

7 principles for getting AI regulation right


We’ve long said AI is too important not to regulate, and too important not to regulate well. But with legislators from Connecticut to California proposing new legal frameworks for AI, what does that mean in practice?

Five current bills in Congress and seven AI regulatory principles lay out a path not just to mitigating risk but to embracing opportunity.

Why the U.S. government's approach is working

Over the past year, the U.S. government has been taking a thoughtful approach to this question in crafting guidelines for AI developers, deployers, and users. Principled commitments have laid out a framework for the sector, while a federal Executive Order has provided detailed guidance for regulators.

And at a time of partisan division, Congress is complementing this work in a deliberate and balanced way. The House has formed a bipartisan committee led by members with expertise in computer science and AI to consider legislation, and last month the Senate’s Bipartisan AI Working Group released its “Driving U.S. Innovation in Artificial Intelligence” policy roadmap, laying out detailed policy recommendations for balancing AI’s risks and benefits.

We welcomed these efforts for three reasons:

First, the government’s approach recognizes the incredible potential of AI innovation in science, healthcare, energy, and more, while embracing a practical risk-and-benefit framework for next steps. That’s critical for America to continue to be at the forefront of AI.

Second, American leaders are seeing the enormous economic potential of AI. A recent report from McKinsey pegs AI's global economic impact at between $17 trillion and $25 trillion annually by 2030. (That’s an amount comparable to the current U.S. GDP.) To seize that potential, both the White House and the Senate Working Group set out concrete actions the federal government can take today to increase access to AI tools and develop an AI-ready workforce.

And third, these efforts make clear that the private and public sectors need to come together on AI leadership. We’re in the midst of a global technology race. And like all technology races, it’s a competition that will be won not by the country that invents something first, but by the countries that deploy it best, across all sectors. This includes public and private cyberdefense and national security in the U.S., where successful AI deployment can help reverse the “defender’s dilemma.”

Google endorses the five bills mentioned in the Senate’s AI Policy Roadmap

Here are the five bills we support; we also continue to favor legislation covering a number of other key areas.

  • Future of AI Innovation Act (S. 4178): Advances AI standards and evaluations by giving NIST and the AISI the authority they need to promote U.S. AI leadership globally.
  • AI Grand Challenges Act (S. 4236): Incentivizes innovators from across the country to try out big ideas.
  • Small Business Technological Advancement Act (S. 2330): Creates an “AI Jumpstart” program to help small and medium-sized businesses with digital transformation and AI adoption.
  • Workforce DATA Act (S. 2138; House companion bill: Workforce DATA Act of 2023): Assesses and measures AI’s impacts on the U.S. workforce to help identify best practices on AI training and skilling.
  • CREATE AI Act (S. 2714): Establishes the National AI Research Resource (NAIRR) and encourages systems/cyber assurance coordination among agencies.

AI is a unique tool, a new general-purpose technology. And as with the steam engine, electricity, or the internet, seizing its potential will require public and private stakeholders to collaborate to bridge the gap from AI theory to productive practice. Together, we can transition from the “wow” of AI to the “how” of AI, so that everyone, everywhere can benefit from AI’s opportunities.

Seven principles for responsible regulation

Companies in democracies have thus far led advances in AI capabilities and fundamental AI research. But we need to continue to aim high, focusing on future AI advances, because while America leads in some AI fields, we’re behind in others.

To complement scientific innovation, we’d suggest seven principles as the foundation of bold and responsible AI regulation:

  1. Support responsible innovation. The Senate’s Bipartisan AI Working Group starts its roadmap with a call for increased spending on both AI innovation and safeguards against known risks. That makes sense, because the goals are complementary. Advances in technology actually increase safety, helping us build more resilient systems. While new technology involves uncertainty, we can still incorporate good practices that build trust and don’t slow beneficial innovation.
  2. Focus on outputs. Let’s promote AI systems that generate high-quality outputs, while preventing or mitigating harms. Focusing on specific outputs lets regulators intervene in a focused way, rather than trying to manage fast-evolving computer science and deep-learning techniques. That approach grounds new rules in real issues, and helps avoid overbroad regulations that could short-circuit broadly beneficial AI advances.
  3. Strike a sound copyright balance. While fair use, copyright exceptions, and similar rules governing publicly available data unlock scientific advances and the ability to learn from prior knowledge, website owners should be able to use machine-readable tools to opt out of having content on their sites used for AI training.
  4. Plug gaps in existing laws. If something is illegal without AI, then it’s illegal with AI. We don’t need duplicative laws or reinvented wheels; we need to identify and fill gaps where existing laws don’t adequately cover AI applications.
  5. Empower existing agencies. There’s no one-size-fits-all regulation for a general-purpose technology like AI, any more than we have a Department of Engines, or one law to cover all uses of electricity. We instead need to empower agencies and make every agency an AI agency.
  6. Adopt a hub-and-spoke model. A hub-and-spoke model establishes a center of technical expertise at an agency like NIST that can advance government understanding of AI and support sectoral agencies, recognizing that issues in banking will differ from issues in pharmaceuticals or transportation.
  7. Strive for alignment. We’ve already seen dozens of frameworks and proposals to govern AI around the world, including more than 600 bills in U.S. states alone. Progressing American innovation requires intervention at points of actual harm, not blanket research inhibitors. And given the national and international scope of these scientific advances, regulation should reflect truly national approaches, aligned with international standards wherever possible.

Looking down the road

AI is driving advances from the everyday to the extraordinary: improving the tools you use every day (Google Search, Translate, Maps, Gmail, YouTube, and more) and changing the way we do science and tackle big societal challenges. Modern AI is not just a technological breakthrough, but a breakthrough in creating breakthroughs: a tool to make progress happen faster.

Think of Google DeepMind’s AlphaFold program, which has already predicted the 3D shapes of nearly all proteins known to science, and how they interact. Or using AI to forecast floods up to seven days in advance, providing life-saving alerts for 460 million people in 80 countries around the world. Or using AI to map the pathways of neurons in the human brain, revealing newly discovered structures and helping scientists understand fundamental processes such as thought, learning, and memory.

AI can drive more stunning breakthroughs like these — if we stay focused on its long-term potential.

That will take being consistent, thoughtful, and collaborative — and keeping our eyes on the benefits everyone stands to gain if we get it right.
