Skip to main content
The Keyword

Building on our commitment to delivering responsible AI

An image of several photos, including a wolf, a person standing on top of a giant chess board, and another person on a mobile device.

We believe the transformative power of AI can improve lives and make the world a better place, if approached boldly and responsibly. Today, our research and products are already helping people everywhere — from their everyday tasks to their most ambitious, productive and creative endeavors.

To us, building AI responsibly means both addressing its risks and maximizing the benefits for people and society. In this, we’re guided by our AI Principles, our own research, and feedback from experts, product users and partners.

Building AI to benefit society with LearnLM

Every day, billions of people use Google products to learn. Generative AI is unlocking new ways for us to make learning more personal, helpful and accessible.

Today we’re announcing LearnLM, a new family of models, based on Gemini, and fine-tuned for learning. LearnLM integrates research-backed learning science and academic principles into our products, like helping manage cognitive load and adapting to learners' goals, needs and motivations. The result is a learning experience that is more tailored to the needs of each individual.

LearnLM is powering a range of features across our products, including Gemini, Search, YouTube and Google Classroom. In the Gemini app, the new Learning coach Gem will offer step-by-step study guidance, designed to build understanding rather than just give you the answer. On the YouTube app on Android, you can engage with educational videos in new ways, like asking a follow-up question or checking your knowledge with a quiz. Thanks to the long-context capability of the Gemini model, these YouTube features even work on long lectures and seminars.

Collaboration remains essential to unlocking the full potential of generative AI for the broader education community. We're partnering with leading educational organizations like Columbia Teachers College, Arizona State University, NYU Tisch and Khan Academy to test and improve our models, so we can extend LearnLM beyond our own products (if you’d be interested in partnering, you can sign up at this link). We’ve also partnered with MIT RAISE to develop an online course to provide educators with tools to effectively use generative AI in the classroom and beyond.

Using long-context to make knowledge accessible

One new experimental tool we’ve built to make knowledge more accessible and digestible is called Illuminate. It uses Gemini 1.5 Pro’s long context capabilities to transform complex research papers into short audio dialogues.

In minutes, Illuminate can generate a conversation consisting of two AI-generated voices, providing an overview and brief discussion of the key insights from research papers. If you want to dive deeper, you can ask follow-up questions. All audio output is watermarked with SynthID, and the original papers are referenced, so you can easily explore the source material yourself and in more detail. You can sign up to try it today at labs.google.

a YouTube video linking to Illuminate
10:25

Improving our models and protecting against misuse

While these breakthroughs are helping us deliver on our mission in new ways, generative AI is still an emerging technology and there are risks and questions that will arise as the technology advances and its uses evolve.

That’s why we believe it’s imperative to take a responsible approach to AI, guided by our AI Principles. We regularly share updates on what we’ve learned by putting them into practice.

Today, we’d like to share a few new ways that we’re improving our models like Gemini and protecting against their misuse.

  • AI-Assisted Red Teaming and expert feedback. To improve our models, we combine cutting-edge research with human expertise. This year, we’re taking red teaming — a proven practice where we proactively test our own systems for weakness and try to break them — and enhancing it through a new research technique we’re calling “AI-Assisted Red Teaming.” This draws on Google DeepMind's gaming breakthroughs like AlphaGo where we train AI agents to compete against each other to expand the scope of their red teaming capabilities. We are developing AI models with these capabilities to help address adversarial prompting and limit problematic outputs. We also improve our models with feedback from thousands of internal safety specialists and independent experts from sectors for academia to civil society. Combining this human insight with our safety testing methods will help make our models and products more accurate and reliable. This is a particularly important area of research for us, as new technical advances evolve how we interact with AI.
  • SynthID for text and video. As the outputs from our models become more realistic, we must also consider how they could be misused. Last year, we introduced SynthID, a technology that adds imperceptible watermarks to AI-generated images and audio so they’re easier to identify, and to protect against misuse. Today, we’re expanding SynthID to two new modalities: text and video. This is part of our broader investment in helping people understand the provenance of digital content.
  • Collaborating on safeguards. We’re committed to working with the ecosystem to help others benefit from and improve on the advances we’re making. In the coming months, we will open-source SynthID text watermarking through our updated Responsible Generative AI Toolkit. We are also a member of the Coalition for Content Provenance and Authenticity (C2PA), collaborating with Adobe, Microsoft, startups and many others to build and implement a standard that improves transparency of digital media.

Helping to solve real world problems

Today, our AI advances are already helping people solve real-world problems and advance scientific breakthroughs. Just in the past few weeks, we’ve announced three exciting scientific milestones:

  • In collaboration with Harvard University, Google Research published the largest synaptic-resolution 3D reconstruction of the human cortex — progress that may change our understanding of how our brains work.
  • We announced AlphaFold 3, an update to our revolutionary model that can now predict the structure and interactions of DNA, RNA and ligands in addition to proteins — helping transform our understanding of the biological world and drug discovery.
  • We introduced Med-Gemini, a family of research models that builds upon the Gemini model’s capabilities in advanced reasoning, multimodal understanding and long-context processing — to potentially assist clinicians with administrative tasks like report generation, analyze different types of medical data and help with risk prediction

This is in addition to the societally-beneficial results we’ve seen from work like flood forecasting and UN Data Commons for the SDGs. In fact, we recently published a report highlighting all the ways AI can help advance our progress on the world’s shared Sustainable Development Goals. This is just the beginning — we’re excited about what lies ahead and what we can accomplish working together.

Let’s stay in touch. Get the latest news from Google in your inbox.

Subscribe