Google's year in review: 8 areas with research breakthroughs in 2025
2025 has been a year of extraordinary progress in research. In artificial intelligence, we saw the trajectory shift from tool to utility: from something people use to something they can put to work. If 2024 was about laying the multimodal foundations for this era, 2025 was the year AI began to truly think, act and explore the world alongside us. With quantum computing, we made progress towards real-world applications. And across the board, we helped turn research into reality, with more capable and useful products and tools making a positive impact on people's lives today.
Here’s a look back at some of the breakthroughs, products and scientific milestones that defined the work of Google, Google DeepMind and Google Research in a year of relentless progress.
Delivering breakthroughs on world-class models
This year, we significantly advanced our models with breakthroughs in reasoning, multimodal understanding, efficiency and generative capabilities, beginning with the release of Gemini 2.5 in March and culminating in the November launch of Gemini 3 and the December launch of Gemini 3 Flash.
Built on a foundation of state-of-the-art reasoning, Gemini 3 Pro is our most powerful model to date, designed to help you bring any idea to life. It topped the LMArena Leaderboard and redefined multimodal reasoning with breakthrough scores on benchmarks like GPQA Diamond and Humanity’s Last Exam, a fiendishly hard test of whether AI can truly think and reason like humans. It also set a new standard for frontier models in mathematics, reaching a state-of-the-art 23.4% on MathArena Apex. We followed shortly with Gemini 3 Flash, which combines the Pro-grade reasoning of Gemini 3 with Flash-level latency, efficiency and cost, making it the most performant model for its size. Gemini 3 Flash surpasses the quality of our previous Gemini 2.5 Pro-scale model at a fraction of the price and with substantially lower latency, continuing the Gemini-era trend of each generation's Flash model outperforming the previous generation's Pro model.
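For developers, both models are available through the Gemini API. As a minimal sketch of choosing between them, assuming the google-genai Python SDK and hypothetical "gemini-3-pro" / "gemini-3-flash" model identifiers (check the Gemini API documentation for the exact names available to you):

```python
# Minimal sketch: routing requests between a Pro-class and a Flash-class Gemini model.
# The model identifiers below are assumptions; consult the Gemini API docs
# for the exact names and availability.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

def ask(prompt: str, latency_sensitive: bool = True) -> str:
    # Route quick, high-volume requests to a Flash-class model and
    # harder reasoning tasks to a Pro-class model.
    model = "gemini-3-flash" if latency_sensitive else "gemini-3-pro"
    response = client.models.generate_content(model=model, contents=prompt)
    return response.text

print(ask("Summarize multimodal reasoning in two sentences."))
```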
Learn more about our progress on our world-class AI models this year:
- Gemini 3 Flash: frontier intelligence built for speed (Dec 2025)
- A new era of intelligence with Gemini 3 (Nov 2025)
- Introducing Nano Banana Pro (Nov 2025)
- Introducing Veo 3.1 and new creative capabilities in the Gemini API (Nov 2025)
- Gemini 2.5: Our most intelligent AI model (March 2025)
Image: Gemini 3 Flash price & benchmark table.
We’re committed to making useful AI technology accessible, with state-of-the-art open models. We built our Gemma family of models to be lightweight and open for public use; this year we introduced multimodal capabilities, significantly increased the context window, expanded multilingual support, and improved efficiency and performance.
Learn more about this year’s advances in Gemma models:
Innovating and transforming our products with AI
Throughout 2025, we continued to advance the trajectory of AI from tool to utility, transforming our portfolio of products with new, powerful agentic capabilities. We reimagined software development, moving beyond tools that assist coding to agentic systems that collaborate with developers. Key advances, such as the impressive coding capabilities in Gemini 3 and the launch of Google Antigravity, mark a new era in AI-assisted software development.
Learn more about this year’s advances building developer tools:
This evolution was also clear across our core products, from AI-enabled features on the Pixel 10 and updates to AI Mode in Search, to AI-first innovations like the Gemini app and NotebookLM, which gained advanced features like Deep Research.
Learn more about how we’ve transformed our products with AI:
Empowering creativity and co-creating with AI
2025 was a transformative year for generative media, giving people unprecedented new capabilities to realize their creative ambitions. Generative media models and tools for video, images, audio and worlds became more effective and more widely used, with breakout releases Nano Banana and Nano Banana Pro setting a new bar for native image generation and editing. We worked with people in creative industries to make tools like Flow and Music AI Sandbox more helpful for creative workflows, and we expanded creative possibilities with new AI-powered experiences in the Google Arts & Culture lab, major upgrades to image editing within the Gemini app, and powerful new generative media models like Veo 3.1 and Imagen 4.
Learn more about how we’re building AI to enhance creativity:
- Art, science, travel: 3 new AI-powered experiences this holiday season (Nov 2025)
- Introducing Veo 3.1 and advanced capabilities in Flow (Oct 2025)
- Nano Banana: Image editing in Gemini just got a major upgrade (Aug 2025)
- Veo 3, Imagen 4, and Flow: Fuel your creativity with new generative media models and tools (May 2025)
- Music AI Sandbox, now with new features and broader access (April 2025)
As research breakthroughs continue to expand AI’s capabilities, Google Labs is where we share AI experiments as we develop them – hearing from users and evolving as we learn. Some of this year’s most engaging experiments from Labs: Pomelli, an AI experiment for on-brand marketing content; Stitch, which introduced a way to turn prompt and image inputs into complex UI designs and frontend code in minutes; Jules, an asynchronous coding agent that acts as a collaborative partner for developers; and Google Beam, a 3D video communications platform that used AI to advance the possibilities of remote presence.
Learn more about how we’re experimenting in Labs:
Advancing science and mathematics
2025 was also a banner year for scientific advances with AI, marked by breakthroughs in life sciences, health, natural sciences, and mathematics.
In the space of a year, we made progress building AI resources and tools that empower researchers to understand disease and to identify and develop treatments. In genomics, where we’ve been applying advanced technology to research for 10 years, we moved beyond sequencing, using AI to interpret the most complex data. We also marked the 5-year anniversary of AlphaFold, the Nobel Prize-winning AI system that solved the 50-year-old protein folding problem. AlphaFold has been used by over 3 million researchers in more than 190 countries, including over 1 million users in low- and middle-income countries.
Learn more about how we’re using AI to advance life sciences and health:
- AlphaFold: Five years of impact (Nov 2025)
- Using AI to identify genetic variants in tumors with DeepSomatic (Oct 2025)
- AI as a research partner: Advancing theoretical computer science with AlphaEvolve (Sept 2025)
- AlphaGenome: AI for better understanding the genome (June 2025)
- Accelerating scientific breakthroughs with an AI co-scientist (Feb 2025)
Gemini’s advanced thinking capabilities, including Deep Think, also enabled historic progress in mathematics and coding. Deep Think solved problems that require deep abstract reasoning, achieving gold-medal standard in two international contests: the International Mathematical Olympiad and the ICPC World Finals.
Learn more about how we’re advancing natural sciences and mathematics:
Shaping innovations in computing and the physical world
We’re also leading major discoveries and shaping the future of science in areas like quantum computing, energy and moonshots. Research in this area drew new levels of public attention: our Quantum Echoes algorithm demonstrated progress towards real-world applications of quantum computing, and Googler Michel Devoret became a 2025 Physics Nobel Laureate, alongside former Googler John Martinis and UC Berkeley’s John Clarke, for their foundational 1980s quantum research.
Learn more about our work on space infrastructure and quantum computing:
- Project Suncatcher: Exploring a space-based, scalable AI infrastructure system design (Nov 2025)
- Googler Michel Devoret awarded the Nobel Prize in Physics (Oct 2025)
- Our Quantum Echoes algorithm is a big step toward real-world applications for quantum computing (Oct 2025)
In 2025, we continued to advance the core infrastructure that powers our AI, focusing on breakthroughs in hardware design and improvements in energy efficiency. This included the introduction of Ironwood, a new TPU built for the age of inference and designed with the help of our AlphaChip method, alongside a commitment to measuring the environmental impact of our technology.
Learn more about how we’re using AI to develop chips, infrastructure and improve energy efficiency:
Our work in robotics and visual understanding brought AI agents into both the physical and virtual worlds, with advancements like the foundational Gemini Robotics models, the more sophisticated Gemini Robotics 1.5, and the introduction of Genie 3 as a new frontier for general-purpose world models.
Learn more about our work with world models and robotics:
Tackling global challenges and opportunities at scale
Our work throughout 2025 demonstrates how AI-enabled scientific progress is being directly applied to address the world's most critical and pervasive challenges. By leveraging state-of-the-art foundational models and agentic reasoning, we are significantly increasing our understanding of the planet and its systems, while also delivering impactful solutions in areas vital to human flourishing, including climate resilience, public health and education.
This work is already making a difference in people’s lives, from weather prediction to urban planning to public health. For example, our flood forecasting information now covers more than two billion people in 150 countries for severe riverine floods. WeatherNext 2, our most advanced and efficient forecasting model, can generate forecasts 8x faster and at resolutions as fine as one hour. Using this technology, we’ve supported weather agencies in making decisions based on a range of scenarios through our experimental cyclone predictions.
Learn more about our work in weather, mapping and wildfires:
- WeatherNext 2: Our most advanced weather forecasting model (Nov 2025)
- New updates and more access to Google Earth AI (Oct 2025)
- Google Earth AI: Our state-of-the-art geospatial AI models (July 2025)
- AlphaEarth Foundations helps map our planet in unprecedented detail (July 2025)
- How we're supporting better tropical cyclone prediction with AI (June 2025)
- Inside the launch of FireSat, a system to find wildfires earlier (March 2025)
We are working with partners to apply AI-enabled scientific progress closer to patients, opening up new avenues for disease management and therapeutic discovery.
Learn more about our health-related work:
- Cell2Sentence-Scale 27B: How a Gemma model helped discover a new potential cancer therapy pathway (Oct 2025)
- From diagnosis to treatment: Advancing AMIE for longitudinal disease management (March 2025)
AI is proving to be a powerful tool in education, enabling new forms of understanding and expanding curiosity through initiatives like LearnLM and Guided Learning in Gemini. We also brought Gemini’s most powerful translation capabilities to Google Translate, enabling much smarter, more natural and more accurate translations, and piloted new speech-to-speech translation features.
Learn more about how we’re using AI to enable learning:
Prioritizing responsibility and safety
We couple our research breakthroughs with rigorous and forward-looking work on responsibility and safety. As our models grow more capable, we’re continuing to advance and evolve our tools, resources and safety frameworks to anticipate and mitigate risk. Gemini 3 demonstrated this approach in action: it's our most secure model yet, and has undergone the most comprehensive set of safety evaluations of any Google AI model to date. And we’re looking further ahead, exploring a responsible path to AGI, prioritizing readiness, proactive risk assessment, and collaboration with the wider AI community.
Learn more about our responsibility and safety work:
- You can now verify Google AI-generated videos in the Gemini app (Dec 2025)
- How we’re bringing AI image verification to the Gemini app (Nov 2025)
- Strengthening our Frontier Safety Framework (Sept 2025)
- Taking a responsible path to AGI (April 2025)
- Evaluating potential cybersecurity threats of advanced AI (April 2025)
Leading frontier collaborations with industry, academia and civil society
Advancing the frontier of AI responsibly demands collaboration across all parts of society. In 2025, we worked with leading AI labs to help form the Agentic AI Foundation and support open standards that ensure a responsible and interoperable future for agentic AI. In education, we partnered with school districts like Miami-Dade County and education groups like Raspberry Pi to equip students with AI skills. Our research partnerships with universities like UC Berkeley, Yale, the University of Chicago and many more have been instrumental to some of this year’s most exciting frontier research, and we’re collaborating with the US Department of Energy’s 17 national laboratories to transform how scientific research is conducted. And we’re working with filmmakers and other creative visionaries to put the best AI tools in their hands and explore storytelling in the age of AI.
Learn more about our work on frontier collaboration:
- Google DeepMind supports U.S. Department of Energy on Genesis: a national mission to accelerate innovation and scientific discovery (Dec 2025)
- Formation of the Agentic AI Foundation (AAIF), Anchored by New Project Contributions Including Model Context Protocol (MCP), goose and AGENTS.md (Dec 2025)
- Announcing Model Context Protocol (MCP) support for Google services (Dec 2025)
- Our latest commitments in AI and learning (Nov 2025)
- Partnering to power Miami’s AI-ready future (Oct 2025)
- AI on Screen premiere: “Sweetwater” short film explores new AI narratives (Sept 2025)
- Behind “ANCESTRA”: combining Veo with live-action filmmaking (June 2025)
- How Indian music legend Shankar Mahadevan experiments with Music AI Sandbox (April 2025)
Looking ahead
As we look towards 2026, we’re excited to continue advancing the frontier, safely and responsibly, for the benefit of humanity.