The latest AI news we announced in March
For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we’re doing a regular roundup of Google’s most recent AI news across products, research and more.
Here’s a look back at just some of our AI announcements from March.

March was all about expanding AI across our products to make them even more helpful. The Gemini app was upgraded to include new features like personalization, Canvas and Audio Overviews, and we made our Deep Research and Gems features available at no cost. We also added our speedy Gemini 2.0 Flash Thinking experimental model under the hood — and one-upped ourselves just a few days later by making our most intelligent AI model, Gemini 2.5 Pro (experimental), available to all Gemini users.
Gemini 2.5 Pro is state-of-the-art on a wide range of benchmarks — try it out now in Gemini, or in Google AI Studio. Read on for more ways we've been making our leading AI even more helpful for Pixel users, online shoppers, roboticists, developers, wildfire authorities and beyond.

We expanded access to AI Overviews and introduced AI Mode. AI Overviews are one of our most popular Search features and are now used by more than a billion people. Starting in the U.S., we launched Gemini 2.0 for AI Overviews to help with harder questions, beginning with coding, advanced math and multimodal queries to provide faster and higher quality responses. We also announced the new AI Mode experiment in Google Search to help people get AI-powered responses and dig deeper with follow-up questions and links to helpful web content.
We launched personalization in Gemini to make its responses more relevant to you. Gemini with personalization gives you the option to use your Search history to deliver contextually relevant responses that are adapted to your individual interests. With your permission, Gemini can now tailor its responses based on your past searches, saving you time and delivering more precise answers. In the coming months, Gemini will expand its ability to understand you better by connecting with other Google apps and services, including Photos and YouTube.
We added updates for Gemini Live, Scam Detection and more in our March Pixel Drop. Our first Pixel Drop of the year included more helpful features and updates to your devices. Gemini Live, a conversational experience that helps you brainstorm ideas, simplify complex topics and rehearse for important moments, was upgraded for better performance during multilingual conversations. You can now seamlessly switch between more than 45 languages when speaking to Gemini Live. Plus, features like stronger Scam Detection, more accurate step counting and Auto-bedtime Mode bring even more AI to Pixel.
We released new AI tools for Google Shopping to help you find the perfect products. Our new immersive shopping features — like our vision match feature — can help you find clothes and beauty products to fit your style. With vision match, you can describe any garment you have in mind, and get back an AI-generated image that shows you what it could look like, along with similar shoppable products. It’s another way we’re making it even easier to find items that resonate with you.

We released Gemini Robotics to help bring AI into the physical world. We introduced two new AI models, built on Gemini 2.0, that are designed to lay the foundation for a new generation of helpful robots. The first is Gemini Robotics, an advanced vision-language-action (VLA) model designed to directly control robots. The second, Gemini Robotics-ER, is a Gemini model with advanced spatial understanding, enabling roboticists to run their own programs using Gemini’s embodied reasoning (ER) abilities.

We released Gemma 3 to help developers create even more helpful applications. Gemma 3, the latest version of our lightweight, state-of-the-art open models, is designed to be our most advanced and portable open model family, able to run on a single TPU or GPU. These models are meant to run fast, directly on devices — from phones and laptops to workstations — helping developers create AI applications wherever people need them.

The first FireSat satellite for early detection of wildfires made contact with Earth. The FireSat satellite launched from Vandenberg Space Force Base in California aboard SpaceX's Transporter-13 mission. This satellite — a collaboration between Google, Muon Space, Earth Fire Alliance, Moore Foundation, wildfire authorities and others — is the first of more than 50 in a first-of-its-kind constellation designed to ultimately use AI to detect and track wildfires as small as roughly 5x5 meters.
We launched three new initiatives to protect and restore nature using AI. A startup accelerator, kicking off in May 2025, includes programming, mentoring and technical support from Google. Google.org is also providing $3 million to support AI-enabled solutions for biodiversity, bioeconomy and agriculture from Brazilian nonprofits. And we released SpeciesNet, a cloud-based, open-source AI model for identifying animal species from camera trap photos, enabling people to protect nature and biodiversity.