The latest AI news we announced in September
For more than 20 years, we’ve invested in machine learning and AI research, tools and infrastructure to build products that make everyday life better for more people. Teams across Google are working on ways to unlock AI’s benefits in fields as wide-ranging as healthcare, crisis response and education. To keep you posted on our progress, we're doing a regular roundup of Google's most recent AI news.
Here’s a look back at some of our AI announcements from September.

Forget pumpkin spice; the real news from September was the massive stack of AI updates. We shipped major AI upgrades across our most popular core services, including Chrome, Search and Android, making them significantly smarter. And the Gemini app became a powerhouse with its latest Gemini Drop, featuring the viral Nano Banana, Gemini Live, custom and shareable Gems, and the no-code app-building tool, Canvas. Meanwhile, even as we integrate AI across our digital products, Google DeepMind is busy working to bring helpful robots into the physical world.
Our overarching goal with AI remains the same: to make it as useful as possible, whether through the fun, visible features that help you with everyday tasks or through the essential, behind-the-scenes work that boosts your cybersecurity and learning.

We added major new AI features to Chrome. Gemini in Chrome now acts as an AI browsing assistant, helping you answer questions and find information across all of your open tabs. We also introduced AI Mode in the omnibox for asking complex, multi-part questions, alongside upcoming agentic capabilities that will automate multi-step tasks like ordering groceries. Plus, AI now keeps you safer by proactively blocking new types of scams and enhancing security and privacy features. For a deeper look at the update, we shared how AI was built into the new, shinier Chrome.

We upgraded AI Mode in Search, making it easier to get inspired and search visually. By combining Gemini 2.5 and our new "visual search fan-out" technique, we’ve unlocked a deeper understanding of images and your natural language questions. Now, you’ve got stunningly precise visual search results that make everything from shopping to exploring new room designs more intuitive than ever before.
We shared five tips for Search Live, a new way to get help in real time. By integrating an interactive voice conversation in AI Mode with the ability to share your phone’s camera feed, we’ve created a new way to get multimodal help in real time. Search can now literally see what your camera sees and respond instantly, providing hands-free help with tasks like travel exploration, complex troubleshooting and bringing school projects to life.
We expanded AI Mode to new languages. This update brings our most powerful AI search experience, powered by a custom version of Gemini, to new languages globally: Hindi, Indonesian, Japanese, Korean, Brazilian Portuguese and Spanish. The expansion focuses on a nuanced understanding of local information so users can ask complex questions and explore the web more deeply in their preferred language.

We shared 10 ways you can use Nano Banana in the Gemini app. Since launching in August, Google DeepMind’s image generation and editing model, fondly known as Nano Banana, has quickly grown in popularity in the Gemini app. So we created 10 examples to show how capable and fun the model is, whether you want help with more straightforward tasks like swapping outfits in a photo, or you’re interested in complex, imaginative image generation, like showing your adult self having a tea party with a younger you.
We made collaboration easier in the Gemini app with the ability to share custom Gems. Gems allow you to tailor Gemini for specific needs, and now you can share the ones you create with friends, family or coworkers. The sharing process is similar to Google Drive, giving you control over who can view or edit your personalized AI tools like detailed vacation guides or even custom meal planners.

We launched new Android features to help you polish and share what you write. The latest features in Android include new AI writing tools in Gboard to revise your tone and automatically fix spelling and grammar right on your device. We also announced the ability to let two people listen to the same audio simultaneously, introduced a way to create private QR code audio broadcasts and redesigned Quick Share for instant file transfer previews and live progress updates.

We introduced the next step in bringing helpful robots into the physical world. Google DeepMind is leveling up robotics with Gemini Robotics 1.5 and Gemini Robotics-ER 1.5, kicking off the era of physical agents. These models let robots see, plan, think and use tools to tackle complicated, multi-step tasks far better than before, and allow what one type of robot learns to transfer to other types. Think of Gemini Robotics-ER 1.5 as the smart brain, handling the big-picture reasoning (and even Googling stuff!), while Gemini Robotics 1.5 is the mover, turning visual information and instructions into motor commands so a robot can perform a task.

We introduced new features into NotebookLM to help with learning. Our latest updates turn NotebookLM into your ultimate personal AI study partner, with a focus on active learning. You can instantly create flashcards and quizzes grounded in your own notes, generate upgraded reports with suggested formats like a blog post or study guide, and try the Learning Guide option for personalized, step-by-step tutoring. Plus, you can now hear your sources in new ways with Audio Overviews that offer perspectives like a Critique or a Debate.
We launched new resources to promote AI literacy for parents, students and educators. These resources include a new podcast for parents called "Raising kids in the age of AI," expanded student programs like the Be Internet Awesome AI Literacy curriculum and the AI Quests game-based experience. This work includes substantial support for teachers, with over 650,000 educators trained so far and $40 million in grants dedicated to scaling AI literacy programs.
We introduced Guided Learning, a new, interactive study partner in the Gemini app. Powered by LearnLM, our family of AI models fine-tuned for education, Guided Learning helps you navigate any topic step by step, ask questions and build real understanding. With helpful videos and images, the result is a personalized tutor that can break down complex code, create study plans from your uploaded material, and guide you to homework solutions without doing the work for you.
Sundar Pichai spoke at the White House AI Education Task Force meeting. Sundar highlighted Google’s major push to support AI education across the U.S., including offering Gemini for Education to every high school in America. This builds on Google’s broader $1 billion commitment to support AI education in the U.S., including giving all students and teachers access to our best AI tools, putting $150 million toward grants for AI education and digital wellbeing, and expanding our AI for Education Accelerator from 100 to 200 colleges and universities.

We sent our AI to the International Collegiate Programming Contest World Finals. Gemini 2.5 Deep Think reached a major AI milestone, achieving gold-medal-level performance at the ICPC World Finals. This breakthrough in abstract problem-solving builds on our earlier gold at the International Mathematical Olympiad, demonstrating Gemini's world-class coding and reasoning capabilities.