100 things we announced at I/O

Yesterday at Google I/O, we shared how we’re taking our progress in AI and applying it across our products. Major upgrades are coming to the Gemini app, our generative AI tools and everything in between — including some truly incredible advances in our AI models (and new ways you can access them).
Here’s a list of I/O 2025’s highlights — many of which you can try today!
Ask anything with AI in Search
1. Try it now! AI Mode is starting to roll out to everyone in the U.S. right on Search. But if you want access right away, opt in via Labs.
2. For questions where you want an even more thorough response, we’re bringing deep research capabilities into AI Mode in Labs with Deep Search.
3. Live capabilities from Project Astra are coming to AI Mode in Labs. With Search Live, coming this summer, you can talk back and forth with Search about what you see in real time, using your camera.
4. We’re also bringing agentic capabilities from Project Mariner to AI Mode in Labs, starting with event tickets, restaurant reservations and local appointments.
5. Coming soon: When you need some extra help crunching numbers or visualizing data, AI Mode in Labs will analyze complex datasets and create graphics that bring them to life, all custom-built for your query. We’ll bring this to sports and finance queries.
6. We’re introducing a new AI Mode shopping experience that brings together advanced AI capabilities with our Shopping Graph to help you browse for inspiration, think through considerations and find the right product for you.
7. Try it now! You can virtually try on billions of apparel listings just by uploading a photo of yourself. Our “try on” experiment is rolling out to Search Labs users in the U.S. starting today — opt in to try it out now.
8. We also showed off a new agentic checkout to help you easily buy at a price that fits your budget. Just tap “track price” on any product listing, set what you want to spend, and we’ll let you know if the price drops.
9. We shared some updates on AI Overviews: Since last year’s I/O, AI Overviews have scaled up to 1.5 billion monthly users in 200 countries and territories. That means Google Search is bringing generative AI to more people than any other product in the world.
10. In our biggest markets, like the U.S. and India, AI Overviews are driving an increase of over 10% in usage of Google for the types of queries that show them.
11. And starting this week, Gemini 2.5 is coming to Search in the U.S., for both AI Mode and AI Overviews.
Try new, helpful features for Gemini
12. Try it now! Gemini is now an even better study partner with our new interactive quiz feature. Simply ask it to “create a practice quiz on…” and it will generate questions for you.
13. In the coming weeks we’ll also make Gemini Live more personal by connecting some of your favorite Google apps so you can take actions mid-conversation, like adding something to your calendar or asking for more details about a location. We’re starting with Google Maps, Calendar, Tasks and Keep, with more app connections coming later.
14. Try it now! Starting today, camera and screen sharing capabilities for Gemini Live are beginning to roll out beyond Android to Gemini app users on iOS.
15. Try it now! Starting today, we’re introducing a new Create menu within Canvas that helps you explore the breadth of what Canvas can build for you, allowing you to transform text into interactive infographics, web pages, immersive quizzes and even podcast-style Audio Overviews in 45 languages.
16. Try it now! Starting today, you can upload PDFs and images directly into Deep Research so your research reports draw from a combination of public information and details that you provide.
17. Soon, you’ll be able to link your documents from Drive or from Gmail and customize the sources Deep Research pulls from, like academic literature.
18. We announced Agent Mode, an experimental feature where you simply describe your end goal and Gemini gets things done on your behalf. An experimental version of Agent Mode in the Gemini app is coming soon to Google AI Ultra subscribers.
19. Try it now! Gemini in Chrome will begin rolling out on desktop to Google AI Pro and Google AI Ultra subscribers in the U.S. who use English as their Chrome language on Windows and macOS.
20. The Gemini app now has over 400 million monthly active users.
Learn more about advancements for Gemini models
21. With our latest update, Gemini 2.5 Pro is now the world-leading model across the WebDev Arena and LMArena leaderboards.
22. We’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. As detailed in our latest report, Gemini 2.5 Pro outperformed competitors on every category of learning science principles.
23. We introduced a new preview version of Gemini 2.5 Flash, optimized for speed and efficiency, with stronger performance on coding and complex reasoning tasks.
24. 2.5 Flash is now available to everyone in the Gemini app, and we'll make our updated version generally available in Google AI Studio for developers and in Vertex AI for enterprises in early June, with 2.5 Pro soon after.
25. 2.5 Pro will get even better with Deep Think, an experimental, enhanced reasoning mode for highly complex math and coding.
26. We’re bringing new capabilities to both 2.5 Pro and 2.5 Flash, including advanced security safeguards. Our new security approach helped significantly increase Gemini’s protection rate against indirect prompt injection attacks during tool use, making Gemini 2.5 our most secure model family to date.
27. We're bringing Project Mariner's computer use capabilities into the Gemini API and Vertex AI. Companies like Automation Anywhere, UiPath, Browserbase, Autotab, The Interaction Company and Cartwheel are exploring its potential, and we're excited to roll it out more broadly for developers to experiment with this summer.
28. Both 2.5 Pro and Flash will now include thought summaries in the Gemini API and in Vertex AI. Thought summaries take the model’s raw thoughts and organize them into a clear format with headers, key details and information about model actions, like when the model uses tools.
29. We launched 2.5 Flash with thinking budgets to give developers more control over cost by balancing latency and quality, and we’re extending this capability to 2.5 Pro. Thinking budgets let you cap the number of tokens a model uses to think before it responds, or turn thinking off entirely (see the first sketch after this list). Thinking budgets for 2.5 Pro will be generally available for stable production use in the coming weeks, alongside the generally available version of the model.
30. We added native SDK support for Model Context Protocol (MCP) definitions in the Gemini API for easier integration with open-source tools (see the second sketch after this list). We’re also exploring ways to deploy MCP servers and other hosted tools, making it easier for you to build agentic applications.
31. We introduced a new research model called Gemini Diffusion. This text diffusion model learns to generate outputs by converting random noise into coherent text or code, similar to how our current image and video generation models work. We’ll continue our work on different approaches to lowering latency in all our Gemini models, with a faster 2.5 Flash Lite coming soon.
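To make thinking budgets concrete, here’s a minimal sketch using the google-genai Python SDK. The model name, prompts and budget values are illustrative, and exact parameter names may vary by SDK version:

```python
# A minimal sketch of thinking budgets with the google-genai Python SDK.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # key from Google AI Studio

# Cap how many tokens the model may spend "thinking" before it answers.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the sum of the first 50 prime numbers?",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)

# A budget of 0 turns thinking off entirely for latency-sensitive calls.
fast = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Translate 'hello' into French.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=0)
    ),
)
print(fast.text)
```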
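And here’s a hedged sketch of the MCP integration, assuming the google-genai SDK’s experimental support for passing a live MCP session as a tool. `my-mcp-server` is a hypothetical stdio server command, and the exact integration point may differ by SDK version:

```python
# A hedged sketch: using an MCP server's tools from the Gemini API.
# "my-mcp-server" is a hypothetical command, not a real package.
import asyncio
from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

client = genai.Client(api_key="YOUR_API_KEY")
server = StdioServerParameters(command="my-mcp-server", args=[])  # hypothetical

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Passing the live MCP session surfaces the server's tools
            # to the model and routes its tool calls back to the server.
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash",
                contents="Use the available tools to answer my question.",
                config=types.GenerateContentConfig(tools=[session]),
            )
            print(response.text)

asyncio.run(main())
```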
Access our AI tools with new options
32. We introduced Google AI Ultra, a new AI subscription plan with the highest usage limits and access to our most capable models and premium features, plus 30 TB of storage and access to YouTube Premium.
33. Google AI Ultra is available in the U.S. now, with more countries coming soon. It’s $249.99 a month, with a special offer for first-time users: 50% off your first three months.
34. College students in the U.S., Brazil, Indonesia, Japan and the U.K. are also eligible to get a free upgrade of Gemini for a whole school year — more countries are coming soon.
35. There’s also Google AI Pro, which gives you a suite of AI tools for $19.99/month. This Pro plan will level up your Gemini app experience. It also includes products like Flow, NotebookLM and more, all with special features and higher rate limits.
Explore your creativity with new generative AI
36. Try it now! We announced Veo 3, which lets you generate video with audio and is now available in the Gemini app for Google AI Ultra subscribers in the U.S., as well as in Vertex AI.
37. We also added new capabilities to our popular Veo 2 model, including new camera controls, outpainting, and object addition and removal.
38. We showed you four new films created with Veo alongside other tools and techniques. View these films from our partners and other inspirational content on Flow TV.
39. Try it now! Imagen 4 is our latest Imagen model, and it has remarkable clarity in fine details like skin, fur and intricate textures, and excels in both photorealistic and abstract styles. Imagen 4 is available today in the Gemini app.
40. Imagen 4 is also available in Whisk, and to enterprises in Vertex AI.
41. Soon, Imagen 4 will be available in a Fast version that’s up to 10x faster than Imagen 3.
42. Imagen 4 can create images in a range of aspect ratios and at up to 2K resolution, so you get even higher quality for printing and presentations.
43. It is also significantly better at spelling and typography, making it easier to create your own greeting cards, posters and even comics.
44. Try it now! Flow is our new AI filmmaking tool. Using Google DeepMind’s best-in-class models, Flow lets you weave cinematic films with control of characters, scenes and styles, so more people than ever can create visually striking movies with AI.
45. Flow is available today for Google AI Pro and Ultra plan subscribers in the United States.
46. In April, we expanded access to Music AI Sandbox, powered by Lyria 2. Lyria 2 brings powerful composition and endless exploration, and is now available for creators through YouTube Shorts and enterprises in Vertex AI.
47. Lyria 2 can arrange rich vocals that sound like a solo singer or a full choir.
48. Lyria RealTime is an interactive music generation model that lets anyone create, control and perform music in real time. This model is now available via the Gemini API in Google AI Studio and Vertex AI.
49. We announced a partnership between Google DeepMind and Primordial Soup, a new venture dedicated to storytelling innovation founded by pioneering director Darren Aronofsky. Primordial Soup is producing three short films using Google DeepMind’s generative AI models, tools and capabilities, including Veo.
50. The first film, “ANCESTRA,” is directed by award-winning filmmaker Eliza McNitt and will premiere at the Tribeca Festival on June 13, 2025.
51. To make it easier for people and organizations to detect AI-generated content, we announced SynthID Detector, a verification portal that helps to quickly and efficiently identify content that is watermarked with SynthID.
52. And since launch, SynthID has already watermarked over 10 billion pieces of content.
53. We are starting to roll out the SynthID Detector portal to a group of early testers. Journalists, media professionals and researchers can join our waitlist to gain access to the SynthID Detector.
Take a look at the future of AI assistance
54. We’re working to extend our best multimodal foundation model, Gemini 2.5 Pro, to become a “world model” that can make plans and imagine new experiences by understanding and simulating aspects of the world, just as the brain does.
55. Updates to Project Astra, our research prototype exploring the capabilities of a universal AI assistant, include more natural voice output with native audio, improved memory and computer control. Over time, we’ll bring these new capabilities to Gemini Live, new experiences in Search, the Live API for developers and new form factors like Android XR glasses.
56. And as part of our Project Astra research, we partnered with the visual interpreting service Aira to build a prototype that assists members of the blind and low-vision community with everyday tasks, complementing the skills and tools they already use.
57. With Project Astra, we’re prototyping a conversational tutor that can help with homework. Not only can it follow along with what you’re working on, but it can also walk you through problems step-by-step, identify mistakes and even generate diagrams to help explain concepts if you get stuck.
58. This research experience will be coming to Google products later this year and Android Trusted Testers can sign up for the waitlist for a preview.
59. We took a look at the first Android XR device coming later this year: Samsung’s Project Moohan. This headset will offer immersive experiences on an infinite screen.
60. And we shared a sneak peek at how Gemini will work on glasses with Android XR in real-world scenarios, including messaging friends, making appointments, asking for turn-by-turn directions, taking photos and more.
61. We even demoed live language translation between two people, showing the potential for these glasses to break down language barriers.
62. Android XR prototype glasses are now in the hands of trusted testers, who are helping us make sure we’re building a truly assistive product and doing so in a way that respects privacy for you and those around you.
63. Plus we’re partnering with innovative eyewear brands, starting with Gentle Monster and Warby Parker, to create glasses with Android XR that you’ll want to wear all day.
64. We’re advancing our partnership with Samsung to go beyond headsets and extend Android XR to glasses. Together we’re creating a software and reference hardware platform that will enable the ecosystem to make great glasses. Developers will be able to start building for this platform later this year.
Communicate better, in near real time
65. A few years ago, we introduced Project Starline, a research project that used 3D video technology to make remote conversations feel like two people were in the same room. Now it’s evolving into a new platform called Google Beam.
66. We’re working with Zoom and HP to bring the first Google Beam devices to market with select customers later this year. We’re also partnering with industry leaders like Zoom, Diversified and AVI-SPL to bring Google Beam to businesses and organizations worldwide.
67. You’ll even see the first Google Beam products from HP at InfoComm in a few weeks.
68. We announced speech translation, which is available now in Google Meet. This translation feature not only happens in near real-time, thanks to Google AI, but it’s able to maintain the quality, tone, and expressiveness of someone’s voice. The free-flowing conversation enables people to understand each other and feel connected, with no language barrier.
Build better with developer launches
69. Over 7 million developers are building with Gemini, five times more than this time last year.
70. Gemini usage on Vertex AI is up 40 times compared to this time last year.
71. We’re releasing new previews for text-to-speech in 2.5 Pro and 2.5 Flash. These have first-of-their-kind support for multiple speakers, enabling text-to-speech with two voices via native audio out (see the sketch at the end of this list). Like Native Audio dialogue, the text-to-speech output is expressive and can capture subtle nuances, such as whispers. It works in over 24 languages and seamlessly switches between them.
72. The Live API is introducing a preview version of audio-visual input and native audio out dialogue, so you can directly build conversational experiences.
73. Try it now! Jules is a parallel, asynchronous agent for your GitHub repositories to help you improve and understand your codebase. It is now open to all developers in beta. With Jules you can delegate multiple backlog items and coding tasks at the same time, and even get an audio overview of all the recent updates to your codebase.
74. Gemma 3n is our latest fast and efficient open multimodal model that’s engineered to run smoothly on your phones, laptops, and tablets. It handles audio, text, image, and video. The initial rollout is underway on Google AI Studio and Google Cloud with plans to expand to open-source tools in the coming weeks.
75. Try it now! Google AI Studio now has a cleaner UI, integrated documentation, usage dashboards, new apps, and a new Generate Media tab to explore and experiment with our cutting-edge generative models, including Imagen, Veo and native image generation.
76. Colab will soon be a new, fully agentic experience. Simply tell Colab what you want to achieve, and watch as it takes action in your notebook, fixing errors and transforming code to help you solve hard problems faster.
77. SignGemma is an upcoming open model that translates sign language into spoken-language text (it’s best at translating American Sign Language to English), enabling developers to create new apps and integrations for Deaf and Hard of Hearing users.
78. MedGemma is our most capable open model for multimodal medical text and image comprehension, designed for developers to adapt and build health applications, like analyzing medical images. MedGemma is available now as part of Health AI Developer Foundations.
79. Stitch is a new AI-powered tool to generate high-quality UI designs and corresponding frontend code for desktop and mobile by using natural language descriptions or image prompts.
80. Try it now! We announced Journeys in Android Studio, which lets developers test critical user journeys using Gemini by describing test steps in natural language.
81. Version Upgrade Agent in Android Studio is coming soon to automatically update dependencies to the latest compatible version, parsing through release notes, building the project and fixing any errors.
82. We introduced new updates across the Google Pay API designed to help developers create smoother, safer, and more successful checkout experiences, including Google Pay in Android WebViews.
83. Flutter 3.32 has new features designed to accelerate development and enhance apps.
84. And we shared updates for our Agent Development Kit (ADK), the Vertex AI Agent Engine, and our Agent2Agent (A2A) protocol, which enables interactions between multiple agents.
85. Try it now! Developer Preview for Wear OS 6 introduces Material 3 Expressive and updated developer tools for Watch Faces, richer media controls and the Credential Manager for authentication.
86. Try it now! We announced that Gemini Code Assist for individuals and Gemini Code Assist for GitHub are generally available, and developers can get started in less than a minute. Gemini 2.5 now powers both the free and paid versions of Gemini Code Assist, with advanced coding performance that helps developers excel at tasks like creating visually compelling web apps, along with code transformation and editing.
87. Here’s an example of a recent update you can explore in Gemini Code Assist: Quickly resume where you left off and jump into new directions with chat history and threads.
88. Firebase announced new features and tools to help developers build AI-powered apps more easily, including updates to the recently launched Firebase Studio and Firebase AI Logic, which enables developers to integrate AI into their apps faster.
89. We also introduced a new Google Cloud and NVIDIA developer community, a dedicated forum to connect with experts from both companies.
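Here’s the text-to-speech sketch mentioned above: a hedged example of two-speaker output via the Gemini API with the google-genai Python SDK. The preview model ID and voice names (“Kore,” “Puck”) follow the current Gemini API docs and may change; the output is assumed to be raw 16-bit PCM at 24 kHz:

```python
# A hedged sketch of two-speaker text-to-speech via the Gemini API.
import wave
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

script = """Host: Welcome back to the show.
Guest: (whispering) Thanks, it's great to be here."""

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",  # assumed preview model ID
    contents=script,
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            multi_speaker_voice_config=types.MultiSpeakerVoiceConfig(
                speaker_voice_configs=[
                    types.SpeakerVoiceConfig(
                        speaker="Host",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(
                                voice_name="Kore"
                            )
                        ),
                    ),
                    types.SpeakerVoiceConfig(
                        speaker="Guest",
                        voice_config=types.VoiceConfig(
                            prebuilt_voice_config=types.PrebuiltVoiceConfig(
                                voice_name="Puck"
                            )
                        ),
                    ),
                ]
            )
        ),
    ),
)

# Wrap the returned PCM bytes in a WAV container so it plays anywhere.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("dialogue.wav", "wb") as f:
    f.setnchannels(1)   # mono
    f.setsampwidth(2)   # 16-bit samples
    f.setframerate(24000)
    f.writeframes(pcm)
```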
Work smarter with AI enhancements
90. Gmail is getting new, personalized smart replies that incorporate your own context and tone. They’ll pull from your past emails and files in your Drive to draft a response, while also matching your typical tone so your replies sound like you. Try it yourself later this year.
91. Try it now! Google Vids is now available to Google AI Pro and Ultra users.
92. Try it now! Starting today, we’re making the NotebookLM app available on the Play Store and App Store to help users take Audio Overviews on the go.
93. Also for NotebookLM, we’re bringing more flexibility to Audio Overviews, allowing you to select the ideal length for your summaries, whether you prefer a quick overview or a deeper exploration.
94. Video Overviews are coming soon to NotebookLM, helping you turn dense information like PDFs, docs, images, diagrams and key quotes into more digestible narrated overviews.
95. We even shared one of our NotebookLM notebooks with you — which included a couple of previews of Video Overviews!
96. Our new Labs experiment Sparkify helps you turn your questions into a short animated video, made possible by our latest Gemini and Veo models. These capabilities will be coming to Google products later this year, but in the meantime you can sign up for the waitlist for a chance to try it out.
97. We’re also bringing improvements based on your feedback to Learn About, an experiment in Labs where conversational AI meets your curiosity.
Finally… we’ll leave you with a few numbers:
99. As Sundar shared in his opening keynote, people are adopting AI more than ever before. As one example: This time last year, we were processing 9.7 trillion tokens a month across our products and APIs. Now, we’re processing over 480 trillion — 50 times more.
100. Given that, it’s no wonder that the word “AI” was said 92 times during the keynote. But the number of “AIs” we heard actually took second place — to Gemini! ♊
