How well do you know our I/O 2025 announcements?

Another Google I/O is in the books, and it was our most exciting one yet. This year, Googlers took the stage to share how we’re releasing new intelligent AI models, agentic products and personalized features faster than ever before, making them helpful for everyone. We announced updates to our Gemini models and the Gemini app, AI Mode in Search, our generative AI technology and more. Test how much you know about our biggest I/O announcements with this quiz. (And if you want to study up first, check out this list of the many, many things we announced at Google I/O 2025.)
-
1/18
True or false: We’re making Gemini 2.5 Pro even better by introducing an enhanced reasoning mode we’re calling Deep Think.
-
Don’t think twice about your answer, because it’s correct: We’re making Gemini 2.5 Pro even better by introducing Deep Think, an experimental, enhanced reasoning mode for highly complex math and coding. We’re making it available to trusted testers via the Gemini API to get their feedback before rolling it out widely.
2/18
We’re starting to roll out AI Mode, our most powerful AI search, to:
-
That’s right, we’re rolling out AI Mode for everyone in the U.S. — no Labs sign-up required. AI Mode offers more advanced reasoning, multimodality and the ability to go deeper through follow-up questions and helpful links to the web. Over the coming weeks, you’ll see a new tab for AI Mode appear in Search and in the search bar in the Google app. And starting this week, we're bringing a custom version of Gemini 2.5, our most intelligent model, into Search for both AI Mode and AI Overviews in the U.S.
3/18
Veo 3, our new state-of-the-art video generation model, not only improves on the quality of Veo 2, but also generates what for the first time?
-
Sounds about right: Veo 3 can generate videos with audio — traffic noises in the background of a city street scene, birds singing in a park, even dialogue between characters. Across the board, Veo 3 excels at everything from text and image prompting to real-world physics and accurate lip syncing. It’s available today for Ultra subscribers in the United States in the Gemini app and in Flow, and for enterprise users on Vertex AI.
4/18
What’s the name of our new AI subscription plan with the highest usage limits and access to our most capable models and premium features?
-
That’s ultra-correct. Google AI Ultra offers access to our most capable models and premium features, including Gemini, Flow and Whisk. You’ll also have access to our agentic research prototype, Project Mariner. Google AI Ultra is starting to roll out in the U.S. for $249.99/month (with a special offer for first-time users: 50% off your first three months), and is coming soon to more countries.
5/18
Which updated Gemini model did we just make available to everyone in the Gemini app?
-
That’s right! Gemini 2.5 Flash is our fast, cost-efficient thinking model. The new 2.5 Flash is now available in preview in Google AI Studio for developers, in Vertex AI for enterprise and in the Gemini app for everyone. In early June, it will be generally available for production.
6/18
What’s the name of our new AI filmmaking tool custom-designed for Google’s most advanced models — Veo, Imagen and Gemini?
-
Let’s just say we’re going with the flow. Built with and for creatives, Flow can help storytellers explore their ideas without bounds and create cinematic clips and scenes for their stories by bringing together Veo, Imagen and Gemini. It’s available today for Google AI Pro and Ultra plan subscribers in the U.S., with more countries coming soon.
7/18
True or false: You can now get a complete, customized Deep Research report that combines public data with your own uploaded files.
-
Yes, it’s not too good to be true: Now that you can upload your own PDFs, images and files from Drive to Deep Research, you’ll get a holistic understanding that cross-references your unique knowledge with broader trends all in one place, saving you time and revealing connections you might have otherwise missed.
8/18
With Search Live, you’ll be able to talk back and forth with Search using your ____.
-
Live from your camera, it’s…Search Live! We’re bringing Project Astra’s live capabilities into Search so you can talk back and forth with Search about what you see in real time, using your camera. For example, if you’re stumped on a project and need some help, simply tap the “Live” icon in AI Mode or in Lens, point your camera and ask your question. Just like that, Search becomes a learning partner that can see what you see.
9/18
On average, how much longer are people’s conversations with Gemini Live than their text-based Gemini conversations?
-
High five, that’s right! People love Gemini Live. In fact, Gemini Live conversations are, on average, five times longer than text-based conversations, because it offers new ways to get help, whether it's troubleshooting a broken appliance or getting personalized shopping advice.
10/18
What is Agent Mode?
-
Your brain’s not on vacation mode: Agent Mode is a new experimental capability coming soon to desktop for Gemini app users on the Ultra plan. Simply state your objective, and Gemini intelligently orchestrates the steps to achieve it. Agent Mode combines advanced features like live web browsing, in-depth research and smart integrations with your Google apps, so it can manage complex, multi-step tasks from start to finish with minimal oversight from you.
11/18
If you head to Search Labs in the U.S., what can you upload to virtually try on billions of apparel listings?
-
That’s right! With our “try on” experiment, online shoppers using Search Labs in the U.S. can now try on billions of apparel listings just by uploading a single image of themselves. It’s powered by a new custom image generation model that understands the human body and the nuances of clothing — like how different materials fold, stretch and drape on different bodies.
12/18
In the coming weeks, we’ll make Gemini Live more personal by connecting some of your favorite Google apps so you can take actions mid-conversation. Which app(s) will you be able to connect?
-
Exactly: Gemini Live will integrate more deeply into your daily life starting with Google Maps, Calendar, Tasks and Keep, with more app connections coming later. You can always manage these app connections and your information anytime in the app’s settings.
Even better: Gemini Live will integrate more deeply into your daily life starting with Google Maps, Calendar, Tasks and Keep, with more app connections coming later. You can always manage these app connections and your information anytime in the app’s settings.
13/18
AI Overviews are now available in more than ____ countries and territories and more than ____ languages.
-
You’re right on target. AI Overviews are now available in more than 200 countries and territories and in more than 40 languages, with support added for Arabic, Chinese, Malay, Urdu and more.
Not quite. AI Overviews are now available in more than 200 countries and territories and in more than 40 languages, with support added for Arabic, Chinese, Malay, Urdu and more.
14/18
Our new video communication platform Google Beam combines our AI video model and ____ to transform standard 2D video streams into realistic 3D experiences.
-
You’re not in the dark: Google Beam combines our AI video model and a light field display, allowing you to make eye contact, read subtle cues and build understanding and trust as if you were face-to-face.
Google Beam actually combines a light field display with our AI video model, allowing you to make eye contact, read subtle cues and build understanding and trust as if you were face-to-face.
15/18
In an I/O demo, XR product manager Nishtha Bhatia used Gemini on her Android XR glasses to recall a detail about the coffee she had backstage. What was that detail?
-
You’ve got a latte going for you, because that’s correct. Nishtha used Gemini on her Android XR glasses to remember the name of the coffee shop. She also used her glasses to schedule a coffee at that cafe for later in the day, take a picture of I/O attendees and translate a conversation in Hindi and Farsi in real time.
That’s not right, but we still like you a latte. Nishtha used Gemini on her Android XR glasses to remember the name of the coffee shop. She also used her glasses to schedule a coffee at that cafe for later in the day, take a picture of I/O attendees and translate a conversation in Hindi and Farsi in real time.
16/18
Speech translation in Google Meet translates your spoken words into your listener’s preferred language — while preserving your voice and expression. Which languages are now available?
-
Talk about exciting: Google Meet’s near real-time, low-latency speech translation is available now to Google AI Pro and Ultra subscribers in beta, initially in English and Spanish, with more languages coming in the next few weeks. We're also further developing this capability for businesses, with early testing coming to Workspace customers this year. Speech translation makes sure your voice, tone and expressions still shine through — even when translated — allowing people speaking different languages to have natural conversations.
17/18
We’re infusing LearnLM directly into Gemini 2.5, which is now the world’s leading model for learning. LearnLM is our family of models and capabilities that is:
-
This answer gets an A+. LearnLM is our family of models and capabilities fine-tuned for learning and built in partnership with education experts. With LearnLM, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively.
Almost, but the A+ answer is all of the above. LearnLM is our family of models and capabilities fine-tuned for learning and built in partnership with education experts. With LearnLM, Gemini adheres to the principles of learning science to go beyond just giving you the answer. Instead, Gemini can explain how you get there, helping you untangle even the most complex questions and topics so you can learn more effectively.
18/18
Finally, what did I/O originally stand for?
-
You’re an I/O pro. Originally, the name I/O was based on the first two digits in a googol (a one, followed by 100 zeroes), the number that lends our company its name. According to lore, I/O has evolved to also nod to “input / output,” referencing the computational concept of interfacing between a computer system and the outside world, and “innovation in the open.” Pretty fitting, don’t you think?
No, but you can still be an I/O pro. Originally, the name I/O was based on the first two digits in a googol (a one, followed by 100 zeroes), the number that lends our company its name. According to lore, I/O has evolved to also nod to “input / output,” referencing the computational concept of interfacing between a computer system and the outside world, and “innovation in the open.” Pretty fitting, don’t you think?