Building with AI: highlights for developers at Google I/O

We believe developers are the architects of the future. That's why Google I/O is our most anticipated event of the year, and a perfect moment to bring developers together and share what we've been building for them.
In that spirit, we updated Gemini 2.5 Pro Preview with even better coding capabilities a few weeks ago. Today, we're unveiling a new wave of announcements across our developer products, designed to make building transformative AI applications even easier.
Here are details on the latest updates.
Gemini 2.5 Flash Preview is now even better
- Gemini 2.5 Flash Preview: Today we're introducing an updated version of our leading model, with stronger performance on coding and complex reasoning tasks while remaining optimized for speed and efficiency.
- Better transparency and control: Thought summaries are now available across our 2.5 models, and we'll soon bring thinking budgets to 2.5 Pro Preview to help developers further manage costs and control how our models think before they respond.
- Availability: For now, both versions of Gemini 2.5 Flash, as well as 2.5 Pro, will be available in Preview in Google AI Studio and Vertex AI, with general availability for Flash coming in early June and for Pro soon after.
New models for developers’ use cases
Today, we’re introducing new models to give developers even more variety to choose from to meet their specific building requirements.
- Gemma 3n: Our latest fast and efficient open multimodal model, engineered to run smoothly on phones, laptops and tablets. It handles audio, text, images and video. You can preview the model today in Google AI Studio and with Google AI Edge. Learn more in the blog.
- Gemini Diffusion: This new state-of-the-art text model isn't just fast, it's very fast: the experimental demo of Gemini Diffusion released today generates text at five times the speed of our fastest model so far, while matching its coding performance. If you're interested in getting access, you can sign up for the waitlist today.
- Lyria RealTime: A new experimental interactive music generation model that lets anyone create, control and perform music in real time. Lyria RealTime is available via the Gemini API, and you can try it in the starter app in Google AI Studio; a minimal API sketch follows this list.
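
For a sense of how driving Lyria RealTime from code might look, here is a minimal sketch in Python using the GenAI SDK. Because the model is experimental, the v1alpha API version, model name and session methods shown are assumptions from the preview and may change; `YOUR_API_KEY` is a placeholder.

```python
import asyncio

from google import genai
from google.genai import types

# Lyria RealTime is experimental: the v1alpha API version, model name and
# session methods below are assumptions from the preview and may change.
client = genai.Client(api_key="YOUR_API_KEY", http_options={"api_version": "v1alpha"})

async def main():
    async with client.aio.live.music.connect(model="models/lyria-realtime-exp") as session:
        # Steer the stream with weighted text prompts, set tempo, then start playback.
        await session.set_weighted_prompts(
            prompts=[types.WeightedPrompt(text="minimal techno", weight=1.0)]
        )
        await session.set_music_generation_config(
            config=types.LiveMusicGenerationConfig(bpm=120, temperature=1.0)
        )
        await session.play()
        async for message in session.receive():
            # Each message carries a chunk of generated audio to play or save.
            _ = message
            break

asyncio.run(main())
```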
Additional Gemma family variants
- MedGemma: Our most capable open model for multimodal medical text and image comprehension, designed for developers to adapt and build on for health applications, like analyzing medical images. MedGemma is available now as part of Health AI Developer Foundations.
- SignGemma: An upcoming open model that translates sign languages into spoken language text (best at American Sign Language to English), enabling developers to create new apps and integrations for Deaf and Hard of Hearing users. Share your input at goo.gle/SignGemma.
Tools to make building with AI even easier
- A new, more agentic Colab: Colab will soon offer a fully agentic experience. Simply tell Colab what you want to achieve, and watch as it takes action in your notebook, fixing errors and transforming code to help you solve hard problems faster.
- Gemini Code Assist: Our free AI coding assistant, Gemini Code Assist for individuals, and our code review agent, Gemini Code Assist for GitHub, are now generally available to all developers. In addition, Gemini 2.5 now powers Gemini Code Assist, and a 2 million token context window is coming to Gemini Code Assist Standard and Enterprise developers once it's available on Vertex AI.
- Firebase Studio: Firebase Studio, our new cloud-based AI workspace, makes it even easier for developers to turn their ideas into full-stack AI apps. Developers can bring Figma designs to life right in Firebase Studio using the builder.io plugin. And, rolling out starting today, Firebase Studio can detect when your app needs a backend and provision it for you.
- Jules: Now available to everyone, Jules is an asynchronous coding agent that gets out of your way, so you can focus on the coding you want to do while Jules picks up the tasks you'd rather not. It can tackle your backlog of bugs, handle multiple tasks at once, and even take the first cut at building out a new feature. Jules works directly with GitHub, clones your repository to a Cloud VM, and, when you're ready, creates a PR that you can merge back into your project.
- Stitch: A new AI-powered tool that generates high-quality UI designs and corresponding frontend code for desktop and mobile from natural language descriptions or image prompts. Stitch lets you bring ideas to life, lightning fast: iterate on your designs conversationally, adjust themes, and easily export your creations to CSS/HTML or Figma to keep going.
Building with the Gemini API
- Google AI Studio updates: The fastest place to start building with the Gemini API, with access to cutting-edge Gemini 2.5 models along with new generative media models like Imagen, Veo and native image generation. We've also integrated Gemini 2.5 Pro into Google AI Studio's native code editor, enabling you to prototype faster. It's tightly optimized with the GenAI SDK so you can instantly generate web apps from text, image or video prompts (a quick-start sketch follows this list). Start from a simple prompt, or get inspired by the starter apps in the showcase.
- Native Audio Output & Live API: We're also introducing a new Gemini 2.5 Flash model in Preview for the Live API that includes several new features: proactive video, where the model can detect and remember key events; proactive audio, where the model chooses not to respond to irrelevant audio signals; and affective dialog, where the model can respond to a user's tone. The model starts rolling out later today.
- Native Audio Dialogue: Starting later today, developers can preview new Gemini 2.5 Flash and 2.5 Pro text-to-speech (TTS) capabilities that enable sophisticated single- and multi-speaker speech output. With the new controllable TTS models, developers can precisely direct voice style, accent and pace for highly customized AI-generated audio (see the TTS sketch after this list).
- Asynchronous Function Calling: This new feature will enable longer-running functions or tools to be called in the background without blocking the main conversational flow.
- Computer Use API: A new feature that lets developers build applications that can browse the web or use other software tools under your direction. It's available today in the Gemini API to Trusted Testers and will roll out to more developers later this year.
- URL Context: We're adding support for a new experimental tool, URL context, which retrieves full-page context from URLs. It can be used on its own or in conjunction with other tools such as Google Search (see the URL context sketch after this list).
- Model Context Protocol: We're also announcing that the Gemini API and SDK will support Model Context Protocol (MCP), making it easy for developers to use a wide range of open source tools (a sketch of the expected pattern closes out this list).
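
Since several of the items above build on the Gemini API, a minimal quick-start helps ground them. This sketch assumes the `google-genai` Python package (`pip install google-genai`) and an API key from Google AI Studio; the exact preview model ID may differ from what's shown.

```python
from google import genai

# A minimal sketch: create a client with an AI Studio API key and call a
# 2.5 Flash preview model. The model ID here is an assumption and may differ.
client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",
    contents="Summarize thinking budgets in two sentences.",
)
print(response.text)
```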
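The controllable TTS capability maps onto the same `generate_content` call, requesting audio output instead of text. A sketch, assuming the preview TTS model ID and the "Kore" prebuilt voice from the preview documentation:

```python
import wave

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Request spoken audio from the TTS preview; the model ID and the "Kore"
# prebuilt voice are assumptions based on the preview and may change.
response = client.models.generate_content(
    model="gemini-2.5-flash-preview-tts",
    contents="Say cheerfully: welcome to Google I/O!",
    config=types.GenerateContentConfig(
        response_modalities=["AUDIO"],
        speech_config=types.SpeechConfig(
            voice_config=types.VoiceConfig(
                prebuilt_voice_config=types.PrebuiltVoiceConfig(voice_name="Kore")
            )
        ),
    ),
)

# The response carries raw 24 kHz, 16-bit mono PCM; wrap it in a WAV container.
pcm = response.candidates[0].content.parts[0].inline_data.data
with wave.open("out.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)
    f.setframerate(24000)
    f.writeframes(pcm)
```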
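The URL context tool slots into the request config alongside any other tools. A sketch, assuming the experimental tool is enabled for your key and the same preview model ID as above:

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# URL context is experimental: the model fetches and reads the page cited
# in the prompt, optionally alongside other tools like Google Search.
response = client.models.generate_content(
    model="gemini-2.5-flash-preview-05-20",  # assumed preview model ID
    contents="Summarize the key points from https://blog.google/technology/developers/",
    config=types.GenerateContentConfig(
        tools=[types.Tool(url_context=types.UrlContext())]
    ),
)
print(response.text)
```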
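Finally, since MCP support is announced rather than final, treat this as a sketch of the expected pattern: the SDK accepting an MCP client session directly as a tool and routing the model's tool calls to it. The `@your-org/your-mcp-server` package is a hypothetical stand-in for whatever MCP server you actually run.

```python
import asyncio

from google import genai
from google.genai import types
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

client = genai.Client(api_key="YOUR_API_KEY")

# Hypothetical MCP server launched over stdio; substitute one you actually run.
server = StdioServerParameters(command="npx", args=["-y", "@your-org/your-mcp-server"])

async def main():
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Expected pattern: pass the live MCP session as a tool so the
            # model can call the tools that the server exposes.
            response = await client.aio.models.generate_content(
                model="gemini-2.5-flash-preview-05-20",
                contents="Use the available tools to answer: what can you do?",
                config=types.GenerateContentConfig(tools=[session]),
            )
            print(response.text)

asyncio.run(main())
```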
That's not all! There's even more coming out of Google I/O for developers. Tune in to the Developer Keynote at 1:30pm PT to find out everything we announced to help developers build with the best of Google AI.
