
Google I/O 2025 Announcements Recap

Published on June 3, 2025

Introduction

Historically, Google I/O has been a conference that gets developers excited with its announcements: skydivers parachuting in to introduce Google Glass, Kotlin's debut with a live conversion of Java into a few lines of Kotlin code, or the Duplex showcase that highlighted the possibilities of conversational AI.

Google I/O is the annual event I always look forward to. This year, I didn’t join in Mountain View, but I did watch everything from my couch. Listening to the launches from my living room is not as exciting, but some announcements still got me hyped about the new possibilities for developers and consumers. 

Here’s what stood out to me at Google I/O 2025 as a long-time member of the Google Developer Community and a Google Developer Expert.

AI Focus

As expected, AI was the key part of the show. Central to Google's strategy are its consumer and developer products, specifically the Gemini family of models. In particular, the conference spotlighted Gemini 2.5 Pro (top-level), 2.5 Flash (fast and affordable), Nano (built into the browser), and Diffusion (experimental, similar to what we have for image generation but applied to text generation).

You’ll see below that most of the announcements were heavily centered on Gemini. Learn more about how Gemini gets more personal, proactive, and powerful.

AI Mode

AI Mode fills a gap I felt was there for a while: "Google it" has long been a common expression when someone wants to research something online. However, especially among younger generations, I’ve noticed a shift towards saying "chat it", researching on video-based platforms, or interacting with large language models (LLMs) to obtain more complex and detailed answers.

AI Mode, now available in the US, opens up the possibility of asking longer and more complex queries, and it has the potential to be a game-changer for how we interact with AI. This new search mode enables you to receive more detailed responses, and it can even follow up with helpful links. Where before you had to split your research across several queries, you can now pack more nuance into a single one and get an AI-powered response.

Hybrid AI Experience on All Browsers and Devices

Lately, I’ve been learning about and creating a lot of content on Network APIs. One aspect I find really interesting is the ability to reach users and devices where they are. For instance, businesses can check a device's status and determine whether capabilities such as data or SMS are available, and then reach the user over the best available channel at that moment. At this year’s Google I/O, Google announced a similar capability, but for running AI: the web platform can check a device’s AI capacity and offer a hybrid AI experience on all browsers and devices. It defaults to client-side AI to protect sensitive data and improve privacy and security, and falls back to server-side AI as needed, for example when the device lacks the capacity to run a model locally; client-side AI also keeps working when the device is offline or has a bad connection. Learn more about when to choose client-side AI.
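The fallback decision described above can be sketched as a small function. This is a minimal sketch of the idea, not the actual built-in API: the availability check is injected as a callback (in the browser it would be the platform's own client-side AI availability check, whose exact surface may differ).

```javascript
// Sketch of a hybrid AI fallback: prefer client-side inference when the
// device supports it, otherwise fall back to a server endpoint.
// `checkAvailability` is injected so the same logic works in the browser
// (backed by a built-in AI availability check) and in tests.
async function chooseInferenceChannel(checkAvailability, isOnline) {
  // Hypothetical availability states: "available" or "unavailable".
  const availability = await checkAvailability();
  if (availability === "available") {
    return "client"; // sensitive data never leaves the device
  }
  if (isOnline) {
    return "server"; // fall back to server-side AI
  }
  return "unavailable"; // offline and no local model: degrade gracefully
}
```

For example, on a device with no local model but a working connection, `chooseInferenceChannel(async () => "unavailable", true)` resolves to `"server"`.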

Flow, A New AI Filmmaking Tool 

With Flow, which is part of Google AI Ultra, you can create cool videos with control over angles, motion, and perspective. You describe a scene in natural language, upload your images or generate them with Imagen, and extend them with Veo 3’s native audio generation. Under the hood it uses Veo, Imagen, and Gemini. You can watch some short films here.

Android XR Smart Glasses

I took an Android course years ago, but it’s been a while since I’ve done much with this ecosystem. However, I couldn’t skip talking about Android XR Smart Glasses. You can speak to the AI assistant about everything you can see and explore; it has a microphone, a front-facing camera, and speakers. Catch up on all the Android highlights.

Chrome Extensions

To create a Chrome extension, we need HTML, CSS, and JavaScript. We can build them locally and also publish them. A long time ago, I wrote a Chrome extension tutorial, Create Your First Chrome Extension in JavaScript to Hide Your API Keys. Now seems like a nice time to create a new one, because there are new built-in AI APIs that allow you to use Gemini Nano on the client without requiring a server.
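To give a feel for what such an extension could look like, here is a hedged sketch of a popup script that asks the on-device model to explain a selected snippet. The `LanguageModel` global and its `create()`/`prompt()` methods are assumptions based on the evolving built-in AI (Prompt API) surface; check the current docs for the exact names.

```javascript
// Sketch of an extension popup script using Chrome's built-in AI
// (Gemini Nano). The `LanguageModel` interface below is an assumption;
// the shipped built-in AI API surface may differ.

// Pure helper: wrap the selected text into an instruction for the model.
function buildPrompt(selection) {
  return `Explain the following code snippet in one sentence:\n\n${selection.trim()}`;
}

async function explainSelection(selection) {
  // Feature-detect: the built-in AI is only present in supporting browsers.
  if (typeof LanguageModel === "undefined") {
    return "Built-in AI is not available in this browser.";
  }
  const session = await LanguageModel.create(); // runs on-device, no server needed
  return session.prompt(buildPrompt(selection));
}
```

Because inference happens on the client, the selected code never leaves the user's machine, which is exactly the kind of privacy win the built-in AI APIs are aiming for.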

AI Assistance in Chrome DevTools

Open DevTools in your Chrome browser, for instance by right-clicking a page and selecting “Inspect”. You’ll see tabs such as Elements, Performance, Network, and Sources. Now you can easily chat with Gemini there to help you debug, test, and navigate!

Here’s an example of it in use. I enabled the “AI assistance” feature in the settings, opened a SIM swap project I had running on localhost:3000, and interacted with the AI on each tab to see what kind of responses it gives. In the exchange below, I asked why the background of my password input was blue, and it gave me a proper, thorough answer.

Question: Why is my password input showing up with a blue background?

Answer: The password input field likely has a blue background because the browser is indicating that it has been autofilled. While the password input itself doesn't have an autocomplete attribute explicitly set, its parent form has autocomplete="on". This allows the browser to autofill the form fields, including the password input, and the browser uses the blue background to highlight these autofilled fields for security purposes. The observed rgb(232, 240, 254) background color is consistent with the default autofill background color in many browsers. If you don't want the input field to be autofilled, you can set the autocomplete attribute on the password input to "off".

Firebase Studio for AI-Powered App Prototyping

Firebase Studio has been around for a while. It’s web-based, built on VS Code, with live preview, a full cloud virtual machine, and more. At Google I/O 2025, new backend integrations were announced. It’s super interesting and straightforward to use; I tested it with one of my existing GitHub projects. I’ll give you an overview below, but I plan to write more content on this tool!

You open Firebase Studio and describe your app in natural language; you can also import code or add a GitHub repo. Then you click to prototype with AI.

It then creates a customizable app blueprint (see below) that contains the features, style guides, and the new “Stack” that creates the backend of what you need (for instance, I’ve tried it with Cloud Firestore and Firebase Authentication). In a nutshell, it creates the visual aspect, the backend side, and even determines how data is stored.

App Blueprint containing the features, style guidelines, and stack.

You can customize the prompt, test everything within the emulator with the support of Gemini, and when you’re ready, click to publish to the host and spin up the resources needed.

In the demo I tried, I added the server.js code, and it generated a UI for me using TypeScript, Next.js, and Tailwind CSS. I then had to fine-tune the customization and environment variables, and it finally worked; I was able to test everything from my browser and publish it.

App UI containing login and register buttons on the navigation bar, a welcome message, an explanation of what the app does and what you can do with it, a get started button, and a login button.

Go and check the Top Firebase Studio updates from Google I/O 2025.

Jules

What if there were an asynchronous coding agent that you could assign tasks to? Especially the tasks you hate. It can write your tests and your documentation, and then generate a podcast summary of your codebase, so you can focus solely on the coding you actually enjoy.

Welcome to Jules. Jules is an asynchronous coding agent, and it's built on Gemini 2.5.

Go ahead and build with Jules, your asynchronous coding agent.

Final Thoughts

There are tons of websites and videos out there covering what was announced at Google I/O 2025, such as the Google I/O 2025 Recaps Playlist. In this blog post, I offered my perspective on the announcements that interested me most! I’ll keep you posted on more as I attend Google I/O Connect in Berlin, talk to fellow developers, take some workshops, and see how these announcements are being used in action.

Got any questions or comments? Join our thriving Developer Community on Slack, follow us on X (formerly Twitter), or subscribe to our Developer Newsletter. Stay connected, share your progress, and keep up with the latest developer news, tips, and events!

Share:

Amanda Cavallaro, Developer Advocate