
11 ways we're innovating with AI


AI is integral to so much of the work we do at Google. Fundamental advances in computing are helping us confront some of the greatest challenges of this century, like climate change. Meanwhile, AI is also powering updates across our products, including Search, Maps and Photos — demonstrating how machine learning can improve your life in both big and small ways. 

In case you missed it, here are some of the AI-powered updates we announced at Google I/O.


LaMDA is a breakthrough in natural language understanding for dialogue.

Human conversations are surprisingly complex. They’re grounded in concepts we’ve learned throughout our lives; are composed of responses that are both sensible and specific; and unfold in an open-ended manner. LaMDA — short for “Language Model for Dialogue Applications” — is a machine learning model designed for dialogue and built on Transformer, a neural network architecture that Google invented and open-sourced. We think that this early-stage research could unlock more natural ways of interacting with technology and entirely new categories of helpful applications. Learn more about LaMDA.


And MUM, our new AI language model, will eventually help make Google Search a lot smarter.

In 2019 we launched BERT, a Transformer AI model that can better understand the intent behind your Search queries. Multitask Unified Model (MUM), our latest milestone, is 1,000 times more powerful than BERT. It can learn across 75 languages at once (most AI models train on one language at a time), and it can understand information across text, images, video and more. We’re still in the early days of exploring MUM, but the goal is that one day you’ll be able to type a long, information-dense, and natural-sounding query like “I’ve hiked Mt. Adams and now want to hike Mt. Fuji next fall, what should I do differently to prepare?” and more quickly find the relevant information you need. Learn more about MUM.

 

Project Starline will help you feel like you’re there, together.

Imagine looking through a sort of magic window. And through that window, you see another person, life-size, and in three dimensions. You can talk naturally, gesture and make eye contact.


Images: people reconnecting over Project Starline, including a woman talking with her sister and baby, friends catching up, and a conversation in sign language.

We brought in people to reconnect using Project Starline.

Project Starline is a technology project that combines advances in hardware and software to enable friends, family and co-workers to feel together, even when they're cities (or countries) apart. To create this experience, we’re applying research in computer vision, machine learning, spatial audio and real-time compression. And we’ve developed a light field display system that creates a sense of volume and depth without needing additional glasses or headsets. It feels like someone is sitting just across from you, like they’re right there. Learn more about Project Starline.


Within a decade, we’ll build the world’s first useful, error-corrected quantum computer. And our new Quantum AI campus is where it’ll happen. 

Confronting many of the world’s greatest challenges, from climate change to the next pandemic, will require a new kind of computing. A useful, error-corrected quantum computer will allow us to mirror the complexity of nature, enabling us to develop new materials, better batteries, more effective medicines and more. Our new Quantum AI campus — home to research offices, a fabrication facility, and our first quantum data center — will help us build that computer before the end of the decade. Learn more about our work on the Quantum AI campus.


Maps will help reduce hard-braking moments while you drive.

Soon, Google Maps will use machine learning to reduce your chances of experiencing hard-braking moments — incidents where you slam on your brakes, caused by things like sudden traffic jams or confusion about which highway exit to take.

When you get directions in Maps, we calculate your route based on a lot of factors, like how many lanes a road has or how direct the route is. With this update, we’ll also factor in the likelihood of hard-braking. Maps will identify the two fastest route options for you, and then we’ll automatically recommend the one with fewer hard-braking moments (as long as your ETA is roughly the same). We believe these changes have the potential to eliminate over 100 million hard-braking events in routes driven with Google Maps each year. Learn more about our updates to Maps.
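To make that tradeoff concrete, here’s a minimal sketch of the selection rule described above, assuming a hypothetical Route type, a model-estimated hard-brake count, and an invented two-minute ETA tolerance; Google’s actual routing logic is not public.

```python
# Hypothetical sketch: among the two fastest candidate routes, prefer the one
# with fewer predicted hard-braking moments if the ETA penalty is small.
from dataclasses import dataclass


@dataclass
class Route:
    name: str
    eta_minutes: float
    predicted_hard_brakes: float  # model-estimated hard-braking events


def pick_route(routes: list[Route], max_eta_penalty_min: float = 2.0) -> Route:
    """Return the safer of the two fastest routes when ETAs are roughly equal."""
    fastest, runner_up = sorted(routes, key=lambda r: r.eta_minutes)[:2]
    eta_gap = runner_up.eta_minutes - fastest.eta_minutes
    if (runner_up.predicted_hard_brakes < fastest.predicted_hard_brakes
            and eta_gap <= max_eta_penalty_min):
        return runner_up
    return fastest


routes = [
    Route("I-90 via Exit 4", eta_minutes=31.0, predicted_hard_brakes=3.2),
    Route("Route 2 surface streets", eta_minutes=32.5, predicted_hard_brakes=0.8),
]
print(pick_route(routes).name)  # -> Route 2 surface streets
```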


Your Memories in Google Photos will become even more personalized.

With Memories, you can already look back on important photos from years past or highlights from the last week. Using machine learning, we’ll soon be able to identify the less-obvious patterns in your photos. Starting later this summer, when we find a set of three or more photos with similarities like shape or color, we'll highlight these little patterns for you in your Memories. For example, Photos might identify a pattern of your family hanging out on the same couch over the years — something you wouldn’t have ever thought to search for, but that tells a meaningful story about your daily life. Learn more about our updates to Google Photos.
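As a rough illustration of the grouping idea, the sketch below clusters photo embeddings by cosine similarity and keeps any group of three or more; the embeddings, the threshold, and the greedy single-link grouping are all assumptions, not Google Photos’ actual pipeline.

```python
# Illustrative only: embed each photo (capturing cues like shape and color),
# then surface any cluster of three or more similar shots as a "pattern."
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # hypothetical cosine-similarity cutoff
MIN_PATTERN_SIZE = 3        # "three or more photos," per the post


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))


def find_patterns(embeddings: dict[str, np.ndarray]) -> list[set[str]]:
    """Greedy single-link grouping: seed a cluster, absorb similar photos."""
    remaining = dict(embeddings)
    patterns = []
    while remaining:
        seed_id, seed_vec = remaining.popitem()
        cluster = {seed_id}
        for photo_id, vec in list(remaining.items()):
            if cosine(seed_vec, vec) >= SIMILARITY_THRESHOLD:
                cluster.add(photo_id)
                del remaining[photo_id]
        if len(cluster) >= MIN_PATTERN_SIZE:
            patterns.append(cluster)
    return patterns


photos = {
    "couch_2019": np.array([1.0, 0.9, 0.1]),
    "couch_2020": np.array([0.95, 1.0, 0.12]),
    "couch_2021": np.array([1.05, 0.88, 0.08]),
    "beach_2021": np.array([0.1, 0.2, 1.0]),
}
print(find_patterns(photos))  # one cluster: the three similar couch photos
```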


And Cinematic moments will bring your pictures to life.

When you’re trying to get the perfect photo, you usually take the same shot two or three (or 20) times. Using neural networks, we can take two nearly identical images and fill in the gaps by creating new frames in between. This creates vivid, moving images called Cinematic moments. 

Producing this effect from scratch would take professional animators hours, but with machine learning we can automatically generate these moments and bring them to your Recent Highlights. Best of all, you don’t need a specific phone; Cinematic moments will come to everyone across Android and iOS. Learn more about Cinematic moments in Google Photos.

Two nearly identical pictures of a child and their baby sibling become a moving image: Cinematic moments bring your pictures to life, thanks to AI.
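To show where the synthesized frames sit, here’s a deliberately naive stand-in that cross-fades pixels between the two shots; the real feature uses learned models to synthesize plausible motion, and interpolate_frames and its frame count are invented for illustration.

```python
# Naive frame interpolation: blend linearly between two nearly identical
# images to produce the in-between frames of a short moving clip.
import numpy as np


def interpolate_frames(img_a: np.ndarray, img_b: np.ndarray,
                       n_frames: int = 8) -> list[np.ndarray]:
    """Return n_frames images blending gradually from img_a to img_b."""
    a = img_a.astype(np.float32)
    b = img_b.astype(np.float32)
    frames = []
    for i in range(1, n_frames + 1):
        t = i / (n_frames + 1)  # interpolation weight, 0 < t < 1
        frames.append(((1 - t) * a + t * b).astype(np.uint8))
    return frames


shot_1 = np.zeros((4, 4, 3), dtype=np.uint8)          # stand-in photo A
shot_2 = np.full((4, 4, 3), 255, dtype=np.uint8)      # stand-in photo B
print(len(interpolate_frames(shot_1, shot_2)))        # -> 8 in-between frames
```

A learned interpolator replaces the linear blend with motion estimation, so objects appear to move rather than dissolve, but the surrounding plumbing looks much the same.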

New features in Google Workspace help make collaboration more inclusive. 

In Google Workspace, assisted writing will suggest more inclusive language when applicable. For example, it may recommend that you use the word “chairperson” instead of “chairman” or “mail carrier” instead of “mailman.” It can also give you other stylistic suggestions to avoid passive voice and offensive language, which can speed up editing and help make your writing stronger. Learn more about our updates to Workspace.
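As a toy illustration of this kind of suggestion, the sketch below flags words from a small lookup table; the actual Workspace feature is model-driven, and the table and suggest_inclusive function are invented here.

```python
# Rule-based stand-in for assisted writing's inclusive-language suggestions.
import re

INCLUSIVE_ALTERNATIVES = {  # hypothetical table, seeded from the post's examples
    "chairman": "chairperson",
    "mailman": "mail carrier",
}


def suggest_inclusive(text: str) -> list[tuple[str, str]]:
    """Return (found_word, suggested_replacement) pairs for the text."""
    suggestions = []
    for word, alternative in INCLUSIVE_ALTERNATIVES.items():
        if re.search(rf"\b{word}\b", text, flags=re.IGNORECASE):
            suggestions.append((word, alternative))
    return suggestions


print(suggest_inclusive("The chairman asked the mailman to wait."))
# -> [('chairman', 'chairperson'), ('mailman', 'mail carrier')]
```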


Google Shopping shows you the best products for your particular needs, thanks to our Shopping Graph.

To help shoppers find what they’re looking for, we need to have a deep understanding of all the products that are available, based on information from images, videos, online reviews and even inventory in local stores. Enter the Shopping Graph: our AI-enhanced model tracks products, sellers, brands, reviews, product information and inventory data — as well as how all these attributes relate to one another. With people shopping across Google more than a billion times a day, the Shopping Graph makes those sessions more helpful by connecting people with over 24 billion listings from millions of merchants across the web. Learn how we’re working with merchants to give you more ways to shop.
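One way to picture products, sellers, brands and reviews “relating to one another” is as edges in a graph. The sketch below is purely illustrative, with invented node names and relations; it is not the Shopping Graph’s actual schema.

```python
# Minimal illustrative graph: nodes for products, sellers and brands,
# labeled edges for the relations between them.
from collections import defaultdict


class ShoppingGraph:
    def __init__(self) -> None:
        # edges[node] -> list of (relation, other_node)
        self.edges: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def relate(self, src: str, relation: str, dst: str) -> None:
        self.edges[src].append((relation, dst))

    def neighbors(self, node: str, relation: str) -> list[str]:
        return [dst for rel, dst in self.edges[node] if rel == relation]


g = ShoppingGraph()
g.relate("product:trail-shoe-42", "sold_by", "seller:outdoor-store")
g.relate("product:trail-shoe-42", "made_by", "brand:acme")
g.relate("product:trail-shoe-42", "reviewed_in", "review:9871")
print(g.neighbors("product:trail-shoe-42", "sold_by"))  # -> ['seller:outdoor-store']
```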


A dermatology assist tool can help you figure out what’s going on with your skin.

Each year we see billions of Google Searches related to skin, nail and hair issues, but it can be difficult to describe what you’re seeing on your skin through words alone.

With our CE-marked AI-powered dermatology assist tool, a web-based application that we aim to make available for early testing in the EU later this year, it’s easier to figure out what might be going on with your skin. Simply use your phone’s camera to take three images of the skin, hair or nail concern from different angles. You’ll then be asked questions about your skin type, how long you’ve had the issue and other symptoms that help the AI narrow down the possibilities. The AI model analyzes all of this information and draws from its knowledge of 288 conditions to give you a list of possible conditions that you can then research further. It’s not meant to be a replacement for diagnosis, but rather a good place to start. Learn more about our AI-powered dermatology assist tool.
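The flow described above (three photos plus questionnaire answers in, a ranked shortlist of the 288 known conditions out) might be outlined like this; the stub classifier and condition names are placeholders, not Google’s model or API.

```python
# Hedged outline of the dermatology assist flow with a stub classifier.
import random

NUM_CONDITIONS = 288  # the post says the model knows 288 conditions
CONDITION_NAMES = [f"condition_{i}" for i in range(NUM_CONDITIONS)]  # placeholder


def score_conditions(images: list[bytes], answers: dict[str, str]) -> list[float]:
    """Stub: a real system would run a multimodal model over images + answers."""
    rng = random.Random(len(images) + len(answers))  # deterministic stand-in
    return [rng.random() for _ in range(NUM_CONDITIONS)]


def top_matches(images: list[bytes], answers: dict[str, str],
                top_k: int = 5) -> list[tuple[str, float]]:
    """Rank all conditions by score and keep the top_k for the user to research."""
    scores = score_conditions(images, answers)
    ranked = sorted(zip(CONDITION_NAMES, scores), key=lambda item: -item[1])
    return ranked[:top_k]


answers = {"skin_type": "II", "duration": "3 weeks", "itchy": "yes"}
print(top_matches([b"img1", b"img2", b"img3"], answers))
```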


And AI could help improve screening for tuberculosis.

Tuberculosis (TB) is one of the leading causes of death worldwide, infecting 10 million people per year and disproportionately impacting people in low-to-middle-income countries. It’s also really tough to diagnose early because of how similar symptoms are to other respiratory diseases. Chest X-rays help with diagnosis, but experts aren’t always available to read the results. That’s why the World Health Organization (WHO) recently recommended using technology to help with screening and triaging for TB. Researchers at Google are exploring how AI can be used to identify potential TB patients for follow-up testing, hoping to catch the disease early and work to eradicate it. Learn more about our ongoing research into tuberculosis screening.
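In a screening-and-triage setup like the one the WHO recommends, a model scores each chest X-ray and patients above an operating threshold are flagged for confirmatory testing. Here’s a minimal sketch with an invented threshold and invented scores; it is not a validated screening tool.

```python
# Sketch of AI-assisted triage: flag model scores above a chosen operating
# point for follow-up TB testing by clinicians.
TRIAGE_THRESHOLD = 0.5  # hypothetical operating point


def triage(xray_scores: dict[str, float]) -> list[str]:
    """Return patient IDs whose model score warrants confirmatory testing."""
    return [patient_id for patient_id, score in xray_scores.items()
            if score >= TRIAGE_THRESHOLD]


print(triage({"patient_a": 0.81, "patient_b": 0.12, "patient_c": 0.57}))
# -> ['patient_a', 'patient_c']
```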

