Our work in artificial intelligence helps us make our products — like Pixel — as helpful as they can be. Whether you’re using your Pixel to translate a foreign language, edit photos or take a phone call in a noisy area — AI is already making everyday moments easier.
Here are seven ways AI, much of it made possible by Google Tensor, Pixel's custom-built chip, makes Pixel better.
1. Magic Eraser lets you fix those almost-perfect shots. It uses machine learning to identify and remove unwanted distractions in your photos, like strangers in the background or telephone wires. You can erase the suggested distractions all at once or tap to remove them one by one. You can also circle or brush over anything else you want out of the photo, and Magic Eraser will determine exactly what you're trying to remove.
Before and after: Magic Eraser removes the people around the Delicate Arch in Arches National Park, Utah, United States.
2. Photo Unblur helps bring your blurry photos back into focus with just a few taps — a new feature only available on Pixel 7 and Pixel 7 Pro. Using a model we developed that runs on-device, it detects and reduces blur and visual noise to improve the quality of the whole photo and any faces in it. This even works on photos not taken on Pixel — so you can restore pictures of your grandparents or kids.
3. Real Tone uses computer vision, a type of artificial intelligence, to "see and understand" more skin tones and represent them beautifully, authentically and more accurately. Real Tone's improvements to the way Pixel Camera renders skin tones were developed in partnership with external image experts, including photographers, cinematographers and colorists, who helped test our cameras and expand our dataset to feature 25 times more images of people of color than before.
4. Super Res Zoom combines details from multiple shots, stitching them together to enhance image quality and sharpness. This is all possible thanks to a blend of hardware and software. You can get sharp, high-quality images from a distance; your friends will think you were courtside when you show them the pictures you took of your sports hero in action.
5. The Call Assist feature suite uses Google AI to solve some common problems with the original "feature" of our smartphones: making and receiving calls. Clear Calling helps you easily hear the person on the other end of the line, using machine learning to filter out background noise, from windy streets to noisy restaurants. And Call Screen uses on-device models to tell you who is calling from an unknown number, and why, before you pick up.
6. Guided Frame helps users who are blind or have low vision capture selfies. Using the front-facing camera, computer vision and Google's TalkBack mode, it guides you through taking a selfie, giving clear instructions (like "move your phone slightly left") on how to tilt and rotate the camera to get everyone in the frame.
7. Live Translate enables real-time translation without an app and without an internet connection, and not just of text: it also handles spoken words, interpreting live audio from one speaker to another. That means you can read text in another language by pointing the camera at a sign or a menu, or watch a video that isn't in your native tongue with Live Caption. And thanks to Google Tensor, it runs on-device rather than through a network and server.