Google Tensor is a milestone for machine learning
Every few years, machine learning (ML) completely changes the way we use tech. And we’re proud to say that sometimes, Google products are part of that. We’ve seen Google Assistant make your devices more helpful and Google Translate break down language barriers, but we haven’t always been able to bring the best of ML to the smartphone.
That’s why we made Google Tensor: a chip that can deliver totally new capabilities for Pixel users by keeping pace with the latest advancements in ML.
Built with Google Research
A few years ago, Google teams across hardware, software and ML came together to build the best mobile ML computer and finally realize our vision of what should be possible on our Pixel smartphones.
Co-designing Google Tensor with Google Research gave us insight into where ML models are heading, not where they are today. This allowed us to build an AI/ML platform that could keep up with our work at Google.
With Google Tensor, we’re unlocking amazing new experiences that require state-of-the-art ML — including Motion Mode, Face Unblur, Speech enhancement mode for videos and applying HDRNet to videos (more on these later). Google Tensor allows us to push the limits of helpfulness in a smartphone, turning it from a one-size-fits-all piece of hardware into a device that’s intelligent enough to respect and accommodate the different ways we use our phones.
Designed differently
We designed Google Tensor differently: it was built to be a premium system on a chip (SoC) with everything you would expect from a mobile SoC, and more.
So how did we do this? The core experience areas for our new phones — speech, language, imaging and video — are all heterogeneous by nature, meaning each draws on multiple resources across the entire chip. So we carefully designed Google Tensor to deliver the right level of compute performance, efficiency and security. And with Android 12, we set out to build an OS that lays the foundation for the future of hardware and software working together. You can see this in real-world use cases, like taking amazing videos or understanding more foreign languages.
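How work gets scheduled across Tensor’s subsystems isn’t something apps control directly, but developers reach Android’s on-device accelerators through standard interfaces such as TensorFlow Lite’s NNAPI delegate. A minimal sketch of that path, assuming a hypothetical model.tflite bundled in the app’s assets with a rank-1 float input and output:

```kotlin
import android.content.Context
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.io.FileInputStream
import java.nio.MappedByteBuffer
import java.nio.channels.FileChannel

// Run a bundled TensorFlow Lite model through the Android Neural Networks
// API (NNAPI), which lets the OS route supported operations to on-device
// accelerators instead of running everything on the CPU.
fun runModel(context: Context, input: FloatArray): FloatArray {
    // Memory-map the (hypothetical) model packaged in the APK's assets.
    val model: MappedByteBuffer = context.assets.openFd("model.tflite").use { fd ->
        FileInputStream(fd.fileDescriptor).channel.map(
            FileChannel.MapMode.READ_ONLY, fd.startOffset, fd.declaredLength
        )
    }
    val delegate = NnApiDelegate()
    val interpreter = Interpreter(model, Interpreter.Options().addDelegate(delegate))
    try {
        // Assumes a single rank-1 float32 output tensor.
        val output = FloatArray(interpreter.getOutputTensor(0).numElements())
        interpreter.run(input, output)
        return output
    } finally {
        interpreter.close()
        delegate.close()
    }
}
```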
What Google Tensor can do
Collaboration across Google Research, hardware and software allowed us to bring new capabilities to Pixel 6 and Pixel 6 Pro. These experiences are possible because Google Tensor runs more advanced, state-of-the-art ML models at lower power than previous Pixel phones could.
For example, Google Assistant on Google Tensor uses the most accurate Automatic Speech Recognition (ASR) ever released by Google. And for the first time, we can use a high-quality ASR model even in long-running applications like Recorder and tools like Live Caption without quickly draining the battery.
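Google hasn’t published developer details for these particular models, but Android 12 exposes on-device speech recognition to apps through the standard SpeechRecognizer API. A minimal sketch (the callback wiring is illustrative, and the app needs the RECORD_AUDIO permission):

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Request on-device speech recognition (API 31+), which keeps audio local
// and avoids a network round-trip to server-side ASR.
fun startOnDeviceAsr(context: Context, onResult: (String) -> Unit): SpeechRecognizer {
    val recognizer = SpeechRecognizer.createOnDeviceSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle?) {
            results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()?.let(onResult)
        }
        // The remaining callbacks are unused in this sketch.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
    }
    recognizer.startListening(intent)
    return recognizer
}
```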
You’ll also be able to better communicate with people in the language you’re most comfortable with, thanks to Google Tensor and the new Live Translate feature on Pixel 6 and Pixel 6 Pro. More chat apps — like Messages and WhatsApp — will let you translate directly in the conversation, meaning no more copying and pasting text into the Google Translate web service. Google Tensor also enables Live Translate to work on media like videos using on-device speech and translation models. And compared to the previous model on Pixel 4 phones, the new on-device neural machine translation (NMT) model uses less than half the power when running on Google Tensor.
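Live Translate itself is a system feature, but the same class of on-device neural translation models is available to any app through ML Kit’s Translation API. A minimal sketch, assuming the com.google.mlkit:translate dependency and an illustrative Japanese-to-English pair:

```kotlin
import com.google.mlkit.nl.translate.TranslateLanguage
import com.google.mlkit.nl.translate.Translation
import com.google.mlkit.nl.translate.TranslatorOptions

// Translate text entirely on-device with ML Kit's NMT models: the language
// pack is downloaded once, then every translation runs locally.
fun translateJapaneseToEnglish(text: String, onResult: (String) -> Unit) {
    val options = TranslatorOptions.Builder()
        .setSourceLanguage(TranslateLanguage.JAPANESE)
        .setTargetLanguage(TranslateLanguage.ENGLISH)
        .build()
    val translator = Translation.getClient(options)
    translator.downloadModelIfNeeded()
        .onSuccessTask { translator.translate(text) }
        .addOnSuccessListener { translated ->
            onResult(translated)
            translator.close()
        }
        .addOnFailureListener { translator.close() }
}
```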
Google Tensor also powers computational photography and video features, which are part of what make Pixel such an impressive phone. Take one of our favorite new features as an example: Motion Mode.
Tensor’s heterogeneous architecture uses the entire chip to enable this new feature at a quality that wasn’t achievable before. And because our chip’s subsystems work better together, Tensor can handle these photography tasks more quickly.
There’s also video, which is always a tough use case to solve for. We’ve always dreamed of getting Pixel video to match the quality of Pixel photos, and Google Tensor has helped us deliver better experiences in both.
We embedded part of HDRNet, the technology behind the signature Pixel look, directly onto the chip, where it runs more efficiently. As a result, it works in all video modes for the first time, even at 4K and 60 frames per second, delivering recordings with more accurate and vivid colors.
You can also expect more accurate face detection on Pixel 6 and Pixel 6 Pro compared to previous Pixel phones. Not only will your phone locate and focus on your subject more quickly, it will also consume about half the power of Pixel 5 while doing so.
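The camera pipeline’s face detector isn’t directly exposed to apps, but comparable on-device face detection is available through ML Kit. A minimal sketch that finds face bounding boxes in a Bitmap (the FAST mode choice mirrors the latency focus described above):

```kotlin
import android.graphics.Bitmap
import android.graphics.Rect
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

// Detect face bounding boxes on-device; FAST mode trades some accuracy for
// latency, which matters when results drive per-frame focus decisions.
fun detectFaces(bitmap: Bitmap, onFaces: (List<Rect>) -> Unit) {
    val options = FaceDetectorOptions.Builder()
        .setPerformanceMode(FaceDetectorOptions.PERFORMANCE_MODE_FAST)
        .build()
    val detector = FaceDetection.getClient(options)
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    detector.process(image)
        .addOnSuccessListener { faces ->
            onFaces(faces.map { it.boundingBox })
            detector.close()
        }
        .addOnFailureListener { detector.close() }
}
```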
More protection with Tensor security core and Titan M2
Together, Titan M2, Google Tensor security core, and TrustZone™ running Trusty OS give Pixel 6 and Pixel 6 Pro the most layers of hardware security in any phone.
Our chip includes Tensor security core, a new CPU-based subsystem that works with the next generation of our dedicated security chip, Titan M2, to protect your sensitive data. Independent security lab testing showed that Titan M2 can withstand attacks like electromagnetic analysis, voltage glitching and even laser fault injection. Yes, we literally shot lasers at our chip!
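Apps can put this hardware to work through the Android Keystore: requesting a StrongBox-backed key asks the system to generate and hold the key in a dedicated secure element, which on Pixel 6 is Titan M2. A minimal sketch (the key alias and parameters are illustrative):

```kotlin
import android.security.keystore.KeyGenParameterSpec
import android.security.keystore.KeyProperties
import javax.crypto.KeyGenerator
import javax.crypto.SecretKey

// Generate an AES-256 key whose material lives in the StrongBox secure
// element and never enters app memory; crypto operations run in hardware.
fun generateStrongBoxKey(alias: String): SecretKey {
    val spec = KeyGenParameterSpec.Builder(
        alias,
        KeyProperties.PURPOSE_ENCRYPT or KeyProperties.PURPOSE_DECRYPT
    )
        .setBlockModes(KeyProperties.BLOCK_MODE_GCM)
        .setEncryptionPaddings(KeyProperties.ENCRYPTION_PADDING_NONE)
        .setKeySize(256)
        // Throws StrongBoxUnavailableException at generation time on
        // devices without a secure element.
        .setIsStrongBoxBacked(true)
        .build()
    val generator = KeyGenerator.getInstance(
        KeyProperties.KEY_ALGORITHM_AES, "AndroidKeyStore"
    )
    generator.init(spec)
    return generator.generateKey()
}
```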
Google Tensor was built around the AI and ML work we’ve been doing in collaboration with Google Research, in order to deliver real-world user experiences. Tensor is unlocking experiences that weren’t possible until now. We love the idea that helpful technology is available whenever and wherever you need it, so whether Google Tensor is helping you use Motion Mode or bringing you higher quality translations, we can’t wait for you to try it out for yourself.