Saving you bandwidth through machine learning

Photographers of all specialties, skills and genres have long made their home on Google+, sharing their work with a supportive community. Whether the subject is toys, travel or street art, each photo has a unique story to tell, and deserves to be viewed at the best possible resolution.

Traditionally, viewing images at high resolution has meant using lots of bandwidth, leading to slower loading speeds and higher data costs. For many folks, especially those in places where data is pricey or the internet is spotty, this is a significant concern.

To help everyone see the beautiful photos that photographers share to Google+ in their full glory, we’ve turned to machine learning and a new technology called RAISR. Introduced in November, RAISR uses machine learning to produce great-quality versions of low-resolution images, allowing you to see beautiful photos as the photographers intended them to be seen. By using RAISR to display some of the large images on Google+, we’ve been able to cut bandwidth by up to 75 percent per image it’s applied to.
How RAISR works

While we’ve only begun to roll this out for high-resolution images when they appear in the streams of a subset of Android devices, we’re already applying RAISR to more than 1 billion images per week, reducing these users’ total bandwidth by about a third. In the coming weeks we plan to roll this technology out more broadly — and we’re excited to see what further time and data savings we can offer.
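This post doesn’t walk through the algorithm itself, but the idea behind RAISR, as described in the research announcement, is roughly this: start from a cheap upscale of the low-resolution image, then re-estimate each pixel with a small filter learned from pairs of low- and high-resolution photos, choosing the filter from a hash of the local gradient structure. The Python sketch below illustrates that shape of the computation; the function names, the single gradient-angle hash and the placeholder sharpening filters are simplifications for illustration, not Google’s production code.

```python
import numpy as np


def cheap_upscale(img: np.ndarray, scale: int) -> np.ndarray:
    """Cheap baseline interpolation: nearest-neighbor upscaling."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)


def gradient_bucket(window: np.ndarray, n_buckets: int = 8) -> int:
    """Hash a patch by its dominant gradient angle (a simplified stand-in
    for RAISR's angle/strength/coherence hash)."""
    gy, gx = np.gradient(window.astype(np.float64))
    angle = np.arctan2(gy.mean(), gx.mean()) % np.pi
    return min(int(angle / np.pi * n_buckets), n_buckets - 1)


def raisr_like_upscale(img, scale, filters, patch=5):
    """Upscale cheaply, then re-estimate each pixel with the learned
    filter selected by its patch's gradient bucket."""
    up = cheap_upscale(img, scale).astype(np.float64)
    out = np.empty_like(up)
    r = patch // 2
    padded = np.pad(up, r, mode="edge")
    for y in range(up.shape[0]):
        for x in range(up.shape[1]):
            window = padded[y:y + patch, x:x + patch]
            f = filters[gradient_bucket(window)]
            out[y, x] = np.clip((window * f).sum(), 0, 255)
    return out.astype(np.uint8)


# In the real system the per-bucket filters are learned offline from pairs
# of low- and high-resolution photos; here a mild sharpening kernel stands
# in for every bucket so the sketch runs end to end.
sharpen = -np.ones((5, 5)) / 25.0
sharpen[2, 2] += 2.0
filters = [sharpen] * 8

low_res = (np.random.rand(32, 32) * 255).astype(np.uint8)
high_res = raisr_like_upscale(low_res, scale=2, filters=filters)
```

Grouping pixels by their local edge structure is what lets a small set of filters recover convincing detail: edges of different orientations get different filters, while smooth regions get something close to plain interpolation.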
