Understanding the inner workings of neural networks

Neural networks are a powerful approach to machine learning, allowing computers to understand images, recognize speech, translate sentences, play Go, and much more. As much as we use neural networks in our technology at Google, there’s still more to learn about how these systems accomplish such feats. For example, neural networks can learn to recognize images far more accurately than any program we write directly, but we don’t really know exactly how they decide whether a dog in a picture is a Retriever, a Beagle, or a German Shepherd.

We’ve been working for several years to better grasp how neural networks operate. Last week we shared new research on how these techniques come together to give us a deeper understanding of why networks make the decisions they do—but first, let’s take a step back to explain how we got here.

Neural networks consist of a series of “layers,” and their understanding of an image evolves over the course of multiple layers. In 2015, we started a project called DeepDream to get a sense of what neural networks “see” at the different layers. It led to a much larger research project that would not only produce beautiful art, but also shed light on the inner workings of neural networks.

Outside Google, DeepDream grew into a small art movement producing all sorts of amazing things.
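To make that concrete, here is a minimal sketch of the DeepDream idea: start from noise (or a photo) and adjust the image by gradient ascent so that a chosen layer activates more strongly. The PyTorch and torchvision GoogLeNet setup below is our illustrative assumption, not the original DeepDream code, and the layer choice is arbitrary.

```python
# A minimal sketch of DeepDream-style gradient ascent, using PyTorch and
# torchvision's GoogLeNet for illustration (not the original DeepDream code).
import torch
import torchvision.models as models

model = models.googlenet(pretrained=True).eval()
for p in model.parameters():          # freeze the network; we only
    p.requires_grad_(False)           # optimize the input image

# Record the activations of one intermediate layer with a forward hook.
# "inception4a" is an arbitrary mid-network choice.
activations = {}
model.inception4a.register_forward_hook(
    lambda module, inputs, output: activations.update(layer=output))

# Start from random noise and nudge the image toward whatever patterns
# the chosen layer responds to.
image = torch.rand(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    model(image)
    loss = -activations["layer"].mean()   # ascend the mean activation
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0, 1)                # keep valid pixel values
```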

Last year, we shared new work on this subject, showing how techniques building on DeepDream—and lots of excellent research from our colleagues around the world—can help us explore how neural networks build up their understanding of images. We showed that neural networks build on earlier layers to detect more sophisticated ideas and eventually reach complex conclusions. For instance, early layers detect edges and textures, while later layers detect parts of objects.

The neural network first detects edges, then textures, patterns, parts, and objects.
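You can see this progression for yourself by pointing the same gradient-ascent loop at different depths. The sketch below wraps it in a function and runs it against an early, a middle, and a late layer; the layer names come from torchvision's GoogLeNet and are our illustrative choices.

```python
# Visualize what excites layers at different depths. Early layers tend to
# yield edge- and texture-like patterns; later layers yield object parts.
# Again a PyTorch/torchvision sketch, with arbitrary layer choices.
import torch
import torchvision.models as models

model = models.googlenet(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

def visualize(layer, steps=100, lr=0.05):
    """Return an image optimized to excite `layer`."""
    activations = {}
    handle = layer.register_forward_hook(
        lambda m, i, o: activations.update(layer=o))
    image = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        model(image)
        (-activations["layer"].mean()).backward()
        optimizer.step()
        with torch.no_grad():
            image.clamp_(0, 1)
    handle.remove()
    return image.detach()

# conv2 is early (edges/textures); inception3a is mid; inception4d is late.
for name in ["conv2", "inception3a", "inception4d"]:
    print(name, visualize(getattr(model, name)).shape)
```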

Last week we released another milestone in our research: an exploration of how different techniques for understanding neural networks fit together into a bigger picture.

This work, which we've published in the online journal Distill, explores how different techniques allow us to “stand in the middle of a neural network” and see how decisions made at an individual point influence the final output. For instance, we can see how a network detects a “floppy ear,” and how that detection increases the probability that the image will be labeled as a Labrador Retriever or Beagle.

In one example, we explore which neurons activate in response to different inputs—a kind of “MRI for neural networks.” The network has some floppy ear detectors that really like this dog!
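In spirit, that “MRI” is a forward pass with one layer's activations recorded, so you can see which channels respond most strongly to a given image. A rough sketch follows, again assuming torchvision's GoogLeNet; the image path is a placeholder.

```python
# Record one layer's activations for an input image and list the channels
# that fire most strongly. "dog.jpg" is a placeholder path; in the Distill
# article each channel would be paired with its feature visualization.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(pretrained=True).eval()

activations = {}
model.inception4d.register_forward_hook(
    lambda m, i, o: activations.update(layer=o))

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("dog.jpg")).unsqueeze(0)

with torch.no_grad():
    model(image)

# Average each channel over spatial positions and report the strongest
# responders; for a dog photo, some act like the "floppy ear" detectors
# described above.
per_channel = activations["layer"].mean(dim=(2, 3)).squeeze(0)
top = torch.topk(per_channel, k=5)
for idx, value in zip(top.indices.tolist(), top.values.tolist()):
    print(f"channel {idx}: mean activation {value:.3f}")
```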

We can also see how different neurons in the middle of the network—like those floppy ear detectors—affect the decision to classify an image as a Labrador Retriever or tiger cat.
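A simple first-order way to sketch this kind of attribution is gradient times activation: weight each intermediate channel's activation by the gradient of a class logit with respect to it. The code below is our approximation, not necessarily the method used in the paper; the ImageNet class indices, the layer choice, and the random input (a stand-in for a preprocessed photo) are all illustrative.

```python
# Gradient-times-activation attribution from an intermediate layer to two
# class logits: a first-order sketch, not necessarily the paper's method.
import torch
import torchvision.models as models

model = models.googlenet(pretrained=True).eval()

activations = {}
model.inception4d.register_forward_hook(
    lambda m, i, o: activations.update(layer=o))

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in photo
logits = model(image)

# ImageNet indices: 208 = Labrador retriever, 282 = tiger cat.
for name, class_idx in [("Labrador retriever", 208), ("tiger cat", 282)]:
    grads = torch.autograd.grad(logits[0, class_idx],
                                activations["layer"],
                                retain_graph=True)[0]
    # Channels with large positive scores (a floppy-ear detector, say)
    # pushed the network toward this class.
    scores = (grads * activations["layer"]).sum(dim=(2, 3)).squeeze(0)
    print(name, torch.topk(scores, k=3).indices.tolist())
```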

If you want to learn more, check out our interactive paper, published in Distill. We’ve also open-sourced our neural net visualization library, Lucid, so you can make these visualizations too.
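Getting started takes only a few lines. The snippet below closely follows Lucid's own tutorial: it loads the InceptionV1 graph and optimizes an image to excite one channel (the channel index is just the tutorial's example). Note that Lucid targets TensorFlow 1.x.

```python
# Visualize a single InceptionV1 channel with Lucid, following the
# library's tutorial notebook (TensorFlow 1.x required).
import lucid.modelzoo.vision_models as models
import lucid.optvis.render as render

model = models.InceptionV1()
model.load_graphdef()

# "mixed4a_pre_relu:476" names channel 476 of the mixed4a layer;
# render_vis runs the optimization and displays the result in a notebook.
_ = render.render_vis(model, "mixed4a_pre_relu:476")
```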
