Understanding neural networks through deep visualization

Main Figure: 

Figure 1: Visualizations of different units on layer fc8, the 1000-dimensional output of a convolutional neural network just before the final softmax. In nearly every case, one can guess what class a neuron represents by looking at these images.

Author(s): 
Yosinski J
Clune J
Nguyen A
Fuchs T
Lipson H
Year: 
2015
Abstract: 

Recent years have produced great advances in training large, deep neural networks (DNNs), including notable successes in training convolutional neural networks (convnets) to recognize natural images. However, our understanding of how these models work, especially what computations they perform at intermediate layers, has lagged behind. Progress in the field will be further accelerated by the development of better tools for visualizing and interpreting neural nets. We introduce two such tools here. The first is a tool that visualizes the activations produced on each layer of a trained convnet as it processes an image or video (e.g. a live webcam stream). We have found that looking at live activations that change in response to user input helps build valuable intuitions about how convnets work. The second tool enables visualizing features at each layer of a DNN via regularized optimization in image space. Because previous versions of this idea produced less recognizable images, here we introduce several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations. Both tools are open source and work on a pre-trained convnet with minimal setup.
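The second tool's approach, finding an image that maximally activates a chosen unit via regularized gradient ascent in image space, can be sketched on a toy objective. In the real tool the variable is an image and the objective is a convnet neuron's activation, with regularizers such as L2 decay combining to keep the result interpretable; the linear "activation" `a(x) = w . x`, the `weights` vector, and all hyperparameters below are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of activation maximization by regularized gradient ascent.
# In the visualization tool, x would be an image and the objective a
# convnet unit's activation; here a linear stand-in a(x) = w . x is used
# so the example stays self-contained.

def maximize_activation(weights, steps=200, lr=0.5, l2_decay=0.1):
    """Ascend a(x) = w . x with an L2-decay regularizer.

    Update rule: x <- x + lr * (grad a(x) - l2_decay * x).
    For this toy objective the regularized optimum is x* = w / l2_decay.
    """
    x = [0.0] * len(weights)  # start from a blank "image"
    for _ in range(steps):
        # grad of the linear activation is just w; the decay term pulls
        # x toward zero, which keeps the optimum bounded
        x = [xi + lr * (wi - l2_decay * xi) for xi, wi in zip(x, weights)]
    return x

img = maximize_activation([1.0, -2.0, 0.5])
# img converges toward w / l2_decay = [10.0, -20.0, 5.0]
```

With a real network the gradient of the activation with respect to the input is obtained by backpropagation, and the paper layers several regularizers (not just L2 decay) to produce recognizable images.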



Pub. Info: 
ICML Deep Learning workshop