NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks

Haekyu Park, Nilaksh Das, Rahul Duggal, Austin P. Wright, Omar Shaikh, Fred Hohman, Duen Horng Chau
IEEE Transactions on Visualization and Computer Graphics
Existing research on making sense of deep neural networks often focuses on neuron-level interpretation, which may not adequately capture the bigger picture of how concepts are collectively encoded by multiple neurons. We present NeuroCartography, an interactive system that scalably summarizes and visualizes concepts learned by neural networks. It automatically discovers and groups neurons that detect the same concepts, and describes how such neuron groups interact to form higher-level concepts…


ConceptExplainer: Interactive Explanation for Deep Neural Networks from a Concept Perspective.

ConceptExplainer is a visual analytics system that enables non-expert users to interactively probe and explore the concept space to explain model behavior at the instance, class, and global levels.

Neural Activation Patterns (NAPs): Visual Explainability of Learned Concepts

A framework for extracting NAPs from pre-trained models is released, along with a visual introspection tool for analyzing them.

ScrutinAI: A Visual Analytics Approach for the Semantic Analysis of Deep Neural Network Predictions

ScrutinAI is a visual analytics approach that exploits semantic understanding to analyze deep neural network (DNN) predictions, focusing on models for object detection and semantic segmentation; it helps analysts use their semantic understanding to identify and investigate potential weaknesses in DNN models.

ConceptEvo: Interpreting Concept Evolution in Deep Learning Training

ConceptEvo is a general interpretation framework for DNNs that reveals the inception and evolution of concepts detected during training; it discovers evolution patterns across different models that are meaningful to humans, helpful for early-training intervention decisions, and important to the prediction for a given class.

In Defence of Visual Analytics Systems: Replies to Critics

The last decade has witnessed many visual analytics (VA) systems that make successful applications to wide-ranging domains like urban analytics and explainable AI. However, their research rigor and…



Net2Vec: Quantifying and Explaining How Concepts are Encoded by Filters in Deep Neural Networks

  • Ruth Fong, Andrea Vedaldi
  • Computer Science
    2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2018
The Net2Vec framework is introduced, in which semantic concepts are mapped to vectorial embeddings based on corresponding filter responses. It is shown that in most cases multiple filters are required to code for a concept, and that filter embeddings better characterize the meaning of a representation and its relationship to other concepts.
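The core Net2Vec idea — a concept is represented by a learned weight vector over filters rather than a single filter — can be illustrated with a minimal sketch. All data, sizes, and the plain gradient-descent logistic regression below are hypothetical stand-ins; the paper learns such weights from real filter activation maps and concept segmentations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names/data hypothetical): 8 filters, 200 spatial positions,
# each position labeled with a binary concept. Two filters carry the signal,
# illustrating Net2Vec's finding that concepts usually span multiple filters.
n_filters, n_pos = 8, 200
concept = rng.integers(0, 2, n_pos)
acts = rng.normal(size=(n_pos, n_filters))
acts[:, 2] += 1.5 * concept   # filter 2 partially encodes the concept
acts[:, 5] += 1.5 * concept   # filter 5 encodes the rest

# Learn a concept weighting w over filters with logistic regression (GD).
w, b = np.zeros(n_filters), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(acts @ w + b)))
    w -= 0.5 * (acts.T @ (p - concept) / n_pos)
    b -= 0.5 * np.mean(p - concept)

embedding = w / np.linalg.norm(w)   # the concept's "filter embedding"
top_two = set(np.argsort(-np.abs(embedding))[:2])
print(top_two)  # the two informative filters dominate
```

The normalized weight vector plays the role of the concept's embedding: concepts with similar embeddings recruit similar filter combinations.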

Synthesizing the preferred inputs for neurons in neural networks via deep generator networks

This work dramatically improves the qualitative state of the art of activation maximization by harnessing a powerful learned prior: a deep generator network (DGN) that produces synthetic preferred inputs which look almost real.
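The mechanism is gradient ascent on the generator's latent code rather than on raw pixels. A minimal sketch, with a fixed linear map standing in for the deep generator and a linear unit standing in for the target neuron (both hypothetical, so the gradients are analytic):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins (hypothetical): the "generator" is G(z) = G @ z and the
# target neuron is a linear unit a(x) = v . x. The real method ascends the
# latent code of a deep generator through a deep classifier.
G = rng.normal(size=(32, 8))          # latent (8) -> "image" (32)
v = rng.normal(size=32)               # target neuron's weights

z = np.zeros(8)
for _ in range(100):
    grad_z = G.T @ v                  # d(v . Gz)/dz, analytic here
    z += 0.1 * grad_z                 # gradient ascent on the activation
    z *= min(1.0, 3.0 / (np.linalg.norm(z) + 1e-12))  # stay in the prior's range

activation = v @ (G @ z)
print(activation)  # positive: ascent increased the neuron's activation
```

Clamping the latent norm is a crude stand-in for the learned prior that keeps the synthesized input on the natural-image manifold.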

Understanding the role of individual units in a deep neural network

This work presents network dissection, an analytic framework to systematically identify the semantics of individual hidden units within image classification and image generation networks, and applies it to understanding adversarial attacks and to semantic image editing.
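Network dissection's scoring step reduces to a simple overlap measure: threshold a unit's activation map at a high quantile and compute intersection-over-union with a concept's segmentation mask. A minimal sketch with hypothetical toy data:

```python
import numpy as np

# Network dissection, in miniature: threshold a unit's activation map and
# score its overlap with a concept segmentation mask via IoU.
def unit_concept_iou(act_map, concept_mask, quantile=0.8):
    thresh = np.quantile(act_map, quantile)
    unit_mask = act_map > thresh
    inter = np.logical_and(unit_mask, concept_mask).sum()
    union = np.logical_or(unit_mask, concept_mask).sum()
    return inter / union if union else 0.0

# A unit that fires exactly on the concept region scores IoU = 1.
concept = np.zeros((8, 8), dtype=bool)
concept[2:4, 2:4] = True                  # 4-pixel "concept" region
act = np.where(concept, 5.0, 0.0)         # unit fires only on the concept
print(unit_concept_iou(act, concept))     # 1.0
```

In the full method the threshold is fixed per unit over a whole dataset, and a unit is labeled with the concept whose IoU is highest above a cutoff.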

CNNPruner: Pruning Convolutional Neural Networks with Visual Analytics

The effectiveness of CNNPruner, a visual analytics approach that considers the importance of convolutional filters through both instability and sensitivity, and allows users to interactively create pruning plans according to a desired goal on model size or accuracy, is validated.

Understanding Neural Networks Through Deep Visualization

This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
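Grad-CAM weights each feature map by its pooled gradient and ReLUs the weighted sum. A minimal sketch with a hypothetical linear classifier over global-average-pooled feature maps, where the gradient of the class score with respect to feature map k is exactly w[k] / (H·W), so no autodiff is needed:

```python
import numpy as np

# Grad-CAM in miniature (all shapes/weights hypothetical). With a linear
# head over global-average-pooled maps, the pooled gradient for map k is
# alpha_k = w[k] / (H*W); the localization map is ReLU(sum_k alpha_k * A_k).
def grad_cam_linear_head(feat_maps, class_weights):
    k, h, w = feat_maps.shape
    alphas = class_weights / (h * w)          # pooled gradients, analytic here
    cam = np.tensordot(alphas, feat_maps, axes=1)
    return np.maximum(cam, 0.0)               # ReLU keeps positive evidence only

feats = np.zeros((2, 4, 4))
feats[0, 1, 1] = 1.0       # map 0 fires at (1, 1)
feats[1, 3, 3] = 1.0       # map 1 fires at (3, 3)
w = np.array([2.0, -1.0])  # class supported by map 0, inhibited by map 1
cam = grad_cam_linear_head(feats, w)
print(np.unravel_index(cam.argmax(), cam.shape))  # (1, 1): the supporting region
```

The ReLU is what makes the map class-discriminative: the inhibitory evidence at (3, 3) is zeroed out rather than highlighted.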

Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned By Each Neuron in Deep Neural Networks

An algorithm is introduced that explicitly uncovers the multiple facets of each neuron by separately synthesizing a visualization of each type of image the neuron fires in response to.

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)

Concept Activation Vectors (CAVs) are introduced, which provide an interpretation of a neural net's internal state in terms of human-friendly concepts, and may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
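The TCAV recipe can be sketched compactly. The paper obtains a CAV by training a linear classifier between concept and random activations; the sketch below uses the simpler difference-of-means direction instead, and a hand-picked linear logit, so everything (data, weights) is a hypothetical stand-in:

```python
import numpy as np

rng = np.random.default_rng(2)

# TCAV in miniature (data hypothetical). CAV approximated as the difference
# of mean activations between concept examples and random examples.
concept_acts = rng.normal(loc=[2.0, 0.0, 0.0], size=(50, 3))
random_acts = rng.normal(loc=[0.0, 0.0, 0.0], size=(50, 3))
cav = concept_acts.mean(0) - random_acts.mean(0)
cav /= np.linalg.norm(cav)

# For a linear logit s(a) = u . a, the gradient w.r.t. the activation is u,
# so the directional derivative along the CAV is u . cav -- identical for
# every input here, making the TCAV score (fraction positive) 0 or 1.
u = np.array([1.0, 0.5, 0.0])       # class logit weights (hypothetical)
tcav_score = float(u @ cav > 0)     # 1.0: the concept pushes the class up
print(tcav_score)
```

With a real network the gradient varies per input, and the TCAV score is the fraction of class examples whose directional derivative along the CAV is positive.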

Striving for Simplicity: The All Convolutional Net

It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
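The substitution is purely structural: a 2×2 max-pool with stride 2 and a 2×2 convolution with stride 2 traverse the same patches and produce feature maps of the same spatial size, so the fixed pooling operator can be replaced by a learnable one. A toy NumPy sketch (sizes and kernel hypothetical):

```python
import numpy as np

# One helper walks 2x2 stride-2 patches and applies either max (pooling)
# or a weighted sum (strided convolution) -- same geometry either way.
def pool_or_conv_2x2_stride2(x, kernel=None):
    h, w = x.shape
    out = np.empty((h // 2, w // 2))
    for i in range(h // 2):
        for j in range(w // 2):
            patch = x[2 * i:2 * i + 2, 2 * j:2 * j + 2]
            out[i, j] = patch.max() if kernel is None else (patch * kernel).sum()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
pooled = pool_or_conv_2x2_stride2(x)                          # max-pooling
strided = pool_or_conv_2x2_stride2(x, np.full((2, 2), 0.25))  # strided conv (mean kernel)
print(pooled.shape, strided.shape)  # both (2, 2)
```

The paper's point is that once the kernel is learned per channel, the network can recover (or beat) the pooling network's accuracy with a uniform all-convolutional architecture.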

Analyzing the Noise Robustness of Deep Neural Networks

A visual analytics approach that explains the primary cause of the wrong predictions introduced by adversarial examples; datapath extraction is formulated as a subset selection problem and approximately solved based on back-propagation.