Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps

@article{Kim2019WhyAS,
  title={Why are Saliency Maps Noisy? Cause of and Solution to Noisy Saliency Maps},
  author={Beomsu Kim and Junghoon Seo and Seunghyun Jeon and Jamyoung Koo and Jeongyeol Choe and Taegyun Jeon},
  journal={2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)},
  year={2019},
  pages={4149-4157}
}
Saliency Map, the gradient of the score function with respect to the input, is the most basic technique for interpreting deep neural network decisions. However, saliency maps are often visually noisy. Although several hypotheses have been proposed to account for this phenomenon, few works provide rigorous analyses of noisy saliency maps. In this paper, we first propose a new hypothesis that noise may occur in saliency maps when irrelevant features pass through ReLU activation…
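To make the object of study concrete, the saliency map referred to above (the gradient of the class score with respect to the input) can be computed in a few lines of PyTorch. This is a minimal sketch under assumed names (model, a (1, C, H, W) input tensor), not the authors' implementation:

import torch

def vanilla_saliency(model, x, target_class):
    # Saliency map: gradient of the target-class score with respect to the input.
    # Assumes model maps a (1, C, H, W) tensor to a vector of class scores.
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    # Reducing over color channels with abs/max is a common visualization
    # convention, not something the paper prescribes.
    return x.grad.abs().max(dim=1).values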
Citations

Removing Brightness Bias in Rectified Gradients
TLDR
It is demonstrated that dark areas of an input image are not highlighted by a saliency map using Rectified Gradients, even if they are relevant for the class or concept, and a brightness bias is identified.
Rethinking Positive Aggregation and Propagation of Gradients in Gradient-based Saliency Methods
TLDR
This work empirically shows that two approaches for handling gradient information, namely positive aggregation and positive propagation, break these methods, and proposes several variants of aggregation methods with positive handling of gradient information.
Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models
  • L. Brocki, N. C. Chung
  • Computer Science, Mathematics
  • 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)
  • 2019
TLDR
This work introduces a new method of obtaining saliency maps for latent representations of known or novel high-level concepts, often called concept vectors in generative models, and applies it to the latent space of a VAE trained on the CelebA dataset.
Relevance-CAM: Your Model Already Knows Where to Look
As neural networks are applied in ever more fields and continue to develop, the ability to explain deep learning models is also becoming increasingly important. …
Underwhelming Generalization Improvements From Controlling Feature Attribution
TLDR
This work describes a simple method for taking advantage of auxiliary labels: networks are trained to ignore distracting features that may be extracted outside the region of interest, on the training images for which such masks are available.
Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs
TLDR
This study uses the attributions that filter out irrelevant parts of the input features and then verifies the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.
Explaining Regression Based Neural Network Model
TLDR
Comparative results show that the proposed method, named AGRA for Accurate Gradient, outperforms state-of-the-art methods for locating time-steps where errors occur in the signal.
Explaining Neural Network Model for Regression
  • 2020
Explaining Neural Networks via Perturbing Important Learned Features
TLDR
This work proposes a novel input feature attribution method that finds an input perturbation that maximally changes the output neuron by exclusively perturbing important hidden neurons (i.e., learned features) on the path to the output neuron.
Glimpse: A Gaze-Based Measure of Temporal Salience
TLDR
Glimpse is a novel measure to compute temporal salience based on the observer spatio-temporal consistency of raw gaze data; it is conceptually simple, training-free, and provides a semantically meaningful quantification of visual attention over time.

References

Showing 1-10 of 28 references
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
TLDR
This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
SmoothGrad: removing noise by adding noise
TLDR
SmoothGrad is introduced, a simple method that can help visually sharpen gradient-based sensitivity maps, and lessons in the visualization of these maps are discussed.
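The mechanism is simple enough to sketch; the sample count and noise fraction below are illustrative choices, not the paper's recommended settings:

import torch

def smoothgrad(model, x, target_class, n_samples=25, noise_frac=0.15):
    # Average the input gradient over several noisy copies of the image;
    # averaging suppresses the high-frequency fluctuations of the raw gradient.
    sigma = noise_frac * (x.max() - x.min())
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        model(noisy)[0, target_class].backward()
        total += noisy.grad
    return (total / n_samples).abs().max(dim=1).values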
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
TLDR
This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
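A compact sketch of the mechanism, assuming a PyTorch classifier and a handle to its last convolutional layer; the hook-based capture is an illustrative implementation, not the authors' reference code:

import torch
import torch.nn.functional as F

def grad_cam(model, conv_layer, x, target_class):
    # Capture the forward activations and backward gradients of conv_layer,
    # then weight each feature map by its spatially averaged gradient.
    store = {}
    h1 = conv_layer.register_forward_hook(lambda m, i, o: store.update(act=o))
    h2 = conv_layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))
    model(x)[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)  # one weight per feature map
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    # Upsample the coarse map to the input resolution for overlaying on the image.
    return F.interpolate(cam, size=x.shape[-2:], mode="bilinear", align_corners=False)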
Visualizing and Understanding Convolutional Networks
TLDR
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models, used in a diagnostic role to find model architectures that outperform Krizhevsky et al on the ImageNet classification benchmark. Expand
Striving for Simplicity: The All Convolutional Net
TLDR
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
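The replacement described here is easy to illustrate; the channel counts and kernel sizes are arbitrary example values:

import torch.nn as nn

# Conventional block: convolution followed by 2x2 max-pooling for downsampling.
with_pool = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=2, stride=2),
)

# All-convolutional variant: the pooling layer is dropped and the convolution
# itself downsamples by using stride 2.
all_conv = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
)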
A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations
TLDR
This analysis shows that the backward ReLU introduced by GBP and DeconvNet, and the local connections in CNNs, are the two main causes of compelling visualizations.
Visualizing Higher-Layer Features of a Deep Network
TLDR
This paper contrasts and compares several techniques applied to Stacked Denoising Autoencoders and Deep Belief Networks, trained on several vision datasets, and shows that good qualitative interpretations of high-level features represented by such models are possible at the unit level.
Evaluating Feature Importance Estimates
TLDR
ROAR (RemOve And Retrain) is introduced, a benchmark to evaluate the accuracy of interpretability methods that estimate input feature importance in deep neural networks; averaging a set of squared noisy estimators leads to significant gains in accuracy for each method considered.
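The perturbation step at the heart of the benchmark can be sketched as follows; the function name and constant fill value are assumptions, and the retraining loop that gives ROAR its name is omitted:

import torch

def degrade_top_k(x, attribution, k=0.3, fill_value=0.0):
    # Replace the fraction k of pixels ranked most important by the attribution map.
    # Assumes x has shape (1, C, H, W) and attribution has shape (H, W).
    # ROAR then retrains the model from scratch on such degraded images and
    # measures how much test accuracy drops.
    flat = attribution.flatten()
    idx = flat.topk(int(k * flat.numel())).indices
    out = x.clone().flatten(start_dim=2)   # (1, C, H*W)
    out[..., idx] = fill_value             # the paper uses a per-channel mean instead
    return out.reshape(x.shape)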
Evaluating the Visualization of What a Deep Neural Network Has Learned
TLDR
A general methodology based on region perturbation is proposed for evaluating ordered collections of pixels such as heatmaps, and it is shown that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
Towards better understanding of gradient-based attribution methods for Deep Neural Networks
TLDR
This work analyzes four gradient-based attribution methods, formally proves conditions of equivalence and approximation between them, and constructs a unified framework which enables a direct comparison as well as an easier implementation.