Corpus ID: 224803693

Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability

@article{Phang2020InvestigatingAS,
  title={Investigating and Simplifying Masking-based Saliency Methods for Model Interpretability},
  author={Jason Phang and Jungkyu Park and Krzysztof J. Geras},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.09750}
}
Saliency maps that identify the most informative regions of an image for a classifier are valuable for model interpretability. A common approach to creating saliency maps involves generating input masks that mask out portions of an image to maximally deteriorate classification performance, or mask in portions of an image to preserve classification performance. Many variants of this approach have been proposed in the literature, such as counterfactual generation and optimizing over a Gumbel-Softmax… 
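To make the mask-out variant concrete, here is a minimal sketch assuming a frozen PyTorch classifier `model` that maps a (1, 3, H, W) image to logits; the blurred baseline, 1/8-resolution mask, step count, and sparsity weight `lam` are illustrative assumptions rather than the paper's exact objective.

```python
# Minimal mask-out saliency sketch: optimize a coarse mask so that replacing
# the masked region with a blurred baseline lowers the target-class score,
# while a sparsity penalty keeps the mask small. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F

def mask_out_saliency(model, image, target_class, steps=200, lr=0.1, lam=0.01):
    model.eval()
    _, _, h, w = image.shape
    # Blurred copy acts as the "content removed" reference.
    baseline = F.avg_pool2d(image, kernel_size=11, stride=1, padding=5)
    # Coarse mask parameters; sigmoid keeps mask values in [0, 1].
    mask_logits = torch.zeros(1, 1, h // 8, w // 8, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        mask = torch.sigmoid(
            F.interpolate(mask_logits, size=(h, w), mode="bilinear", align_corners=False)
        )
        perturbed = image * (1 - mask) + baseline * mask  # mask = 1 means "deleted"
        score = F.softmax(model(perturbed), dim=1)[0, target_class]
        loss = score + lam * mask.mean()  # drop the class score, keep the mask small
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return torch.sigmoid(
            F.interpolate(mask_logits, size=(h, w), mode="bilinear", align_corners=False)
        )
```

The mask-in variant flips the objective: maximize the score of the image composited with the mask while still penalizing mask area.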

New Definitions and Evaluations for Saliency Methods: Staying Intrinsic, Complete and Sound

A simple saliency method is described that matches or outperforms prior methods in the evaluations and suggests new intrinsic justifications, based on soundness, for popular heuristic tricks such as TV regularization and upsampling.

Attribution Mask: Filtering Out Irrelevant Features By Recursively Focusing Attention on Inputs of DNNs

This study uses the attributions that filter out irrelevant parts of the input features and then verifies the effectiveness of this approach by measuring the classification accuracy of a pre-trained DNN.

Tell Me, What Do You See?—Interpretable Classification of Wiring Harness Branches with Deep Neural Networks

This work proposes several neural network architectures, tested on a novel dataset and equipped with saliency maps that give the user in-depth insight into the classifier’s operation, explaining the responses of the deep neural network and making its predictions interpretable to humans.

Linear Connectivity Reveals Generalization Strategies

This work demonstrates how the geometry of the loss surface can guide models towards different heuristic functions, and, by measuring performance on specially-crafted diagnostic datasets, shows that these clusters correspond to different generalization strategies.

References

Showing 1–10 of 37 references.

Sanity Checks for Saliency Maps

It is shown that some existing saliency methods are independent both of the model and of the data generating process, and methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model.

Real Time Image Saliency for Black Box Classifiers

A masking model is trained to manipulate the scores of the classifier by masking salient parts of the input image; it generalises well to unseen images and requires only a single forward pass to perform saliency detection, making it suitable for use in real-time systems.

Explaining Image Classifiers by Counterfactual Generation

This work samples plausible image in-fills by conditioning a generative model on the rest of the image, and optimizes to find the image regions that most change the classifier's decision once in-filled.

Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps

This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
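For reference, the gradient-based saliency map from this paper reduces to a few lines in a framework like PyTorch; the channel-wise maximum of the absolute input gradient follows the paper, while the rest is a minimal sketch under the same (1, 3, H, W) input assumption as above.

```python
# Minimal vanilla-gradient saliency sketch: the map is the per-pixel magnitude
# of the target class score's gradient with respect to the input image.
import torch

def gradient_saliency(model, image, target_class):
    """image: (1, 3, H, W) tensor; returns an (H, W) saliency map."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]
    score.backward()
    # Maximum absolute gradient across colour channels, as in the paper.
    return image.grad.abs().max(dim=1).values.squeeze(0)
```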

Generative Image Inpainting with Contextual Attention

This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.

SmoothGrad: removing noise by adding noise

SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, is introduced, and lessons in the visualization of these maps are discussed.
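A minimal sketch of the idea follows: average gradient-magnitude maps over several noisy copies of the input. The sample count and noise level here are illustrative defaults (the paper expresses the noise as a fraction of the input's dynamic range), and averaging magnitudes rather than raw gradients is one common implementation choice.

```python
# Minimal SmoothGrad sketch: average per-sample gradient-magnitude maps over
# noisy copies of the image. n_samples and noise_std are illustrative choices.
import torch

def smoothgrad(model, image, target_class, n_samples=25, noise_std=0.15):
    """image: (1, 3, H, W) tensor; returns an (H, W) smoothed saliency map."""
    model.eval()
    total = torch.zeros(image.shape[-2], image.shape[-1])
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).detach().requires_grad_(True)
        score = model(noisy)[0, target_class]
        score.backward()
        # Accumulate the channel-wise max of the absolute gradient per sample.
        total += noisy.grad.abs().max(dim=1).values.squeeze(0)
    return total / n_samples
```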

Classifier-agnostic saliency map extraction

RISE: Randomized Input Sampling for Explanation of Black-box Models

The problem of Explainable AI for deep neural networks that take images as input and output a class probability is addressed, and an approach called RISE is proposed that generates an importance map indicating how salient each pixel is for the model's prediction.
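A minimal sketch of the random-masking idea follows: probe the black-box model with randomly masked copies of the input and weight each mask by the resulting class probability. The grid size, number of masks, and keep probability are illustrative, and the random shifts the paper applies when upsampling masks are omitted for brevity.

```python
# Minimal RISE-style sketch: Monte Carlo estimate of per-pixel importance from
# randomly masked inputs. grid, n_masks, and p_keep are illustrative choices.
import torch
import torch.nn.functional as F

def rise_saliency(model, image, target_class, n_masks=500, grid=7, p_keep=0.5):
    """image: (1, 3, H, W) tensor; returns an (H, W) importance map."""
    model.eval()
    _, _, h, w = image.shape
    importance = torch.zeros(h, w)
    with torch.no_grad():
        for _ in range(n_masks):
            # Low-resolution binary mask, upsampled into a smooth full-size mask.
            coarse = (torch.rand(1, 1, grid, grid) < p_keep).float()
            mask = F.interpolate(coarse, size=(h, w), mode="bilinear",
                                 align_corners=False)[0, 0]
            prob = F.softmax(model(image * mask), dim=1)[0, target_class]
            importance += prob * mask
    # Normalize by the expected number of times each pixel is kept.
    return importance / (n_masks * p_keep)
```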

Grad-CAM: Why did you say that? Visual Explanations from Deep Networks via Gradient-based Localization

It is shown that Guided Grad-CAM helps untrained users successfully discern a "stronger" deep network from a "weaker" one even when both networks make identical predictions, and also exposes the somewhat surprising insight that common CNN + LSTM models can be good at localizing discriminative input image regions despite not being trained on grounded image-text pairs.

Interpretable Explanations of Black Boxes by Meaningful Perturbation

A general framework for learning different kinds of explanations for any black-box algorithm is proposed, and the framework is specialised to find the part of an image most responsible for a classifier's decision.