On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
- S. Bach, Alexander Binder, G. Montavon, F. Klauschen, K. Müller, W. Samek
- Computer Science · PLoS ONE
- 10 July 2015
This work proposes a general solution to the problem of understanding classification decisions through pixel-wise decomposition of nonlinear classifiers, introducing a methodology that makes it possible to visualize the contributions of single pixels to predictions, both for kernel-based classifiers over Bag-of-Words features and for multilayered neural networks.
Explaining nonlinear classification decisions with deep Taylor decomposition
Evaluating the Visualization of What a Deep Neural Network Has Learned
- W. Samek, Alexander Binder, G. Montavon, S. Lapuschkin, K. Müller
- Computer Science · IEEE Transactions on Neural Networks and Learning…
- 21 September 2015
This work introduces a general methodology, based on region perturbation, for evaluating ordered collections of pixels such as heatmaps, and shows that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
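The region-perturbation idea summarized above can be sketched in a few lines: remove pixels in decreasing order of their assigned relevance and track how quickly the model's score collapses. A steeper drop indicates a more faithful heatmap. This is a minimal illustration, not the paper's exact protocol; the function name, the `predict` callback, and the choice of a constant fill value are all assumptions for the sketch.

```python
import numpy as np

def pixel_flipping_curve(x, heatmap, predict, steps=10, fill=0.0):
    """Perturb pixels in decreasing order of relevance and record the
    model score after each batch of perturbations.

    x       : input array (e.g. an image), copied before modification
    heatmap : relevance scores, same shape as x
    predict : callable mapping an input to a scalar model score (assumed)
    fill    : value used to 'flip' a pixel (a simple stand-in for the
              paper's local perturbation scheme)
    """
    x = x.copy()
    order = np.argsort(heatmap.ravel())[::-1]   # most relevant pixels first
    scores = [predict(x)]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        x.ravel()[order[i:i + chunk]] = fill    # perturb this batch of pixels
        scores.append(predict(x))
    return scores
```

For a toy model whose score is just the sum of pixel values and a heatmap equal to the input, the curve decreases monotonically, since the most relevant (largest) pixels are removed first.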
Methods for interpreting and understanding deep neural networks
Explaining Recurrent Neural Network Predictions in Sentiment Analysis
This work applies a propagation rule for multiplicative connections, as they arise in recurrent architectures such as LSTMs and GRUs, to a word-based bidirectional LSTM model on a five-class sentiment-prediction task, and evaluates the resulting LRP relevances both qualitatively and quantitatively.
Layer-Wise Relevance Propagation: An Overview
This chapter gives a concise introduction to LRP with a discussion of how to implement propagation rules easily and efficiently, how the propagation procedure can be theoretically justified as a ‘deep Taylor decomposition’, how to choose the propagation rules at each layer to deliver high explanation quality, and how LRP can be extended to handle a variety of machine learning scenarios beyond deep neural networks.
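The chapter's point about implementing propagation rules easily and efficiently can be illustrated with the widely used LRP-ε rule for a single linear layer: relevance arriving at the layer's outputs is redistributed to its inputs in proportion to their contributions to the pre-activations, with a small stabilizer ε in the denominator. This is a hedged sketch of one common rule, not the full layer-by-layer procedure; variable names are illustrative.

```python
import numpy as np

def lrp_epsilon(a, W, R_out, eps=1e-6):
    """One LRP-epsilon backward step through a linear layer.

    a     : (d_in,) input activations to the layer
    W     : (d_in, d_out) weight matrix
    R_out : (d_out,) relevance arriving at the layer's outputs
    Returns the relevance (d_in,) redistributed onto the inputs.
    """
    z = a @ W                     # pre-activations of the forward pass
    z = z + eps * np.sign(z)      # epsilon stabilizer avoids division by zero
    s = R_out / z                 # per-output relevance per unit of activation
    return a * (W @ s)            # each input gets relevance proportional
                                  # to its contribution a_i * w_ij

# Toy usage: up to the epsilon term, relevance is conserved across the layer.
a = np.array([1.0, 2.0])
W = np.array([[0.5, -0.3], [0.2, 0.8]])
R = np.array([1.0, 0.0])
print(lrp_epsilon(a, W, R).sum())  # approximately 1.0 (conservation)
```

The conservation property shown in the usage example (incoming relevance ≈ outgoing relevance) is exactly what the deep Taylor view of LRP discussed in this chapter formalizes.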
Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers
This paper proposes an approach to extend layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks.
Unmasking Clever Hans predictors and assessing what machines really learn
- S. Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, W. Samek, K. Müller
- Computer Science · Nature Communications
- 26 February 2019
The authors investigate how these learning machines arrive at their decisions, in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
A Unifying Review of Deep and Shallow Anomaly Detection
This review aims to identify the common underlying principles and the assumptions that are often made implicitly by various deep-learning methods, draws connections between classic "shallow" and novel deep approaches, and shows how this relation might cross-fertilize or extend both directions.
"What is relevant in a text document?": An interpretable machine learning approach
A measure of model explanatory power is introduced, and it is shown that, although the SVM and CNN models perform similarly in terms of classification accuracy, the latter exhibits a higher level of explainability, making it more comprehensible for humans and potentially more useful for other applications.