On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
- S. Bach, Alexander Binder, G. Montavon, F. Klauschen, K. Müller, W. Samek
- Computer Science · PLoS ONE
- 10 July 2015
This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that visualizes the contributions of single pixels to predictions, both for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
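The core mechanism of layer-wise relevance propagation is a conservation rule that redistributes a prediction's relevance backwards, layer by layer, in proportion to each input's contribution to the pre-activation. A minimal sketch of the ε-stabilized rule for a single dense layer is shown below; the function name and toy weights are illustrative, not the paper's code.

```python
import numpy as np

# Hedged sketch of the LRP epsilon-rule for one dense layer with
# pre-activations z_k = sum_j a_j * w_jk (names are illustrative).
def lrp_dense(a, W, R_out, eps=1e-9):
    """Redistribute output relevance R_out onto the layer's inputs."""
    z = a @ W                      # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)       # stabilizer against division by zero
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (W @ s)             # input relevance: a_j * sum_k w_jk * s_k

# Toy example: total relevance is conserved across the layer
# (up to the tiny stabilizer term).
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.3, -0.2], [0.1, 0.4], [-0.5, 0.2]])
R_out = np.array([0.6, 0.4])
R_in = lrp_dense(a, W, R_out)
```

Applying this rule from the output layer down to the input yields the pixel-wise relevance heatmap; the conservation property is what makes the per-pixel scores interpretable as a decomposition of the classifier's decision.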
Deep One-Class Classification
This paper introduces a new anomaly detection method, Deep Support Vector Data Description, which is trained on an anomaly-detection-based objective, and shows the effectiveness of the method on the MNIST and CIFAR-10 image benchmark datasets as well as on the detection of adversarial examples of GTSRB stop signs.
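The Deep SVDD objective can be sketched as minimizing the mean squared distance of embedded points to a fixed center, so that normal data maps into a compact hypersphere and distance from the center serves as an anomaly score. The sketch below substitutes an illustrative linear map for a trained deep encoder; all names and parameters are assumptions, not the paper's implementation.

```python
import numpy as np

# Hedged sketch of a Deep-SVDD-style one-class objective: pull embeddings
# toward a center c, with weight decay as regularization. A linear map
# phi(x) = x @ W stands in for a deep network (illustrative only).
def deep_svdd_loss(X, W, c, weight_decay=1e-3):
    Z = X @ W                                  # embeddings
    dist = np.sum((Z - c) ** 2, axis=1)        # squared distance to center
    return dist.mean() + weight_decay * np.sum(W ** 2)

def anomaly_score(x, W, c):
    # Test-time score: distance of a point's embedding from the center.
    return float(np.sum((x @ W - c) ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # "normal" training data
W = rng.normal(size=(4, 2)) * 0.1
c = (X @ W).mean(axis=0)                       # center = mean embedding
loss = deep_svdd_loss(X, W, c)
score = anomaly_score(X[0], W, c)
```

In the full method the center is fixed before training and the network weights are optimized by gradient descent; the key design point is that the objective itself is an anomaly detection criterion rather than a reconstruction or classification proxy.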
Explaining nonlinear classification decisions with deep Taylor decomposition
Evaluating the Visualization of What a Deep Neural Network Has Learned
- W. Samek, Alexander Binder, G. Montavon, S. Lapuschkin, K. Müller
- Computer Science · IEEE Transactions on Neural Networks and Learning…
- 21 September 2015
This work proposes a general methodology based on region perturbation for evaluating ordered collections of pixels, such as heatmaps, and shows that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
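The region-perturbation idea can be sketched concretely: perturb input regions in decreasing order of heatmap relevance and track how quickly the classifier's score drops — a faithful heatmap produces a steep drop. The toy scorer and function name below are assumptions for illustration, not the paper's benchmark code.

```python
import numpy as np

# Hedged sketch of region-perturbation evaluation: remove "regions"
# (here, single features) most-relevant-first and record the score curve.
def perturbation_curve(x, relevance, score_fn, n_steps=10):
    order = np.argsort(-relevance)             # most relevant first
    x_pert = x.copy()
    scores = [score_fn(x_pert)]
    for i in order[:n_steps]:
        x_pert[i] = 0.0                        # perturb one region
        scores.append(score_fn(x_pert))
    return np.array(scores)

# Toy linear "classifier" whose exact relevance heatmap we know.
w = np.array([3.0, 1.0, 0.1, 2.0])
x = np.ones(4)
score_fn = lambda v: float(w @ v)
relevance = w * x                              # a perfect heatmap here
curve = perturbation_curve(x, relevance, score_fn, n_steps=4)
```

Averaging the per-step score drop over a dataset gives a single quality number for a heatmapping method, which is how ordered-perturbation curves let different explanation algorithms be compared quantitatively.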
Deep Semi-Supervised Anomaly Detection
This work presents Deep SAD, an end-to-end deep methodology for general semi-supervised anomaly detection, and introduces an information-theoretic framework for deep anomaly detection based on the idea that the entropy of the latent distribution for normal data should be lower than the entropy of the anomalous distribution, which can serve as a theoretical interpretation of the method.
Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers
This paper proposes an approach to extend layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks.
Unmasking Clever Hans predictors and assessing what machines really learn
- S. Lapuschkin, S. Wäldchen, Alexander Binder, G. Montavon, W. Samek, K. Müller
- Computer Science · Nature Communications
- 26 February 2019
The authors investigate how these methods approach learning in order to assess the dependability of their decision making and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
The SHOGUN Machine Learning Toolbox
This paper presents a machine learning toolbox designed for unified large-scale learning across a broad range of feature types and learning settings, offering a considerable number of machine learning models such as support vector machines, hidden Markov models, multiple kernel learning, and linear discriminant analysis.
Understanding and Comparing Deep Neural Networks for Age and Gender Classification
- W. Samek, Alexander Binder, S. Lapuschkin, K. Müller
- Computer Science · IEEE International Conference on Computer Vision…
- 25 August 2017
This work compares four popular neural network architectures, studies the effect of pretraining, evaluates the robustness of the considered alignment preprocessing steps via cross-method test-set swapping, and intuitively visualizes the models' prediction strategies under the given preprocessing conditions using the recent Layer-wise Relevance Propagation (LRP) algorithm.
Analyzing Classifiers: Fisher Vectors and Deep Neural Networks
- S. Bach, Alexander Binder, G. Montavon, K. Müller, W. Samek
- Computer Science · IEEE Conference on Computer Vision and Pattern…
- 1 December 2015
This paper extends the Layer-wise Relevance Propagation (LRP) framework to Fisher vector classifiers and uses it as an analysis tool to quantify the importance of context for classification, qualitatively compare DNNs against FV classifiers in terms of important image regions, and detect potential flaws and biases in the data.