Publications
On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
TLDR
This work proposes a general solution to the problem of understanding classification decisions by pixel-wise decomposition of nonlinear classifiers, introducing a methodology that makes it possible to visualize the contributions of single pixels to predictions, both for kernel-based classifiers over Bag of Words features and for multilayered neural networks.
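A minimal sketch of one layer-wise relevance propagation (LRP) backward step for a fully connected layer, assuming the epsilon-stabilized rule; the layer shapes, stabilizer value, and toy data are illustrative, not the paper's exact configuration.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Redistribute output relevance R_out onto the layer inputs a.

    a:     (d_in,)       activations entering the layer
    W:     (d_in, d_out) weight matrix
    b:     (d_out,)      bias
    R_out: (d_out,)      relevance assigned to the layer outputs
    """
    z = a @ W + b                  # forward pre-activations
    z = z + eps * np.sign(z)       # stabilize small denominators
    s = R_out / z                  # relevance per unit of pre-activation
    return a * (W @ s)             # each input gets its share a_i * w_ij * s_j

# Toy usage: push the relevance of a 3-unit output back to 4 inputs.
rng = np.random.default_rng(0)
a = rng.random(4)
W = rng.normal(size=(4, 3))
R_in = lrp_epsilon(a, W, np.zeros(3), np.array([1.0, 0.0, 0.0]))
print(R_in, R_in.sum())            # relevance is approximately conserved
```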
Explaining nonlinear classification decisions with deep Taylor decomposition
TLDR
A novel methodology for interpreting generic multilayer neural networks is introduced: the network's classification decision is decomposed into contributions of its input elements by backpropagating explanations from the output layer to the input layer.
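A sketch of the z+ rule, one of the propagation rules derived from deep Taylor decomposition for ReLU layers: relevance flows back only along positive weight contributions. The shapes and the small stabilizer are assumptions for illustration.

```python
import numpy as np

def deep_taylor_zplus(a, W, R_out, eps=1e-9):
    """z+ rule: decompose R_out over inputs via positive contributions only."""
    Wp = np.maximum(W, 0.0)        # keep positive weights only
    z = a @ Wp + eps               # positive pre-activation per output unit
    s = R_out / z
    return a * (Wp @ s)            # input i receives sum_j a_i w_ij^+ s_j
```

Restricting the decomposition to positive contributions keeps the redistributed relevance non-negative, which matches the ReLU domain the rule is derived for.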
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
TLDR
A deep neural network-based approach to image quality assessment (IQA) is presented that allows for joint learning of local quality and local weights in a unified framework; it generalizes well between different databases, indicating high robustness of the learned features.
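A toy illustration of the joint local-quality/local-weight idea: per-patch quality scores are pooled with learned non-negative patch weights. The values below are random stand-ins; in the paper both quantities are produced by a CNN.

```python
import numpy as np

def pooled_quality(q, w, eps=1e-8):
    """Weighted-average pooling of per-patch qualities q with weights w."""
    w = np.maximum(w, 0.0)                 # weights are kept non-negative
    return (w * q).sum() / (w.sum() + eps)

q = np.array([0.2, 0.8, 0.5])  # per-patch quality predictions (stand-ins)
w = np.array([0.1, 2.0, 1.0])  # per-patch importance weights (stand-ins)
print(pooled_quality(q, w))
```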
Evaluating the Visualization of What a Deep Neural Network Has Learned
TLDR
A general methodology based on region perturbation is proposed for evaluating ordered collections of pixels, such as heatmaps; it shows that the recently proposed layer-wise relevance propagation algorithm qualitatively and quantitatively provides a better explanation of what made a DNN arrive at a particular classification decision than the sensitivity-based approach or the deconvolution method.
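A rough sketch of the region-perturbation protocol: perturb input regions in the order suggested by a heatmap and record how quickly the classifier score degrades; a steeper drop indicates a more faithful explanation. `model_score` and the uniform-noise perturbation are hypothetical stand-ins for the paper's setup.

```python
import numpy as np

def perturbation_curve(image, heatmap, model_score, steps=10, rng=None):
    """image: float array in [0, 1]; heatmap: same shape; model_score: callable."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat = image.astype(float).ravel().copy()
    order = np.argsort(heatmap.ravel())[::-1]          # most relevant pixels first
    chunk = max(1, order.size // steps)
    scores = [model_score(flat.reshape(image.shape))]
    for k in range(steps):
        idx = order[k * chunk:(k + 1) * chunk]
        flat[idx] = rng.uniform(size=idx.size)         # destroy that region
        scores.append(model_score(flat.reshape(image.shape)))
    return np.array(scores)                            # steeper drop = better heatmap
```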
Methods for interpreting and understanding deep neural networks
TLDR
The second part of the tutorial focuses on the recently proposed layer-wise relevance propagation (LRP) technique, for which the authors provide theory, recommendations, and tricks for making the most efficient use of it on real data.
Explaining Recurrent Neural Network Predictions in Sentiment Analysis
TLDR
This work applies a propagation rule for multiplicative connections, as they arise in recurrent architectures such as LSTMs and GRUs, to a word-based bi-directional LSTM model on a five-class sentiment prediction task, and evaluates the resulting LRP relevances both qualitatively and quantitatively.
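A minimal sketch of the rule for multiplicative connections described for gated architectures: for a product of a gate and a signal, the signal receives all of the incoming relevance and the gate receives none. The function below is an illustration of that convention, not the paper's full LSTM propagation.

```python
def lrp_product(R_z):
    """Split relevance R_z of z = gate * signal between its two factors."""
    R_signal = R_z   # the signal carries the information content
    R_gate = 0.0     # the gate only modulates, so it receives no relevance
    return R_signal, R_gate
```

Routing everything to the signal keeps relevance conserved through the product while treating gates as switches rather than information sources.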
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
TLDR
Two approaches to explaining the predictions of deep learning models are presented: one computes the sensitivity of the prediction with respect to changes in the input, and the other meaningfully decomposes the decision in terms of the input variables.
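A toy contrast of the two families on a linear scorer, where both quantities have closed forms: sensitivity uses the (squared) gradient, while the decomposition splits the score itself over the inputs and sums back to it. The model f is an assumption for illustration.

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5])
w = np.array([0.3, 0.1, -0.4])

def f(x):
    return w @ x                  # toy linear scorer

grad = w                          # df/dx for the linear model

sensitivity = grad ** 2           # "how much would the score change?"
decomposition = grad * x          # "how much does each input contribute?"
print(sensitivity, decomposition, decomposition.sum(), f(x))  # last two match
```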
Unmasking Clever Hans predictors and assessing what machines really learn
TLDR
The authors investigate how learning machines approach a learning problem, in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
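A loose sketch of the Spectral Relevance Analysis pipeline under strong simplifications: relevance heatmaps for many samples are flattened and clustered, and unusually small or homogeneous clusters are inspected as candidate "Clever Hans" strategies. The random heatmaps and cluster count are placeholders.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

heatmaps = np.random.default_rng(0).random((100, 16 * 16))  # flattened maps
labels = SpectralClustering(n_clusters=4, affinity="nearest_neighbors",
                            random_state=0).fit_predict(heatmaps)
# Small or unexpected clusters flag groups of samples the model may be
# classifying via a spurious strategy.
print(np.bincount(labels))
```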
Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data
TLDR
Sparse ternary compression (STC) is proposed, a new compression framework specifically designed to meet the requirements of the federated learning environment; the authors advocate a paradigm shift in federated optimization toward high-frequency, low-bitwidth communication, particularly in bandwidth-constrained learning environments.
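A sketch of the STC update compression, assuming a simple top-k magnitude sparsification followed by ternarization to {-mu, 0, +mu}, where mu is the mean magnitude of the surviving entries; the sparsity level here is an arbitrary choice.

```python
import numpy as np

def stc_compress(delta, sparsity=0.01):
    """Sparsify a weight update to its top-k entries and ternarize them."""
    k = max(1, int(sparsity * delta.size))
    flat = delta.ravel()
    topk = np.argpartition(np.abs(flat), -k)[-k:]   # indices of largest updates
    mu = np.abs(flat[topk]).mean()                  # shared magnitude
    out = np.zeros_like(flat)
    out[topk] = mu * np.sign(flat[topk])            # values in {-mu, 0, +mu}
    return out.reshape(delta.shape)

update = np.random.default_rng(0).normal(size=(8, 8))
print(np.count_nonzero(stc_compress(update)))       # only top ~1% survive
```

Because the result carries only k positions, a sign bit each, and one scalar mu, it can be transmitted in far fewer bits than the dense update.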
Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers
TLDR
This paper proposes an approach that extends layer-wise relevance propagation to neural networks with local renormalization layers, a very common product-type non-linearity in convolutional neural networks.
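A very loose sketch of the Taylor-based idea for propagating relevance through a local renormalization nonlinearity: contributions of each input to each output are approximated from a first-order expansion and used as redistribution weights. The numerical Jacobian and the normalization scheme are simplifications, not the paper's exact rule.

```python
import numpy as np

def lrn(x, k=2.0, alpha=1e-4, beta=0.75):
    """Toy local renormalization over a small activation vector."""
    return x / (k + alpha * np.sum(x ** 2)) ** beta

def lrp_taylor(x, R_out, f=lrn, h=1e-5):
    J = np.zeros((x.size, x.size))                 # numerical Jacobian of f at x
    for j in range(x.size):
        e = np.zeros_like(x); e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    contrib = J * x[None, :]                       # first-order terms J_ij * x_j
    contrib /= contrib.sum(axis=1, keepdims=True) + 1e-12  # normalize per output
    return contrib.T @ R_out                       # redistribute R_out to inputs

x = np.array([1.0, 2.0, 3.0])
print(lrp_taylor(x, np.array([0.5, 0.3, 0.2])))
```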