MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis

@article{Anirudh2021MARGINUD,
  title={MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis},
  author={Rushil Anirudh and Jayaraman J. Thiagarajan and Rahul Sridhar and Timo Bremer},
  journal={Frontiers in Big Data},
  year={2021},
  volume={4}
}
Interpretability has emerged as a crucial aspect of building trust in machine learning systems, aimed at providing insights into the working of complex neural networks that are otherwise opaque to a user. There is a plethora of existing solutions addressing various aspects of interpretability, ranging from identifying prototypical samples in a dataset to explaining image predictions or explaining misclassifications. While all of these diverse techniques address seemingly different aspects of…
Graphs as Tools to Improve Deep Learning Methods
TLDR
This chapter is composed of four main parts: tools for visualizing intermediate layers in a DNN, denoising data representations, optimizing graph objective functions and regularizing the learning process.
Graph-based Deep Learning Analysis and Instance Selection
TLDR
This paper analyzes the behavior of deep learning outputs by using the K-nearest neighbor (KNN) graph construction and proposes two new instance selection methods, that both lead to fewer isolated nodes, by either directly eliminating them or by connecting them more strongly to other points (maximization).
Deep Geometric Knowledge Distillation with Graphs
TLDR
The ability of the proposed method to efficiently distill knowledge from the teacher to the student, leading to better accuracy for the same budget compared to existing RKD alternatives, is demonstrated.
Explaining Latent Representations with a Corpus of Examples
TLDR
SimplEx is a user-centred method that provides example-based explanations with reference to a freely selected set of examples, called the corpus, and improves the user's understanding of the latent space with post-hoc explanations.
Laplacian networks: bounding indicator function smoothness for neural networks robustness
TLDR
A regularizer based on the Laplacian of similarity graphs obtained from the representation of training data at each layer of the DL architecture is introduced, which penalizes large changes in the distance between examples of different classes and enforces smooth variations of the class boundaries.
Introducing Graph Smoothness Loss for Training Deep Learning Architectures
TLDR
It is shown that this novel loss function, which consists of minimizing the smoothness of label signals on similarity graphs built at the output of the architecture, leads to classification performance similar to architectures trained using the classical cross-entropy, while offering interesting degrees of freedom and properties.
Shapley Homology: Topological Analysis of Sample Influence for Neural Networks
TLDR
The Shapley homology framework is proposed, which provides a quantitative metric for the influence of a sample on the homology of a simplicial complex, and it is shown that samples with higher influence scores have a greater impact on the accuracy of neural networks that determine graph connectivity and on several regular grammars whose higher entropy values imply greater difficulty in being learned.
Improved Visual Localization via Graph Smoothing
TLDR
This work introduces a framework to enhance the performance of retrieval-based localization methods by taking into account additional information provided by the acquisition process, including GPS coordinates and the temporal neighbourhood of the images, in addition to the descriptor similarity of image pairs in the reference or query database that is traditionally used for localization.
Graph Signal Processing of Indefinite and Complex Graphs using Directed Variation
TLDR
This work extends techniques to concepts of signal variation appropriate for indefinite and complex-valued graphs and uses them to define a GFT for these classes of graph.
The Analysis of Artificial Neural Network Structure Recovery Possibilities Based on the Theory of Graphs
TLDR
It is proposed to use the methods of spectral graph theory and graph signal processing as tools for analyzing an ANN, in order to solve the problem of detecting its structure.

References

SHOWING 1-10 OF 56 REFERENCES
Interpretability of deep learning models: A survey of results
  • Supriyo Chakraborty, Richard J. Tomsett, +12 authors Prudhvi K. Gurram
  • Computer Science
    2017 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computed, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI)
  • 2017
TLDR
Some of the dimensions that are useful for model interpretability are outlined, and prior work along those dimensions is categorized, in the process performing a gap analysis of what needs to be done to improve model interpretability.
A Unified Approach to Interpreting Model Predictions
TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
TLDR
A novel method based on highly efficient random walks to structure the convolutions and a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model are developed.
Axiomatic Attribution for Deep Networks
We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms— Sensitivity and…
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
Learning Important Features Through Propagating Activation Differences
TLDR
DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.
Interpretable Explanations of Black Boxes by Meaningful Perturbation
  • Ruth C. Fong, A. Vedaldi
  • Computer Science, Mathematics
    2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR
A general framework for learning different kinds of explanations for any black box algorithm is proposed and the framework to find the part of an image most responsible for a classifier decision is specialised.
Intriguing properties of neural networks
TLDR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.
Semi-Supervised Classification with Graph Convolutional Networks
TLDR
A scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs which outperforms related methods by a significant margin.
Accurate and Robust Feature Importance Estimation under Distribution Shifts
TLDR
This paper proposes PRoFILE, a novel feature importance estimation method jointly trained with the predictive model and a causal objective that can accurately estimate the feature importance scores even under complex distribution shifts, without any additional re-training.