Higher-Order Explanations of Graph Neural Networks via Relevant Walks

@article{Schnake2021HigherOrderEO,
  title={Higher-Order Explanations of Graph Neural Networks via Relevant Walks},
  author={Thomas Schnake and Oliver Eberle and Jonas Lederer and Shinichi Nakajima and Kristof T. Sch{\"u}tt and Klaus-Robert M{\"u}ller and Gr{\'e}goire Montavon},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={PP}
}
Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. Because GNNs tightly entangle the input graph with the neural network structure, common explainable AI approaches are not directly applicable, and GNNs have so far largely remained black boxes for the user. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e. by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such… 
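
The core idea, decomposing a GNN prediction into contributions of walks through the graph, can be illustrated on a toy model. The sketch below is not the authors' GNN-LRP implementation; it assumes a purely linear two-layer message-passing network (hypothetical adjacency matrix A, random weights W1 and W2, sum readout), for which the prediction decomposes exactly into walk contributions, and it checks that the walk relevances sum to the model output. Non-linear GNNs require the layer-wise relevance propagation rules developed in the paper instead of this exact expansion.

```python
# Minimal sketch, not the paper's implementation: walk-level decomposition
# of a purely linear two-layer message-passing model. All quantities
# (A, X, W1, W2) are hypothetical toy data.
import numpy as np

rng = np.random.default_rng(0)

n, d = 4, 3                            # 4 nodes, 3 input features per node
A = np.array([[1., 1., 0., 0.],        # adjacency matrix with self-loops
              [1., 1., 1., 0.],
              [0., 1., 1., 1.],
              [0., 0., 1., 1.]])
X = rng.normal(size=(n, d))            # node feature matrix
W1 = rng.normal(size=(d, d))           # first-layer weights
W2 = rng.normal(size=(d, 1))           # second-layer weights (scalar output)

def forward(X):
    H1 = A @ X @ W1                    # message passing, layer 1 (linear)
    H2 = A @ H1 @ W2                   # message passing, layer 2 (linear)
    return H2.sum()                    # graph-level readout

# For a linear model the output decomposes exactly over walks j -> k -> l:
# f(X) = sum_{j,k,l} A[l,k] * A[k,j] * (X[j] @ W1 @ W2).
walk_relevance = {
    (j, k, l): A[l, k] * A[k, j] * (X[j] @ W1 @ W2).item()
    for j in range(n) for k in range(n) for l in range(n)
    if A[k, j] != 0 and A[l, k] != 0   # only walks along edges of the graph
}

# Conservation: the walk relevances add up to the prediction.
assert np.isclose(sum(walk_relevance.values()), forward(X))
print(max(walk_relevance, key=lambda w: abs(walk_relevance[w])))  # most relevant walk
```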

Citations

Explaining GNN over Evolving Graphs using Information Flow
TLDR
This work defines the problem of explaining evolving GNN predictions and proposes an axiomatic attribution method to uniquely decompose the change in a prediction to paths on computation graphs, and formulates a novel convex optimization problem to optimally select the paths that explain the prediction evolution.
Explaining Graph Neural Networks for Vulnerability Discovery
TLDR
This study demonstrates that explaining GNNs is a non-trivial task and all evaluation criteria play a role in assessing their efficacy, and shows that graph-specific explanations relate better to code semantics and provide more information to a security expert than regular methods.
FlowX: Towards Explainable Graph Neural Networks via Message Flows
TLDR
This work proposes a novel method, known as FlowX, to explain GNNs by identifying important message flows, and proposes an approximation scheme to compute Shapley-like values as initial assessments for further redistribution training to improve explainability.
DEGREE: Decomposition Based Explanation for Graph Neural Networks
TLDR
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction and designs a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
Explainability in Graph Neural Networks: A Taxonomic Survey
TLDR
A unified and taxonomic view of current GNN explainability methods is provided, shedding light on the commonalities and differences of existing methods and setting the stage for further methodological developments.
Toward Explainable AI for Regression Models
TLDR
The fundamental conceptual differences of XAI for regression and classification tasks are clarified, novel theoretical insights and analysis for XAIR are established, demonstrations of XAIR on genuine practical regression problems are provided, and the challenges remaining for the field are discussed.
Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications
TLDR
This work aims to provide a timely overview of the active emerging field of XAI with a focus on “post hoc” explanations, to explain its theoretical foundations, and to put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective.
GraphFramEx: Towards Systematic Evaluation of Explainability Methods for Graph Neural Networks
TLDR
This paper proposes the first systematic evaluation framework for GNN explainability, considering explanations along three different “user needs”: explanation focus, mask nature, and mask transformation, and proposes a unique metric that combines the fidelity measures and classifies explanations based on their quality of being sufficient or necessary.
EiX-GNN: Concept-level eigencentrality explainer for graph neural networks
TLDR
A reliable social-aware explanation method for graph neural networks is proposed, which includes this social feature as a modular concept generator and leverages both signal- and graph-domain aspects through an eigencentrality-based concept ordering approach.

References

Showing 1-10 of 68 references
Evaluating Recurrent Neural Network Explanations
TLDR
Using the method that performed best in the authors' experiments, it is shown how specific linguistic phenomena, such as negation in sentiment analysis, are reflected in the relevance patterns, and how the relevance visualization can help to understand the misclassification of individual samples.
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
TLDR
This work introduces a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges and uses this technique as an attribution method to analyze GNN models for two tasks -- question answering and semantic role labeling -- providing insights into the information flow in these models.
XGNN: Towards Model-Level Explanations of Graph Neural Networks
TLDR
This work proposes a novel approach, known as XGNN, to interpret GNNs at the model-level by training a graph generator so that the generated graph patterns maximize a certain prediction of the model.
Open Graph Benchmark: Datasets for Machine Learning on Graphs
TLDR
The OGB datasets are large-scale, encompass multiple important graph ML tasks, and cover a diverse range of domains, ranging from social and information networks to biological networks, molecular graphs, source code ASTs, and knowledge graphs, indicating fruitful opportunities for future research.
Toward Interpretable Machine Learning: Transparent Deep Neural Networks and Beyond
TLDR
This work aims to provide a timely overview of this active emerging field of machine learning, to explain its theoretical foundations, to put interpretability algorithms to the test from both a theoretical and a comparative-evaluation perspective using extensive simulations, and to outline best-practice aspects.
Building and Interpreting Deep Similarity Models
TLDR
This paper develops BiLRP, a scalable and theoretically founded method to systematically decompose the output of an already trained deep similarity model on pairs of input features and demonstrates that it robustly explains complex similarity models, e.g., built on VGG-16 deep neural network features.
Explaining Explanations: Axiomatic Feature Interactions for Deep Networks
TLDR
This work presents Integrated Hessians, an extension of Integrated Gradients that explains pairwise feature interactions in neural networks and finds that the method is faster than existing methods when the number of features is large, and outperforms previous methods on existing quantitative benchmarks.
Learning Global Pairwise Interactions with Bayesian Neural Networks
TLDR
This work proposes an intuitive global interaction measure, the Bayesian Group Expected Hessian (GEH), which aggregates information about local interactions as captured by the Hessian, and demonstrates its ability to detect interpretable interactions between higher-level features (at deeper layers of the neural network).
Exploring chemical compound space with quantum-based machine learning
TLDR
It is argued that significant progress in the exploration and understanding of chemical compound space can be made through a systematic combination of rigorous physical theories, comprehensive synthetic data sets of microscopic and macroscopic properties, and modern machine-learning methods that account for physical and chemical knowledge.