# ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions

```bibtex
@article{Kang2019ExplaiNEAA,
  title   = {ExplaiNE: An Approach for Explaining Network Embedding-based Link Predictions},
  author  = {Bo Kang and Jefrey Lijffijt and Tijl De Bie},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1904.12694}
}
```

Networks are powerful data structures, but are challenging to work with for conventional machine learning methods. Network Embedding (NE) methods attempt to resolve this by learning vector representations for the nodes, for subsequent use in downstream machine learning tasks.
Link Prediction (LP) is one such downstream machine learning task that is an important use case and popular benchmark for NE methods. Unfortunately, while NE methods perform exceedingly well at this task, they are lacking…
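The NE-then-LP pipeline described above can be sketched minimally: given already-learned node embeddings, score each candidate edge and rank the candidates. The random vectors below stand in for embeddings a real NE method would learn, and the dot-product scorer is one common illustrative choice, not ExplaiNE's specific model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 5 nodes with 8-dimensional "learned" embeddings
# (random vectors here; a real NE method would produce these).
embeddings = rng.normal(size=(5, 8))

def link_score(u, v, emb=embeddings):
    """Score a candidate edge (u, v) as the inner product of the node
    embeddings; many NE-based link predictors use this or a
    distance-based variant."""
    return float(emb[u] @ emb[v])

# Rank all candidate node pairs by score (higher = more likely link).
candidates = [(u, v) for u in range(5) for v in range(u + 1, 5)]
ranked = sorted(candidates, key=lambda p: link_score(*p), reverse=True)
```

ExplaiNE's question is then: which existing edges, if removed, would most change these scores.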


## 14 Citations

### GNN Explainer: A Tool for Post-hoc Explanation of Graph Neural Networks

- Computer Science · ArXiv
- 2019

GNNExplainer is proposed, a general model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task (node and graph classification, link prediction).

### A Simplified Benchmark for Non-ambiguous Explanations of Knowledge Graph Link Prediction using Relational Graph Convolutional Networks

- Computer Science · SEMWEB
- 2021

This paper proposes a method, including two datasets, to benchmark explanation methods on the task of explainable link prediction using Graph Neural Networks, and reports the results of state-of-the-art explanation methods for RGCNs.

### CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks

- Computer Science · AISTATS
- 2022

This paper proposes CF-GNNExplainer, a method for generating counterfactual explanations for GNNs, i.e., the minimal perturbations to the input graph data such that the prediction changes; it primarily removes edges that are crucial for the original predictions, resulting in minimal counterfactual examples.

### Learnt Sparsification for Interpretable Graph Neural Networks

- Computer Science · ArXiv
- 2021

This paper proposes a novel method called KEdge for explicit sparsification using the Hard Kumaraswamy distribution, which can be used in conjunction with any GNN model and effectively counters the over-smoothing phenomenon in deep GNNs by maintaining good task performance with increasing GNN layers.

### Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks

- Computer Science · ArXiv
- 2021

A novel approach, Zorro, based on principles from rate-distortion theory is proposed; it uses a simple combinatorial procedure to optimize for RDT-Fidelity, which is introduced as a measure of an explanation's effectiveness.

### GNNExplainer: Generating Explanations for Graph Neural Networks

- Computer Science · NeurIPS
- 2019

GNNExplainer is proposed, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task.

### Linked Data Ground Truth for Quantitative and Qualitative Evaluation of Explanations for Relational Graph Convolutional Network Link Prediction on Knowledge Graphs

- Computer Science · WI/IAT
- 2021

This paper relies on the Semantic Web to construct explanations, ensuring that each predictable triple has an associated set of triples providing a ground truth explanation, and proposes the use of a scoring metric for empirically evaluating explanation methods, allowing for a quantitative comparison.

### A Simplified Benchmark for Ambiguous Explanations of Knowledge Graph Link Prediction Using Relational Graph Convolutional Networks (Student Abstract)

- Computer Science · AAAI
- 2022

This work proposes and evaluates a method, including a dataset, to benchmark explanation methods on the task of explainable link prediction using RGCNs.

### User Scored Evaluation of Non-Unique Explanations for Relational Graph Convolutional Network Link Prediction on Knowledge Graphs

- Computer Science · K-CAP
- 2021

This paper introduces a method, including a dataset, to benchmark explanation methods on the task of link prediction on KGs, when there are multiple explanations to consider, and proposes the use of several scoring metrics, using relevance weights derived from user scores for each predicted explanation.

### Adversarial Robustness of Probabilistic Network Embedding for Link Prediction

- Computer Science · PKDD/ECML Workshops
- 2021

The adversarial robustness of Conditional Network Embedding (CNE), a state-of-the-art probabilistic network embedding model, is studied for link prediction, measuring the sensitivity of the model's link predictions to small adversarial perturbations of the network.

## References

Showing 1–10 of 27 references

### node2vec: Scalable Feature Learning for Networks

- Computer Science · KDD
- 2016

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
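The biased random walk described above can be sketched as a second-order walk: the unnormalized probability of stepping from the current node to a neighbor depends on the previous node via the return parameter `p` and in-out parameter `q`. The adjacency dict and parameter values below are a hypothetical toy example, not the paper's experiments.

```python
import random

random.seed(42)

# Tiny undirected graph as an adjacency dict (hypothetical example).
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}

def node2vec_walk(start, length, p=1.0, q=0.5):
    """node2vec-style 2nd-order biased walk. Unnormalized weight of
    moving from cur to nxt, given the previous node:
      1/p if nxt == prev (return step),
      1   if nxt is also a neighbor of prev (BFS-like),
      1/q otherwise (DFS-like, encourages exploration)."""
    walk = [start]
    prev, cur = None, start
    for _ in range(length - 1):
        nbrs = adj[cur]
        weights = []
        for nxt in nbrs:
            if nxt == prev:
                weights.append(1.0 / p)
            elif prev is not None and nxt in adj[prev]:
                weights.append(1.0)
            else:
                weights.append(1.0 / q)
        nxt = random.choices(nbrs, weights=weights)[0]
        walk.append(nxt)
        prev, cur = cur, nxt
    return walk

walk = node2vec_walk(0, 10)
```

node2vec then feeds many such walks into a skip-gram model to learn the node embeddings.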

### A Unified Approach to Interpreting Model Predictions

- Computer Science · NIPS
- 2017

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
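The quantity SHAP approximates can be computed exactly for tiny models: average each feature's marginal contribution over all feature orderings. The function and input values below are hypothetical, chosen only to illustrate the Shapley attribution that SHAP estimates efficiently.

```python
from itertools import permutations

# Toy model on 3 features with an interaction term (hypothetical).
def f(features):
    x1, x2, x3 = features
    return 2.0 * x1 + x2 * x3

baseline = (0.0, 0.0, 0.0)
x = (1.0, 2.0, 3.0)

def shapley_values(f, x, baseline):
    """Exact Shapley values: for every ordering of the features, switch
    each feature from its baseline to its actual value and record the
    marginal change in f; average over all orderings."""
    n = len(x)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        current = list(baseline)
        prev_val = f(tuple(current))
        for i in order:
            current[i] = x[i]
            new_val = f(tuple(current))
            phi[i] += new_val - prev_val
            prev_val = new_val
    return [v / len(perms) for v in phi]

phi = shapley_values(f, x, baseline)
# Efficiency property: the attributions sum to f(x) - f(baseline).
```

This brute force is O(n!) in the number of features, which is why SHAP's approximations matter in practice.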

### Representation Learning on Graphs: Methods and Applications

- Computer Science · IEEE Data Eng. Bull.
- 2017

A conceptual review of key advancements in representation learning on graphs is provided, covering matrix factorization-based methods, random-walk-based algorithms, and graph neural networks.

### Definitions, methods, and applications in interpretable machine learning

- Computer Science · Proceedings of the National Academy of Sciences
- 2019

This work defines interpretability in the context of machine learning and introduces the predictive, descriptive, relevant (PDR) framework for discussing interpretations, and introduces 3 overarching desiderata for evaluation: predictive accuracy, descriptive accuracy, and relevancy.

### PTE: Predictive Text Embedding through Large-scale Heterogeneous Text Networks

- Computer Science · KDD
- 2015

A semi-supervised representation learning method for text data, called predictive text embedding (PTE), is presented; it is comparably or more effective, much more efficient, and has fewer parameters to tune.

### A Survey of Link Prediction in Complex Networks

- Computer Science · ACM Comput. Surv.
- 2017

This survey will review the general-purpose techniques at the heart of the link prediction problem, which can be complemented by domain-specific heuristic methods in practice.

### LINE: Large-scale Information Network Embedding

- Computer Science · WWW
- 2015

A novel network embedding method called "LINE," which is suitable for arbitrary types of information networks: undirected, directed, and/or weighted, and optimizes a carefully designed objective function that preserves both the local and global network structures.

### Conditional Network Embeddings

- Computer Science · BNAIC/BENELEARN
- 2019

It is demonstrated that CNEs are superior for link prediction and multi-label classification when compared to state-of-the-art methods, and this without adding significant mathematical or computational complexity.

### A Survey on Network Embedding

- Computer Science · IEEE Transactions on Knowledge and Data Engineering
- 2019

This survey focuses on categorizing and reviewing the current development of network embedding methods and points out future research directions, covering structure- and property-preserving network embedding methods, network embedding methods with side information, and advanced information-preserving network embedding methods.

### "Why Should I Trust You?": Explaining the Predictions of Any Classifier

- Computer Science · HLT-NAACL Demos
- 2016

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.