How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision

@article{Kim2021HowTF,
  title={How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision},
  author={Dongkwan Kim and Alice H. Oh},
  journal={ArXiv},
  year={2021},
  volume={abs/2204.04879}
}
The attention mechanism in graph neural networks is designed to assign larger weights to important neighbor nodes for better representations. However, what graph attention learns is not well understood, particularly when graphs are noisy. In this paper, we propose a self-supervised graph attention network (SuperGAT), an improved graph attention model for noisy graphs. Specifically, we exploit two attention forms compatible with a self-supervised task to predict edges, whose presence and absence…
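To make the abstract's mechanism concrete, below is a minimal PyTorch sketch, not the authors' reference implementation, of the two attention forms SuperGAT builds on (GO, the original GAT scoring, and DP, scaled dot-product scoring) and of how an attention score can double as an edge logit for the self-supervised link-prediction loss. All shapes, names, and the negative-sampling scheme here are illustrative.

```python
import torch
import torch.nn.functional as F

def go_scores(h_src, h_dst, a):
    """'GO' scoring (original GAT): LeakyReLU(a^T [Wh_i || Wh_j])."""
    return F.leaky_relu(torch.cat([h_src, h_dst], dim=-1) @ a, negative_slope=0.2)

def dp_scores(h_src, h_dst):
    """'DP' scoring: scaled dot product between endpoint representations."""
    return (h_src * h_dst).sum(dim=-1) / h_src.size(-1) ** 0.5

def self_supervised_edge_loss(scores_pos, scores_neg):
    """Treat attention scores as edge logits: present edges -> 1, sampled
    non-edges -> 0 (binary cross-entropy, i.e. link prediction)."""
    logits = torch.cat([scores_pos, scores_neg])
    labels = torch.cat([torch.ones_like(scores_pos), torch.zeros_like(scores_neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Toy usage: h holds transformed node features Wh; pos lists real edges,
# neg holds negative-sampled node pairs.
h = torch.randn(6, 8)
pos = torch.tensor([[0, 1, 2], [1, 2, 3]])
neg = torch.randint(0, 6, (2, 3))
loss = self_supervised_edge_loss(dp_scores(h[pos[0]], h[pos[1]]),
                                 dp_scores(h[neg[0]], h[neg[1]]))
```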

Sparse Graph Attention Networks

  • Yang Ye, Shihao Ji
  • Computer Science
    IEEE Transactions on Knowledge and Data Engineering
  • 2021
This paper proposes Sparse Graph Attention Networks (SGATs), which learn sparse attention coefficients under an $L_0$-norm regularization; the learned sparse attentions are then used for all GNN layers, resulting in an edge-sparsified graph. This is the first graph learning algorithm that sparsifies graphs both to identify important relationships between nodes and for robust training.
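As a hedged illustration of the core idea rather than SGATs' actual code: an $L_0$ penalty is typically relaxed with hard-concrete gates (Louizos et al., 2018), one learnable gate per edge, with the expected number of open gates added to the loss. Constants and names below are illustrative.

```python
import torch

# Hard-concrete stretch parameters (common defaults from Louizos et al.).
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def sample_gates(log_alpha):
    """Differentiable, approximately binary gate per edge (training time)."""
    u = torch.rand_like(log_alpha)
    s = torch.sigmoid((torch.log(u) - torch.log1p(-u) + log_alpha) / BETA)
    return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

def expected_l0(log_alpha):
    """Expected number of open gates -- the differentiable L0 penalty."""
    return torch.sigmoid(log_alpha - BETA * torch.log(torch.tensor(-GAMMA / ZETA))).sum()

log_alpha = torch.zeros(10, requires_grad=True)  # one gate per edge
z = sample_gates(log_alpha)              # multiply into attention coefficients
penalty = 1e-2 * expected_l0(log_alpha)  # add to the task loss
```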

How Attentive are Graph Attention Networks?

It is shown that GATs can only compute a restricted kind of attention in which the ranking of attended nodes is unconditioned on the query node. A simple fix, modifying the order of operations, yields GATv2: a dynamic graph attention variant that is strictly more expressive than GAT.
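The order-of-operations fix is compact enough to show directly. In the sketch below (illustrative names; GAT really applies W per node before concatenating, which amounts to a block-structured W over the pair), the only difference is where the nonlinearity sits.

```python
import torch
import torch.nn.functional as F

d, dh = 8, 16
W = torch.randn(dh, 2 * d)   # linear map over the concatenated pair
a = torch.randn(dh)          # attention vector

def gat_score(hi, hj):
    # GAT: LeakyReLU(a^T W [hi || hj]). The composition a^T W collapses to a
    # single linear map before any nonlinearity mixes hi and hj, so the
    # neighbor ranking is identical across queries ("static" attention).
    return F.leaky_relu(a @ (W @ torch.cat([hi, hj])), negative_slope=0.2)

def gatv2_score(hi, hj):
    # GATv2: a^T LeakyReLU(W [hi || hj]). Putting the nonlinearity between
    # W and a makes the ranking depend on the query ("dynamic" attention).
    return a @ F.leaky_relu(W @ torch.cat([hi, hj]), negative_slope=0.2)
```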

CoRGi: Content-Rich Graph Neural Networks with Attention

CoRGi is presented: a GNN that considers the rich data within nodes in the context of their neighbors, by endowing its message passing with a personalized attention mechanism over the content of each node.

GLAM: Graph Learning by Modeling Affinity to Labeled Nodes for Graph Neural Networks

A semi-supervised graph learning method for settings where no graph is available, which learns a graph as a convex combination of an unsupervised kNN graph and a supervised label-affinity graph.
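A minimal sketch of that combination, assuming dense NumPy adjacency matrices and an illustrative mixing weight lam:

```python
import numpy as np

def glam_graph(A_knn, A_label, lam):
    """Convex combination of an unsupervised kNN graph and a supervised
    label-affinity graph; lam in [0, 1] trades off the two sources."""
    assert 0.0 <= lam <= 1.0
    return lam * A_knn + (1.0 - lam) * A_label

# A_label connects labeled nodes sharing a class; rows for unlabeled nodes
# can simply stay zero.
```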

A Robust graph attention network with dynamic adjusted Graph

Robust GAT (RoGAT) is proposed in this paper to improve the robustness of GAT through a revision of the attention mechanism, and it outperforms most recent defensive methods.

Exploring Edge Disentanglement for Node Classification

This work proposes three heuristics and designs three corresponding pretext tasks to guide automatic edge disentanglement, showing that it can achieve significant performance gains.

Towards Robust Graph Neural Networks for Noisy Graphs with Sparse Labels

This work proposes a novel framework that adopts the noisy edges as supervision to learn a denoised and dense graph, which can down-weight or eliminate noisy edges and facilitate GNN message passing to alleviate the issue of limited labeled nodes.

Towards Self-Explainable Graph Neural Network

A new framework is proposed that finds the K-nearest labeled nodes for each unlabeled node to give explainable node classification, where the nearest labeled nodes are found by an interpretable similarity module in terms of both node similarity and local structure similarity.

Simplifying Node Classification on Heterophilous Graphs with Compatible Label Propagation

This paper carefully designs a combination of a base predictor with a label propagation (LP) algorithm that enjoys a closed-form solution as well as convergence guarantees, and shows that this approach achieves leading performance on graphs with various levels of homophily.
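The closed-form LP step referenced here is the classic one; below is a minimal NumPy sketch (Zhou et al. style normalization, illustrative alpha), where Y can hold either one-hot labels or a base predictor's soft outputs:

```python
import numpy as np

def label_propagation(A, Y, alpha=0.9):
    """Closed-form LP: F = (1 - alpha) * (I - alpha * S)^{-1} Y,
    with S the symmetrically normalized adjacency."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ A @ D_inv_sqrt
    n = A.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, (1.0 - alpha) * Y)
```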

Causal Attention for Interpretable and Generalizable Graph Classification

The Causal Attention Learning (CAL) strategy is proposed, which discovers the causal patterns and mitigates the confounding effect of shortcuts in graph classification by employing attention modules to estimate the causal and shortcut features of the input graph.
...

References

Showing 1-10 of 69 references

Understanding Attention and Generalization in Graph Neural Networks

This work proposes an alternative recipe and trains attention in a weakly-supervised fashion that approaches the performance of supervised models and, compared to unsupervised models, improves results on several synthetic as well as real datasets.

Improving Graph Attention Networks with Large Margin-based Constraints

This work first theoretically demonstrates the over-smoothing behavior of GATs and then develops an approach that constrains the attention weights according to the class boundary and feature-aggregation pattern, leading to significant improvements over previous state-of-the-art graph attention methods on all datasets.

Robust Graph Representation Learning via Neural Sparsification

This paper presents NeuralSparse, a supervised graph sparsification technique that improves generalization power by learning to remove potentially task-irrelevant edges from input graphs and takes both structural and non-structural information as input.

Graph Representation Learning via Hard and Channel-Wise Attention Networks

Compared to GAO, hGAO improves performance and saves computational cost by attending only to important nodes, and an efficiency comparison shows that cGAO leads to dramatic savings in computational resources, making it applicable to large graphs.

Attention-based Graph Neural Network for Semi-supervised Learning

A novel graph neural network is proposed that removes all intermediate fully-connected layers and replaces the propagation layers with attention mechanisms that respect the structure of the graph; this approach is demonstrated to outperform competing methods on benchmark citation network datasets.

Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking

Graph2Gauss is proposed: an approach that can efficiently learn versatile node embeddings on large-scale (attributed) graphs. The embeddings show strong performance on tasks such as link prediction and node classification, and the benefits of modeling uncertainty are demonstrated.

Graph Agreement Models for Semi-Supervised Learning

This work proposes Graph Agreement Models (GAM), which introduces an auxiliary model that predicts the probability of two nodes sharing the same label as a learned function of their features, and achieves state-of-the-art results on semi-supervised learning datasets.

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
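A minimal sketch of that biased walk for an unweighted graph stored as a dict of neighbor sets (names illustrative; the full algorithm precomputes transition probabilities with alias sampling):

```python
import random

def node2vec_step(G, prev, cur, p, q):
    """One biased step: return bias 1/p, stay-close bias 1, move-out bias 1/q."""
    neighbors = list(G[cur])
    weights = []
    for x in neighbors:
        if x == prev:                            # returning to previous node
            weights.append(1.0 / p)
        elif prev is not None and x in G[prev]:  # distance 1 from prev (BFS-like)
            weights.append(1.0)
        else:                                    # distance 2: explore (DFS-like)
            weights.append(1.0 / q)
    return random.choices(neighbors, weights=weights, k=1)[0]

def node2vec_walk(G, start, length, p=1.0, q=1.0):
    walk, prev = [start], None
    while len(walk) < length:
        nxt = node2vec_step(G, prev, walk[-1], p, q)
        prev = walk[-1]
        walk.append(nxt)
    return walk

# Example: node2vec_walk({0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}, 0, 5)
```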

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
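A minimal sketch of one GraphSAGE layer with the mean aggregator and fixed-size neighbor sampling (illustrative, not the reference implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SageMeanLayer(nn.Module):
    """h_v' = normalize(ReLU(W [h_v || mean(h_u, u in sampled N(v))]))."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(2 * d_in, d_out)

    def forward(self, h, neigh_idx):
        # neigh_idx: (num_nodes, k) indices of k sampled neighbors per node
        neigh_mean = h[neigh_idx].mean(dim=1)
        z = self.lin(torch.cat([h, neigh_mean], dim=-1))
        return F.normalize(F.relu(z), dim=-1)  # L2-normalize, as in the paper

h = torch.randn(5, 4)                    # 5 nodes, 4 input features
neigh_idx = torch.randint(0, 5, (5, 3))  # 3 sampled neighbors each
out = SageMeanLayer(4, 8)(h, neigh_idx)
```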

Diffusion Improves Graph Learning

This work removes the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC), which leverages generalized graph diffusion and alleviates the problem of noisy and often arbitrarily defined edges in real graphs.
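A minimal NumPy sketch of GDC with the personalized-PageRank diffusion kernel, including the sparsification step that produces the new graph (parameter values illustrative):

```python
import numpy as np

def gdc_ppr(A, alpha=0.15, eps=1e-4):
    """S = alpha * (I - (1 - alpha) * T)^{-1} with T the column-normalized
    transition matrix; small entries are thresholded away afterwards."""
    d = np.maximum(A.sum(axis=0), 1e-12)
    T = A / d                              # column-stochastic transition matrix
    n = A.shape[0]
    S = alpha * np.linalg.inv(np.eye(n) - (1.0 - alpha) * T)
    S[S < eps] = 0.0                       # sparsify: keep only strong diffusion
    return S                               # use S in place of A for convolution
```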
...