How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision

@article{Kim2022HowTF,
  title={How to Find Your Friendly Neighborhood: Graph Attention Design with Self-Supervision},
  author={Dongkwan Kim and Alice H. Oh},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.04879}
}
The attention mechanism in graph neural networks is designed to assign larger weights to important neighbor nodes to obtain better representations. However, what graph attention learns is not well understood, particularly when graphs are noisy. In this paper, we propose a self-supervised graph attention network (SuperGAT), an improved graph attention model for noisy graphs. Specifically, we exploit two attention forms compatible with a self-supervised task to predict edges, whose presence and absence…
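
A minimal PyTorch sketch of the self-supervision signal described above, assuming the paper's dot-product ("DP") attention form: the unnormalized attention score between two node representations doubles as a link-prediction logit, trained against observed edges and randomly sampled node pairs. The function name and the uniform negative sampler are illustrative simplifications; in SuperGAT this auxiliary loss is added to the node-classification objective.

    import torch
    import torch.nn.functional as F

    def edge_self_supervision_loss(h, edge_index, num_neg=1):
        # h: [N, d] node representations (e.g., W @ x inside a GAT layer)
        # edge_index: [2, E] observed (positive) edges
        src, dst = edge_index
        # dot-product ("DP") attention score doubles as a link logit
        pos_logit = (h[src] * h[dst]).sum(dim=-1)
        # negative sampling: random pairs stand in for absent edges
        neg_src = src.repeat(num_neg)
        neg_dst = torch.randint(0, h.size(0), (neg_src.size(0),))
        neg_logit = (h[neg_src] * h[neg_dst]).sum(dim=-1)
        logits = torch.cat([pos_logit, neg_logit])
        labels = torch.cat([torch.ones_like(pos_logit), torch.zeros_like(neg_logit)])
        return F.binary_cross_entropy_with_logits(logits, labels)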

Causally-guided Regularization of Graph Attention Improves Generalizability

This work introduces CAR, a general-purpose regularization framework for graph attention networks that aligns the attention mechanism with the causal effects of active interventions on graph connectivity in a scalable manner, and that enhances the interpretability of attention weights by accentuating node-neighbor relations that point to causal hypotheses.

Sparse Graph Attention Networks

  • Yang Ye, Shihao Ji
  • Computer Science
    IEEE Transactions on Knowledge and Data Engineering
  • 2021
This paper proposes Sparse Graph Attention Networks (SGATs), which learn sparse attention coefficients under an $L_0$-norm regularization; the learned sparse attentions are then used for all GNN layers, resulting in an edge-sparsified graph. SGATs are the first graph learning algorithm that sparsifies graphs for the purpose of identifying important relationships between nodes and for robust training.
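
A sketch of the $L_0$-style sparsification ingredient, assuming the hard-concrete relaxation of Louizos et al. as the gate distribution (a common choice for differentiable $L_0$ penalties); the class below is a simplified stand-in, not SGATs' exact parameterization.

    import torch

    class EdgeGates(torch.nn.Module):
        # Learnable per-edge gates with an expected-L0 penalty via the
        # hard-concrete relaxation; gates that reach exactly 0 prune edges.
        def __init__(self, num_edges, beta=2/3, gamma=-0.1, zeta=1.1):
            super().__init__()
            self.log_alpha = torch.nn.Parameter(torch.zeros(num_edges))
            self.beta, self.gamma, self.zeta = beta, gamma, zeta

        def forward(self):
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.beta)
            s = s * (self.zeta - self.gamma) + self.gamma   # stretch
            return s.clamp(0.0, 1.0)                        # hard clip to [0, 1]

        def expected_l0(self):
            # differentiable surrogate for the number of non-zero gates
            shift = self.beta * torch.log(torch.tensor(-self.gamma / self.zeta))
            return torch.sigmoid(self.log_alpha - shift).sum()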

How Attentive are Graph Attention Networks?

It is shown that GATs can only compute a restricted kind of attention, where the ranking of attended nodes is unconditioned on the query node; a simple fix is introduced by modifying the order of operations, yielding GATv2: a dynamic graph attention variant that is strictly more expressive than GAT.
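
The fix can be stated in a few lines. In the sketch below (PyTorch, single edge score, shapes noted in comments; note that W has different shapes in the two variants), GAT applies the scoring vector a before the LeakyReLU and GATv2 applies it after.

    import torch
    import torch.nn.functional as F

    # GAT (static attention): LeakyReLU is the only nonlinearity between the
    # scoring vector a and the features, so the ranking over neighbors j is
    # the same for every query node i.
    def gat_score(a, W, h_i, h_j):       # a: [2*d_out], W: [d_out, d_in]
        return F.leaky_relu(a @ torch.cat([W @ h_i, W @ h_j]), 0.2)

    # GATv2 (dynamic attention): moving a outside the nonlinearity lets the
    # ranking over neighbors depend on the query node.
    def gatv2_score(a, W, h_i, h_j):     # a: [d_out], W: [d_out, 2*d_in]
        return a @ F.leaky_relu(W @ torch.cat([h_i, h_j]), 0.2)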

CoRGi: Content-Rich Graph Neural Networks with Attention

CoRGi is presented: a GNN that considers the rich data within nodes in the context of their neighbors, by endowing its message passing with a personalized attention mechanism over the content of each node.

Not all edges are peers: Accurate structure-aware graph pooling networks

GLAM: Graph Learning by Modeling Affinity to Labeled Nodes for Graph Neural Networks

A semi-supervised graph learning method for cases where no graph is available, which learns a graph as a convex combination of an unsupervised kNN graph and a supervised label-affinity graph.
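
A toy NumPy sketch of the combination, with an illustrative label-affinity construction; GLAM learns the mixing coefficient, which is fixed here for brevity.

    import numpy as np

    def label_affinity_graph(y, labeled_mask):
        # connect labeled node pairs that share a label (illustrative)
        A = np.zeros((len(y), len(y)))
        idx = np.where(labeled_mask)[0]
        for i in idx:
            for j in idx:
                if i != j and y[i] == y[j]:
                    A[i, j] = 1.0
        return A

    def glam_graph(A_knn, A_label, lam=0.5):
        # convex combination of the two graphs; lam is learned in GLAM
        return lam * A_knn + (1.0 - lam) * A_label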

Learning heterophilious edge to drop: A general framework for boosting graph neural networks

This work proposes a structure learning method called LHE to identify heterophilious edges to drop, and shows remarkable performance improvements for GNNs with LHE on multiple datasets across the full spectrum of homophily levels.
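
One plausible instantiation of the edge-dropping step, sketched in PyTorch; scoring edges by the agreement of predicted class distributions is an assumption for illustration, not the paper's actual LHE criterion.

    import torch

    def drop_heterophilous_edges(edge_index, soft_labels, keep_ratio=0.9):
        # score each edge by the probability its endpoints share a label,
        # estimated from predicted class distributions, then drop the
        # lowest-scoring (most heterophilous) edges
        src, dst = edge_index
        agree = (soft_labels[src] * soft_labels[dst]).sum(dim=-1)
        k = max(1, int(keep_ratio * agree.numel()))
        keep = agree.topk(k).indices
        return edge_index[:, keep]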

A Component-level Attention based Adaptive Graph Convolutional Network

A novel Component-level Attention Adaptive Graph Convolutional Network is proposed to collect different kinds of information more efficiently, by introducing a component-level attention mechanism, optimizing the input function during attention learning, and preprocessing the node features.

A Robust graph attention network with dynamic adjusted Graph

Robust GAT (RoGAT) is proposed in this paper to improve the robustness of GAT through a revision of the attention mechanism, and it outperforms most recent defensive methods.

Exploring Edge Disentanglement for Node Classification

This work proposes three heuristics and designs three corresponding pretext tasks to guide automatic edge disentanglement, and shows that they achieve significant performance gains.
...

References

Understanding Attention and Generalization in Graph Neural Networks

This work proposes an alternative recipe, training attention in a weakly-supervised fashion that approaches the performance of supervised models and, compared to unsupervised models, improves results on several synthetic as well as real datasets.

Improving Graph Attention Networks with Large Margin-based Constraints

This work first theoretically demonstrates the over-smoothing behavior of GATs, and then develops an approach that constrains the attention weights according to the class boundary and feature aggregation pattern, leading to significant improvements over previous state-of-the-art graph attention methods on all datasets.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
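
A minimal single-head sketch of such a masked self-attention layer in PyTorch, assuming a dense adjacency matrix that already contains self-loops (so every softmax row is defined); real implementations use sparse operations and multiple heads.

    import torch
    import torch.nn.functional as F

    class GATLayer(torch.nn.Module):
        # Minimal single-head GAT layer over a dense adjacency, for clarity.
        def __init__(self, d_in, d_out):
            super().__init__()
            self.W = torch.nn.Linear(d_in, d_out, bias=False)
            self.a_src = torch.nn.Parameter(torch.randn(d_out) * 0.1)
            self.a_dst = torch.nn.Parameter(torch.randn(d_out) * 0.1)

        def forward(self, x, adj):
            h = self.W(x)                                        # [N, d_out]
            e = F.leaky_relu((h @ self.a_src).unsqueeze(1)
                             + (h @ self.a_dst).unsqueeze(0), 0.2)  # [N, N] scores
            e = e.masked_fill(adj == 0, float('-inf'))           # mask non-edges
            alpha = torch.softmax(e, dim=1)                      # per-node weights
            return F.elu(alpha @ h)                              # aggregate neighbors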

Robust Graph Representation Learning via Neural Sparsification

This paper presents NeuralSparse, a supervised graph sparsification technique that improves generalization power by learning to remove potentially task-irrelevant edges from input graphs and takes both structural and non-structural information as input.

Graph Representation Learning via Hard and Channel-Wise Attention Networks

Compared to GAO, hGAO improves performance and saves computational cost by attending only to important nodes; efficiency comparisons show that cGAO leads to dramatic savings in computational resources, making these operators applicable to large graphs.
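
A sketch of the hard (top-k) attention idea behind hGAO, in PyTorch with a dense adjacency; the function name and dense formulation are illustrative, and nodes are assumed to carry self-loops so every softmax row is defined.

    import torch

    def hard_attention_topk(scores, adj, k):
        # keep only the k highest-scoring neighbors per node; masked
        # (non-edge) entries can never win a slot
        scores = scores.masked_fill(adj == 0, float('-inf'))
        topk = scores.topk(k, dim=1).indices
        mask = torch.zeros_like(scores, dtype=torch.bool).scatter_(1, topk, True)
        pruned = scores.masked_fill(~mask, float('-inf'))
        return torch.softmax(pruned, dim=1)   # attention over k nodes only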

Attention-based Graph Neural Network for Semi-supervised Learning

A novel graph neural network is proposed that removes all the intermediate fully-connected layers and replaces the propagation layers with attention mechanisms that respect the structure of the graph; the approach is demonstrated to outperform competing methods on benchmark citation network datasets.

Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking

Graph2Gauss is proposed, an approach that can efficiently learn versatile node embeddings on large-scale (attributed) graphs; the embeddings show strong performance on tasks such as link prediction and node classification, and the benefits of modeling uncertainty are demonstrated.
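
Because each node is embedded as a Gaussian rather than a point, an asymmetric dissimilarity is available for the ranking objective. Below is a sketch of the closed-form KL divergence between diagonal Gaussians that Graph2Gauss-style models use to rank nodes by hop distance; the argument names (mu, sig2 for means and variances) are illustrative.

    import torch

    def kl_diag_gauss(mu_i, sig2_i, mu_j, sig2_j):
        # KL(N_i || N_j) for diagonal-covariance Gaussians, closed form
        d = mu_i.size(-1)
        return 0.5 * (
            (sig2_i / sig2_j).sum(-1)                    # trace term
            + ((mu_j - mu_i) ** 2 / sig2_j).sum(-1)      # Mahalanobis term
            - d
            + torch.log(sig2_j).sum(-1) - torch.log(sig2_i).sum(-1)
        )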

Graph Agreement Models for Semi-Supervised Learning

This work proposes Graph Agreement Models (GAM), which introduces an auxiliary model that predicts the probability of two nodes sharing the same label as a learned function of their features, and achieves state-of-the-art results on semi-supervised learning datasets.
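
A minimal PyTorch sketch of such an agreement model; the MLP architecture and the symmetrization are illustrative choices, not the paper's exact design.

    import torch

    class AgreementModel(torch.nn.Module):
        # auxiliary model: probability that two nodes share a label, as a
        # learned function of their features
        def __init__(self, d_in, d_hidden=64):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(2 * d_in, d_hidden),
                torch.nn.ReLU(),
                torch.nn.Linear(d_hidden, 1),
            )

        def forward(self, x_i, x_j):
            # average both orderings so agreement(i, j) == agreement(j, i)
            logit = self.net(torch.cat([x_i, x_j], dim=-1)) \
                  + self.net(torch.cat([x_j, x_i], dim=-1))
            return torch.sigmoid(0.5 * logit).squeeze(-1)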

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
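
A compact sketch of the biased walk in plain Python, assuming an adjacency dict of neighbor sets; the return parameter p and in-out parameter q interpolate between BFS-like and DFS-like exploration. Real implementations precompute alias tables so each step is O(1).

    import random

    def node2vec_walk(adj, start, length, p=1.0, q=1.0):
        # adj: dict node -> set of neighbor nodes
        walk = [start]
        while len(walk) < length:
            cur = walk[-1]
            nbrs = list(adj[cur])
            if not nbrs:
                break
            if len(walk) == 1:
                walk.append(random.choice(nbrs))
                continue
            prev = walk[-2]
            weights = []
            for x in nbrs:
                if x == prev:            # return to the previous node
                    weights.append(1.0 / p)
                elif x in adj[prev]:     # distance 1 from prev: BFS-like
                    weights.append(1.0)
                else:                    # distance 2 from prev: DFS-like
                    weights.append(1.0 / q)
            walk.append(random.choices(nbrs, weights=weights)[0])
        return walk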

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
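
A minimal PyTorch sketch of the mean-aggregator variant with uniform neighbor sampling; the class name and dict-based adjacency are illustrative, and production implementations batch the sampling rather than looping per node.

    import random
    import torch
    import torch.nn.functional as F

    class SAGEMeanLayer(torch.nn.Module):
        # GraphSAGE mean aggregator: concatenate each node's own features
        # with the mean of a sampled neighborhood, then project; inductive
        # because it relies only on features and local structure
        def __init__(self, d_in, d_out, num_samples=10):
            super().__init__()
            self.lin = torch.nn.Linear(2 * d_in, d_out)
            self.num_samples = num_samples

        def forward(self, x, adj):
            # x: [N, d_in]; adj: dict node index -> list of neighbor indices
            agg = torch.zeros_like(x)
            for v, nbrs in adj.items():
                if nbrs:
                    sample = random.sample(nbrs, min(self.num_samples, len(nbrs)))
                    agg[v] = x[sample].mean(dim=0)
            h = torch.cat([x, agg], dim=-1)
            return F.normalize(F.relu(self.lin(h)), dim=-1)  # L2-normalize, as in the paper
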
...