Corpus ID: 247158751

Graph Attention Retrospective

@article{Fountoulakis2022GraphAR,
  title={Graph Attention Retrospective},
  author={Kimon Fountoulakis and Amit Levi and Shenghao Yang and Aseem Baranwal and Aukosh Jagannath},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.13060}
}
Graph-based learning is a rapidly growing sub-field of machine learning with applications in social networks, citation networks, and bioinformatics. One of the most popular types of models is the graph attention network. These models were introduced to allow a node to aggregate information from the features of neighboring nodes in a non-uniform way, in contrast to simple graph convolution, which does not distinguish the neighbors of a node. In this paper, we theoretically study this expected behaviour of… 
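The contrast the abstract draws, uniform convolution versus attention-weighted aggregation, can be illustrated with a minimal sketch on a toy graph. This is not the paper's code; the scoring form and all variable names are illustrative assumptions in the spirit of GAT-style attention.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4, 3                      # 4 nodes, 3-dimensional features
X = rng.normal(size=(n, d))      # node features
A = np.array([[1, 1, 1, 0],      # adjacency with self-loops
              [1, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)

# Uniform graph convolution: every neighbor contributes equally
# (row-normalized adjacency times the feature matrix).
conv_out = (A / A.sum(axis=1, keepdims=True)) @ X

# Attention: score each edge from the endpoint features, then softmax
# over each node's neighborhood, so neighbors are weighted non-uniformly.
a_src, a_dst = rng.normal(size=d), rng.normal(size=d)
scores = X @ a_src[:, None] + (X @ a_dst)[None, :]   # s_ij = a_src·x_i + a_dst·x_j
scores = np.where(A > 0, scores, -np.inf)            # mask non-edges
alpha = np.exp(scores - scores.max(axis=1, keepdims=True))
alpha = alpha / alpha.sum(axis=1, keepdims=True)     # per-neighborhood softmax
att_out = alpha @ X

print(conv_out.shape, att_out.shape)  # both (4, 3)
```

The only difference between the two aggregations is the mixing matrix: a fixed row-normalized adjacency versus learned, feature-dependent weights `alpha` that still sum to one over each neighborhood.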
1 Citation

Understanding Non-linearity in Graph Neural Networks from the Bayesian-Inference Perspective

TLDR
This work resorts to Bayesian learning to deeply investigate the functions of non-linearity in GNNs for node classification tasks and proves that the superiority of those ReLU activations is only significant when the node attributes are far more informative than the graph structure, which nicely matches many previous empirical observations.

References

SHOWING 1-10 OF 43 REFERENCES

Attention Models in Graphs: A Survey

TLDR
This work conducts a comprehensive and focused survey of the literature on the emerging field of graph attention models and introduces three intuitive taxonomies to group existing work.

Supervised Community Detection with Line Graph Neural Networks

TLDR
This work presents a novel family of Graph Neural Networks (GNNs) for solving community detection problems in a supervised learning setting and shows that, in a data-driven manner and without access to the underlying generative models, they can match or even surpass the performance of the belief propagation algorithm on binary and multi-class stochastic block models.

Inductive Representation Learning on Large Graphs

TLDR
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.

Adaptive Universal Generalized PageRank Graph Neural Network

TLDR
This work introduces a new Generalized PageRank (GPR) GNN architecture that adaptively learns the GPR weights so as to jointly optimize node feature and topological information extraction, regardless of the extent to which the node labels are homophilic or heterophilic.
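The GPR propagation this summary describes can be sketched as a learnable weighted sum of powers of the normalized adjacency applied to hidden features, Z = Σ_k γ_k Â^k H. The sketch below is an assumed illustration of that formula, not the authors' implementation; the weights γ would be learned jointly with the network in GPR-GNN.

```python
import numpy as np

rng = np.random.default_rng(1)
n, c, K = 5, 2, 3
H = rng.normal(size=(n, c))                 # hidden node representations
A = (rng.random((n, n)) < 0.4).astype(float)
A = np.maximum(A, A.T) + np.eye(n)          # symmetric adjacency + self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt         # symmetrically normalized adjacency

gamma = rng.normal(size=K + 1)              # GPR weights; learned in the real model
Z, P = np.zeros_like(H), H
for k in range(K + 1):
    Z += gamma[k] * P                       # accumulate gamma_k * A_hat^k @ H
    P = A_hat @ P

print(Z.shape)  # (5, 2)
```

Because the γ_k are unconstrained (they may be negative or oscillate in sign), the learned filter can emphasize either low-pass (homophilic) or high-pass (heterophilic) structure, which is the adaptivity the summary refers to.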

How hard is to distinguish graphs with graph neural networks?

TLDR
This study derives hardness results for the classification variant of graph isomorphism in the message-passing model (MPNN), which encompasses the majority of graph neural networks used today and is universal when nodes are given unique features.

Understanding Attention and Generalization in Graph Neural Networks

TLDR
This work proposes an alternative recipe and trains attention in a weakly-supervised fashion that approaches the performance of supervised models and, compared to unsupervised models, improves results on several synthetic as well as real datasets.

Gated Graph Sequence Neural Networks

TLDR
This work studies feature learning techniques for graph-structured inputs and achieves state-of-the-art performance on a problem from program verification, in which subgraphs need to be matched to abstract data structures.

Graph Attention Networks

We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.

Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs

TLDR
This work identifies a set of key designs -- ego- and neighbor-embedding separation, higher-order neighborhoods, and combination of intermediate representations -- that boost learning from the graph structure under heterophily and combines them into a graph neural network, H2GCN, which is used as the base method to empirically evaluate the effectiveness of the identified designs.

A new model for learning in graph domains

TLDR
A new neural model, called the graph neural network (GNN), is proposed that is capable of directly processing graphs; it extends recursive neural networks and can be applied to most practically useful kinds of graphs, including directed, undirected, labelled and cyclic graphs.