Corpus ID: 235353052

Bag of Tricks for Node Classification with Graph Neural Networks

@inproceedings{Wang2021BagOT,
  title={Bag of Tricks for Node Classification with Graph Neural Networks},
  author={Yangkun Wang and Jiarui Jin and Weinan Zhang and Yong Yu and Zheng Zhang and David P. Wipf},
  year={2021}
}
Over the past few years, graph neural networks (GNNs) and label propagation-based methods have made significant progress in addressing node classification tasks on graphs. However, in addition to their reliance on elaborate architectures and algorithms, there are several key technical details that are frequently overlooked, and yet nonetheless can play a vital role in achieving satisfactory performance. In this paper, we first summarize a series of existing tricks-of-the-trade, and then propose…
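
The abstract contrasts GNNs with label propagation-based methods. As a point of reference, classic label propagation fits in a few lines; the following is a minimal dense-matrix sketch in PyTorch (the function name, the dense adj_norm representation, and the clamping schedule are illustrative choices, not taken from the paper):

import torch

def label_propagation(adj_norm, y_onehot, train_mask, num_iters=50, alpha=0.9):
    # adj_norm: (N, N) symmetrically normalized adjacency (dense for brevity)
    # y_onehot: (N, C) float one-hot labels, zero rows for unlabeled nodes
    z = y_onehot.clone()
    for _ in range(num_iters):
        z = alpha * (adj_norm @ z) + (1 - alpha) * y_onehot
        z[train_mask] = y_onehot[train_mask]  # clamp the known labels
    return z  # row-wise class scores; argmax gives predicted classes
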

Citations

Large-Scale Node Classification with Bootstrapping
Effectively and efficiently deploying graph neural networks (GNNs) at scale remains one of the most challenging aspects of graph representation learning. Many powerful solutions have only ever been…
Convergent Boosted Smoothing for Modeling Graph Data with Tabular Node Features
  • Jiuhai Chen, Jonas Mueller, +4 authors David Wipf
  • Computer Science
  • 2021
For supervised learning with tabular data, decision tree ensembles produced via boosting techniques generally dominate real-world applications involving iid training/test sets. However, for graph data…
Graph Attention Multi-Layer Perceptron
Following the routine of decoupled GNNs, the feature propagation in GAMLP is executed during pre-computation, which helps it maintain high scalability and efficiency.
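
The pre-computation step that decoupled models like GAMLP rely on can be sketched as below, assuming a normalized adjacency matrix; the hop-wise attention that GAMLP adds on top is omitted here, and the dense representation is for brevity only:

import torch

def precompute_hops(adj_norm, x, num_hops=5):
    # Propagate features once, before training; the per-epoch model is
    # then a plain MLP over these fixed multi-hop features, so no graph
    # operations are needed during training.
    feats = [x]
    for _ in range(num_hops):
        feats.append(adj_norm @ feats[-1])
    return torch.cat(feats, dim=1)  # (N, (num_hops + 1) * d)
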
Large-scale graph representation learning with very deep GNNs and self-supervision
This work entered the OGB-LSC with two large-scale GNNs: a deep transductive node classifier powered by bootstrapping, and a very deep (up to 50-layer) inductive graph regressor regularised by denoising objectives, achieving award-level performance on both the MAG240M and PCQM4M benchmarks.

References

Showing 1-10 of 35 references
Combining Label Propagation and Simple Models Out-performs Graph Neural Networks
This work shows that, for many standard transductive node classification benchmarks, one can exceed or match the performance of state-of-the-art GNNs by combining shallow models that ignore the graph structure with two simple post-processing steps that exploit correlation in the label structure.
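
The two post-processing steps summarized above (the "Correct and Smooth" method) are both label-propagation passes. This is a minimal sketch under the same dense-adjacency assumption as above; it omits the residual scaling used in the published method, and the names are illustrative:

import torch

def correct_and_smooth(adj_norm, base_pred, y_onehot, train_mask,
                       num_iters=50, alpha=0.8):
    # 'Correct': diffuse the base model's residual errors on training nodes.
    resid = torch.zeros_like(base_pred)
    resid[train_mask] = y_onehot[train_mask] - base_pred[train_mask]
    e = resid.clone()
    for _ in range(num_iters):
        e = alpha * (adj_norm @ e) + (1 - alpha) * resid
    z = base_pred + e
    # 'Smooth': diffuse the corrected predictions, clamping known labels.
    z[train_mask] = y_onehot[train_mask]
    g = z.clone()
    for _ in range(num_iters):
        g = alpha * (adj_norm @ g) + (1 - alpha) * z
    return g
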
Predict then Propagate: Graph Neural Networks meet Personalized PageRank
This paper uses the relationship between graph convolutional networks (GCN) and PageRank to derive an improved propagation scheme based on personalized PageRank, and constructs a simple model, personalized propagation of neural predictions (PPNP), along with its fast approximation, APPNP.
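
The APPNP propagation reduces to a short power iteration over the network's predictions; a minimal dense-matrix sketch (variable names are mine, not from the paper):

import torch

def appnp_propagate(adj_norm, h, num_iters=10, alpha=0.1):
    # h: (N, C) predictions from any neural network, e.g. an MLP;
    # propagation is decoupled from the network itself.
    z = h
    for _ in range(num_iters):
        z = (1 - alpha) * (adj_norm @ z) + alpha * h  # personalized PageRank step
    return z
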
OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs
It is shown that expressive models significantly outperform simple scalable baselines, indicating an opportunity for dedicated efforts to further improve graph ML at scale.
Graph Convolutional Neural Networks for Web-Scale Recommender Systems
The authors develop a novel method based on highly efficient random walks to structure the convolutions, together with a training strategy that relies on harder-and-harder training examples to improve the robustness and convergence of the model.
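
The random-walk idea here (from PinSage) is to define a node's convolution neighborhood by visit counts of short random walks rather than by raw adjacency. A sketch, with all parameter values and names being illustrative assumptions:

import random
from collections import Counter

def rw_neighborhood(neighbors, node, walk_len=2, num_walks=200, top_k=10):
    # neighbors: dict mapping node -> list of adjacent nodes
    counts = Counter()
    for _ in range(num_walks):
        cur = node
        for _ in range(walk_len):
            if not neighbors[cur]:
                break
            cur = random.choice(neighbors[cur])
            counts[cur] += 1
    top = counts.most_common(top_k)
    total = sum(c for _, c in top) or 1
    return [(v, c / total) for v, c in top]  # neighbors with importance weights
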
Adaptive Graph Diffusion Networks with Hop-wise Attention
This work proposes Adaptive Graph Diffusion Networks with Hop-wise Attention (AGDNs-HA), which stack multi-hop neighborhood aggregations of different orders into a single layer with the help of hop-wise attention that is learnable and adaptive for each node.
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
DropEdge is a general technique that can be combined with many backbone models (e.g., GCN, ResGCN, GraphSAGE, and JKNet) and consistently improves performance on a variety of both shallow and deep GCNs.
FLAG: Adversarial Data Augmentation for Graph Neural Networks
This work proposes a simple but effective solution, FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training and boosts performance at test time.
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but which also brings potential concerns of over-smoothing with many convolutional layers.
Inductive Representation Learning on Large Graphs
GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data; it outperforms strong baselines on three inductive node-classification benchmarks.
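
GraphSAGE's sample-and-aggregate step with the mean aggregator can be sketched as a single layer: each node concatenates its own features with the mean of a fixed-size neighbor sample before a linear transform. A minimal sketch (batching and neighbor sampling are left to the caller):

import torch
import torch.nn as nn

class SageMeanLayer(nn.Module):
    # h_v' = ReLU(W [h_v ; mean(h_u for u in sampled N(v))])
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, h, sampled_neighbors):
        # sampled_neighbors: one index tensor per node (fixed-size samples)
        agg = torch.stack([h[idx].mean(dim=0) for idx in sampled_neighbors])
        return torch.relu(self.lin(torch.cat([h, agg], dim=1)))
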
node2vec: Scalable Feature Learning for Networks
In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
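
The biased walk is a second-order random walk controlled by the return parameter p and the in-out parameter q. One transition step can be sketched as below (neighbors is assumed to be a dict of adjacency lists; the paper additionally precomputes alias tables for efficiency):

import random

def node2vec_step(neighbors, prev, cur, p=1.0, q=1.0):
    candidates = neighbors[cur]
    weights = []
    for nxt in candidates:
        if nxt == prev:
            weights.append(1.0 / p)   # distance 0 from prev: return
        elif nxt in neighbors[prev]:
            weights.append(1.0)       # distance 1: stay close (BFS-like)
        else:
            weights.append(1.0 / q)   # distance 2: explore outward (DFS-like)
    return random.choices(candidates, weights=weights, k=1)[0]
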