Corpus ID: 235421912

Training Graph Neural Networks with 1000 Layers

@inproceedings{Li2021TrainingGN,
  title={Training Graph Neural Networks with 1000 Layers},
  author={Guohao Li and Matthias M{\"u}ller and Bernard Ghanem and Vladlen Koltun},
  booktitle={ICML},
  year={2021}
}
Deep graph neural networks (GNNs) have achieved excellent results on various tasks on increasingly large graph datasets with millions of nodes and edges. However, memory complexity has become a major obstacle when training deep GNNs for practical applications due to the immense number of nodes, edges, and intermediate activations. To improve the scalability of GNNs, prior works propose smart graph sampling or partitioning strategies to train GNNs with a smaller set of nodes or sub-graphs. In…
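
The snippet cuts off before the method. For orientation, the paper's approach builds on reversible connections, which reconstruct each layer's inputs from its outputs so that activations need not be stored during backpropagation; memory then grows with width rather than depth. Below is a minimal PyTorch sketch of a generic reversible residual block, not the authors' implementation:

```python
import torch
import torch.nn as nn

class ReversibleBlock(nn.Module):
    """Minimal reversible residual block (sketch, not the paper's code).

    The input is split channel-wise into (x1, x2):
        y1 = x1 + F(x2)
        y2 = x2 + G(y1)
    Inversion recovers (x1, x2) from (y1, y2), so a training loop can
    recompute activations in the backward pass instead of storing them.
    """

    def __init__(self, f: nn.Module, g: nn.Module):
        super().__init__()
        self.f, self.g = f, g

    def forward(self, x1, x2):
        y1 = x1 + self.f(x2)
        y2 = x2 + self.g(y1)
        return y1, y2

    def inverse(self, y1, y2):
        x2 = y2 - self.g(y1)
        x1 = y1 - self.f(x2)
        return x1, x2

# Quick check: the inverse recovers the inputs exactly.
blk = ReversibleBlock(nn.Linear(8, 8), nn.Linear(8, 8))
x1, x2 = torch.randn(4, 8), torch.randn(4, 8)
with torch.no_grad():
    y1, y2 = blk(x1, x2)
    r1, r2 = blk.inverse(y1, y2)
assert torch.allclose(r1, x1) and torch.allclose(r2, x2)
```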
2 Citations


Evaluating Deep Graph Neural Networks
TLDR
Conducts the first systematic experimental evaluation of deep GNNs, presenting the fundamental limitations of shallow architectures, and proposes the Deep Graph Multi-Layer Perceptron (DGMLP), a powerful approach that helps guide deep GNN designs.
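
For context, one family of deep-GNN designs that DGMLP relates to decouples parameter-free feature propagation from a learned transformation. The sketch below illustrates that propagate-then-MLP recipe; the specific hop-weighting scheme is an illustrative assumption, not necessarily DGMLP's exact design:

```python
import torch
import torch.nn as nn

def smoothed_features(adj_norm: torch.Tensor, x: torch.Tensor, hops: int):
    """Precompute multi-hop smoothed features A^k X for k = 0..hops.

    adj_norm: dense, symmetrically normalized adjacency (illustrative;
    large graphs would use sparse ops). Propagation runs once, up front,
    with no learnable parameters -- "deep propagation, shallow
    transformation".
    """
    feats = [x]
    for _ in range(hops):
        feats.append(adj_norm @ feats[-1])
    return feats

class DecoupledGNN(nn.Module):
    """Learnable combination of the precomputed hops, then an MLP."""

    def __init__(self, in_dim, hidden, out_dim, hops):
        super().__init__()
        self.hop_weights = nn.Parameter(torch.zeros(hops + 1))
        self.mlp = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))

    def forward(self, feats):
        w = torch.softmax(self.hop_weights, dim=0)
        x = sum(wi * f for wi, f in zip(w, feats))
        return self.mlp(x)
```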
Bridging the Gap between Spatial and Spectral Domains: A Unified Framework for Graph Neural Networks
  • Zhiqian Chen, Fanglan Chen, +7 authors Chang-Tien Lu
  • Computer Science
  • 2021

References

SHOWING 1-10 OF 78 REFERENCES
Scaling Graph Neural Networks with Approximate PageRank
TLDR
The PPRGo model is presented, which utilizes an efficient approximation of information diffusion in GNNs, resulting in significant speed gains while maintaining state-of-the-art prediction performance; the paper also demonstrates the practical application of PPRGo to large-scale node classification problems at Google.
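
The information-diffusion approximation in PPRGo is built on local, push-style computation of personalized PageRank (PPR). Below is a sketch of one common push variant (in the spirit of Andersen et al.'s local push, not PPRGo's actual implementation); `neighbors`, `alpha`, and `eps` are illustrative names:

```python
from collections import defaultdict

def approx_ppr(neighbors, source, alpha=0.15, eps=1e-4):
    """Push-based approximate personalized PageRank (sketch).

    neighbors: dict mapping every node -> list of its neighbors.
    Returns a sparse dict of PPR scores for `source`: only nodes the
    push actually touches are materialized, which is what makes the
    scheme scale to large graphs.
    """
    p = defaultdict(float)   # approximate PPR vector
    r = defaultdict(float)   # residual mass still to be pushed
    r[source] = 1.0
    queue = [source]
    while queue:
        u = queue.pop()
        deg = len(neighbors[u])
        if deg == 0 or r[u] < eps * deg:
            continue         # residual too small to push further
        mass, r[u] = r[u], 0.0
        p[u] += alpha * mass
        share = (1.0 - alpha) * mass / deg
        for v in neighbors[u]:
            r[v] += share
            if r[v] >= eps * len(neighbors[v]):
                queue.append(v)
    return dict(p)
```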
How Powerful are Graph Neural Networks?
TLDR
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture (the Graph Isomorphism Network, GIN) that is provably the most expressive among this class of GNNs.
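
The GIN update is simple to state: each node applies an MLP to (1 + eps) times its own feature plus the sum of its neighbors' features; sum aggregation (rather than mean or max) is what matches the discriminative power of the WL test. A minimal dense-adjacency sketch, not the authors' code:

```python
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN layer (dense sketch):

        h_v' = MLP( (1 + eps) * h_v + sum_{u in N(v)} h_u )
    """

    def __init__(self, dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.mlp = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))

    def forward(self, h, adj):
        # adj: (N, N) unnormalized 0/1 adjacency; adj @ h computes the
        # neighbor sum for every node at once.
        return self.mlp((1 + self.eps) * h + adj @ h)
```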
Pitfalls of Graph Neural Network Evaluation
TLDR
This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures can outperform more sophisticated ones if the hyperparameters and training procedure are tuned fairly for all models.
DeeperGCN: All You Need to Train Deeper GCNs
TLDR
Extensive experiments on the Open Graph Benchmark show that DeeperGCN significantly boosts performance over the state of the art on large-scale graph learning tasks of node property prediction and graph property prediction.
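
A central DeeperGCN ingredient is the pre-activation residual ordering ("res+"). A sketch follows, with a plain normalized-adjacency convolution standing in for the paper's generalized aggregation functions:

```python
import torch
import torch.nn as nn

class PreActResBlock(nn.Module):
    """DeeperGCN-style "res+" pre-activation residual block (sketch):

        x_{l+1} = x_l + GraphConv( ReLU( Norm( x_l ) ) )

    Applying Norm and ReLU *before* the graph convolution is the
    ordering the paper identifies as key to stacking many layers.
    """

    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.lin = nn.Linear(dim, dim)

    def forward(self, h, adj_norm):
        z = self.lin(torch.relu(self.norm(h)))
        return h + adj_norm @ z   # simple A @ (X W) stand-in conv
```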
Simple and Deep Graph Convolutional Networks
TLDR
The GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, initial residual and identity mapping, which together effectively relieve the problem of over-smoothing.
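
The two techniques have a compact update rule, which the sketch below transcribes; variable names are illustrative:

```python
import torch

def gcnii_layer(h, h0, p_tilde, weight, alpha, beta):
    """One GCNII layer (sketch of the paper's update rule):

        H^{l+1} = sigma( ((1-alpha) * P~ H^l + alpha * H^0)
                         @ ((1-beta_l) * I + beta_l * W^l) )

    `alpha` mixes in the initial representation H^0 (initial residual);
    `beta` shrinks W toward the identity (identity mapping). Both limit
    the damage of stacking many layers, relieving over-smoothing.
    """
    d = weight.shape[0]
    support = (1 - alpha) * (p_tilde @ h) + alpha * h0
    return torch.relu(support @ ((1 - beta) * torch.eye(d) + beta * weight))
```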
Large-Scale Learnable Graph Convolutional Networks
TLDR
The proposed LGCL automatically selects a fixed number of neighboring nodes for each feature based on value ranking, transforming graph data into grid-like structures in 1-D format and thereby enabling the use of regular convolutional operations on generic graphs.
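
A sketch of the k-largest selection step, assuming a padded fixed-size neighbor index tensor for simplicity (the paper handles variable-degree graphs more carefully):

```python
import torch

def lgcl_grid(h, neighbor_idx, k):
    """LGCL's k-largest neighbor selection (sketch).

    h: (N, d) node features.
    neighbor_idx: (N, m) indices of up to m >= k neighbors per node
                  (padded entries assumed to point at an all-zero row).
    Returns: (N, k+1, d) grid -- each node's own features plus,
    independently for each feature dimension, the k largest neighbor
    values. The grid is regular, so ordinary 1-D convolutions apply.
    """
    neigh = h[neighbor_idx]            # (N, m, d)
    topk, _ = neigh.topk(k, dim=1)     # (N, k, d), ranked per dimension
    return torch.cat([h.unsqueeze(1), topk], dim=1)
```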
DropEdge: Towards Deep Graph Convolutional Networks on Node Classification
TLDR
DropEdge is a general technique that can be combined with many backbone models (e.g., GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance, and it consistently improves performance across a variety of both shallow and deep GCNs.
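
The mechanism itself is a one-liner over the edge list; a minimal sketch, with `edge_index` in the common (2, E) layout:

```python
import torch

def drop_edge(edge_index, drop_rate):
    """DropEdge (sketch): remove a random fraction of edges before each
    training epoch. Acts like data augmentation and thins out message
    passing, which the paper shows alleviates over-smoothing.
    """
    num_edges = edge_index.shape[1]
    keep = torch.rand(num_edges) >= drop_rate
    return edge_index[:, keep]
```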
Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks
TLDR
Cluster-GCN is proposed, a novel GCN algorithm suitable for SGD-based training that exploits the graph clustering structure; it allows training much deeper GCNs without much time and memory overhead, leading to improved prediction accuracy.
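
A sketch of the batching idea follows. The paper partitions the graph with METIS; a random partition stands in here purely for illustration:

```python
import torch

def cluster_gcn_batches(adj, num_clusters, seed=0):
    """Cluster-GCN batching (sketch). Each batch is the subgraph
    induced by one cluster, so memory per SGD step depends on the
    cluster size rather than on the full graph.
    """
    gen = torch.Generator().manual_seed(seed)
    n = adj.shape[0]
    assign = torch.randint(num_clusters, (n,), generator=gen)
    for c in range(num_clusters):
        nodes = (assign == c).nonzero(as_tuple=True)[0]
        sub_adj = adj[nodes][:, nodes]   # induced subgraph adjacency
        yield nodes, sub_adj
```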
Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth
TLDR
This work analyzes linearized GNNs and proves that despite the non-convexity of training, convergence to a global minimum at a linear rate is guaranteed under mild assumptions that are validated on real-world graphs.
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
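
A minimal single-head, dense sketch of the attention rule (not the reference implementation; `adj` is assumed to include self-loops so every row has at least one edge):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer (dense sketch):

        e_ij   = LeakyReLU( a^T [W h_i || W h_j] )
        alpha  = softmax of e_ij over j in N(i)   (masked by adjacency)
        h_i'   = sum_j alpha_ij * W h_j
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a_src = nn.Linear(out_dim, 1, bias=False)
        self.a_dst = nn.Linear(out_dim, 1, bias=False)

    def forward(self, h, adj):
        z = self.W(h)                                  # (N, d')
        # a^T [z_i || z_j] decomposes into a_src(z_i) + a_dst(z_j),
        # so all pairwise scores come from one broadcast add.
        e = F.leaky_relu(self.a_src(z) + self.a_dst(z).T, 0.2)  # (N, N)
        e = e.masked_fill(adj == 0, float('-inf'))     # mask non-edges
        attn = torch.softmax(e, dim=1)
        return attn @ z
```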