Corpus ID: 236134089

Large-scale graph representation learning with very deep GNNs and self-supervision

@article{Addanki2021LargescaleGR,
  title={Large-scale graph representation learning with very deep GNNs and self-supervision},
  author={Ravichandra Addanki and Peter W. Battaglia and David Budden and Andreea Deac and Jonathan Godwin and Thomas Keck and Wai Lok Sibon Li and Alvaro Sanchez-Gonzalez and Jacklynn Stott and Shantanu Thakoor and Petar Veličković},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.09422}
}
Effectively and efficiently deploying graph neural networks (GNNs) at scale remains one of the most challenging aspects of graph representation learning. Many powerful solutions have only ever been validated on comparatively small datasets, often with counter-intuitive outcomes—a barrier which has been broken by the Open Graph Benchmark Large-Scale Challenge (OGB-LSC). We entered the OGB-LSC with two large-scale GNNs: a deep transductive node classifier powered by bootstrapping, and a very deep… 


Large-Scale Representation Learning on Graphs via Bootstrapping

Bootstrapped Graph Latents (BGRL) is introduced, a graph representation learning method that learns by predicting alternative augmentations of the input; it is scalable by design and can be scaled up to extremely large graphs with hundreds of millions of nodes in the semi-supervised regime.
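
The bootstrapping idea is compact enough to sketch in code. Below is a minimal, illustrative BGRL-style objective in PyTorch, assuming dense normalized adjacency matrices adj1/adj2 and feature matrices x1/x2 for two stochastic augmentations of the same graph; all names and dimensions are illustrative, not the paper's implementation.

    import copy
    import torch
    import torch.nn.functional as F

    class OneLayerGCN(torch.nn.Module):
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = torch.nn.Linear(in_dim, out_dim)

        def forward(self, adj, x):
            return F.relu(self.lin(adj @ x))  # propagate, then transform

    online = OneLayerGCN(16, 32)              # updated by gradients
    target = copy.deepcopy(online)            # updated only by EMA, no gradients
    for p in target.parameters():
        p.requires_grad_(False)
    predictor = torch.nn.Linear(32, 32)       # predicts the target's embeddings

    def bgrl_loss(adj1, x1, adj2, x2):
        h_online = predictor(online(adj1, x1))
        with torch.no_grad():
            h_target = target(adj2, x2)
        # negative cosine similarity per node; no negative samples are needed
        return -F.cosine_similarity(h_online, h_target, dim=-1).mean()

    def ema_update(tau=0.99):
        for p_t, p_o in zip(target.parameters(), online.parameters()):
            p_t.data.mul_(tau).add_(p_o.data, alpha=1.0 - tau)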

On Representation Knowledge Distillation for Graph Neural Networks

Experiments show that G-CRD consistently boosts the performance and robustness of lightweight GNNs, outperforming LSP (and a global structure preserving (GSP) variant of LSP) as well as baselines from 2-D computer vision.

OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs

It is shown that expressive models significantly outperform simple scalable baselines, indicating an opportunity for dedicated efforts to further improve graph ML at scale.

Ordered Subgraph Aggregation Networks

It is shown that increasing subgraph size always increases the expressive power and a better understanding of their limitations is developed by relating them to the established k-WL hierarchy.

Graph Neural Network Training with Data Tiering

The data tiering method utilizes not only the structure of the input graph but also insights gained from the actual GNN training process to achieve higher prediction accuracy, and a new data placement and access strategy further minimizes the CPU-GPU communication overhead.

Affinity-Aware Graph Networks

This paper explores the use of affinity measures as features in graph neural networks, in particular measures arising from random walks, including effective resistance, hitting and commute times, and proposes message passing networks based on these features.
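
As a concrete example of one such affinity measure, the sketch below computes effective resistance from the pseudoinverse of the graph Laplacian; the adjacency matrix A is a toy example, and the dense pseudoinverse limits this approach to small graphs.

    import numpy as np

    # Toy 4-node graph; dense pinv is O(n^3), so this is illustration only.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A     # combinatorial Laplacian
    L_pinv = np.linalg.pinv(L)         # pseudoinverse handles the zero eigenvalue

    def effective_resistance(u, v):
        # R_uv = L+_uu + L+_vv - 2 L+_uv; commute time is vol(G) * R_uv
        return L_pinv[u, u] + L_pinv[v, v] - 2.0 * L_pinv[u, v]

    print(effective_resistance(0, 3))  # candidate pair feature for a GNN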

Extreme Acceleration of Graph Neural Network-based Prediction Models for Quantum Chemistry

It is demonstrated that such a co-design approach can reduce the training time of such molecular property prediction models from days to less than two hours, opening new possibilities for AI-driven scientific discovery.

Unified 2D and 3D Pre-Training of Molecular Representations

This work proposes a new representation learning method based on unified 2D and 3D pre-training; it achieves state-of-the-art results on 10 tasks, with an average improvement of 8.3% on 2D-only tasks.

MDGNN: Metapath-Based Decoupled Graph Neural Network for MAG240M-LSC

This work proposes a metapath-based decoupled graph neural network (MDGNN), which incorporates more effective features to obtain better results and ultimately achieves award-level performance on the MAG240M track of OGB-LSC 2022.

Molecule3D: A Benchmark for Predicting 3D Geometries from Molecular Graphs

This work proposes to predict ground-state 3D geometries, as computed with density functional theory (DFT), from molecular graphs using machine learning methods, and implements two baseline methods that predict either the pairwise distances between atoms or the atom coordinates in 3D space.

References

SHOWING 1-10 OF 65 REFERENCES

OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs

It is shown that expressive models significantly outperform simple scalable baselines, indicating an opportunity for dedicated efforts to further improve graph ML at scale.

Bag of Tricks for Node Classification with Graph Neural Networks

This paper proposes several new tricks-of-the-trade related to label usage, loss function formulation, and model design that can significantly improve various GNN architectures, and it empirically evaluates their impact on final node classification accuracy.
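
One label-usage trick in this vein feeds a random subset of the training labels to the model as extra input features. A minimal sketch, assuming a feature matrix X, integer labels y, and a boolean train_mask; all names are illustrative.

    import numpy as np

    def label_augment(X, y, train_mask, num_classes, rng):
        # Randomly reveal about half of the training labels as input features.
        use_as_input = train_mask & (rng.random(len(y)) < 0.5)
        onehot = np.zeros((len(y), num_classes))
        onehot[use_as_input, y[use_as_input]] = 1.0
        X_aug = np.concatenate([X, onehot], axis=1)
        predict_mask = train_mask & ~use_as_input  # supervise on the held-out half
        return X_aug, predict_mask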

DropEdge: Towards Deep Graph Convolutional Networks on Node Classification

DropEdge is a general technique that can be combined with many backbone models (e.g. GCN, ResGCN, GraphSAGE, and JKNet) for enhanced performance, and it consistently improves accuracy across a variety of both shallow and deep GCNs.
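
The mechanism is a per-epoch random edge mask. A minimal sketch over a COO edge list, assuming edge_index is a (2, E) integer array; names are illustrative.

    import numpy as np

    def drop_edge(edge_index, drop_rate, rng):
        # Keep a fresh random subset of edges each epoch; the sparser graph is
        # then fed to the GCN, acting like data augmentation on the topology.
        keep = rng.random(edge_index.shape[1]) >= drop_rate
        return edge_index[:, keep]

    rng = np.random.default_rng(0)
    edge_index = np.array([[0, 1, 2, 2], [1, 2, 0, 3]])   # toy (2, E) edge list
    print(drop_edge(edge_index, drop_rate=0.5, rng=rng))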

Pitfalls of Graph Neural Network Evaluation

This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures are able to outperform the more sophisticated ones if the hyperparameters and the training procedure are tuned fairly for all models.

FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling

Enhanced with importance sampling, FastGCN is not only efficient to train but also generalizes well at inference time, being orders of magnitude more efficient while its predictions remain comparably accurate.
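
The core of the method is layerwise importance sampling of nodes. A minimal sketch, assuming a dense normalized adjacency A_hat; following the paper, nodes are drawn with probability proportional to the squared norm of their adjacency column, and the estimate is rescaled to stay unbiased. Names are illustrative.

    import numpy as np

    def sample_layer(A_hat, num_samples, rng):
        # Importance distribution q(u) proportional to ||A_hat[:, u]||^2.
        q = np.linalg.norm(A_hat, axis=0) ** 2
        q = q / q.sum()
        idx = rng.choice(len(q), size=num_samples, replace=True, p=q)
        scale = 1.0 / (num_samples * q[idx])  # rescale for an unbiased estimate
        return idx, scale

    def approx_propagate(A_hat, H, idx, scale):
        # Monte Carlo estimate of the full propagation A_hat @ H using only
        # the sampled columns (and the matching rows of H).
        return (A_hat[:, idx] * scale) @ H[idx]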

SIGN: Scalable Inception Graph Neural Networks

This paper proposes a new, efficient and scalable graph deep learning architecture which sidesteps the need for graph sampling by using graph convolutional filters of different sizes that are amenable to efficient precomputation, allowing extremely fast training and inference.
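
The precomputation step can be sketched in a few lines: diffused features A_hat^k X are computed once up front, after which training reduces to a plain MLP over the concatenated result. This assumes a dense normalized adjacency A_hat; names are illustrative.

    import numpy as np

    def sign_features(A_hat, X, num_hops):
        feats, cur = [X], X
        for _ in range(num_hops):
            cur = A_hat @ cur             # one extra hop of diffusion per power
            feats.append(cur)
        # (n, (num_hops + 1) * d): ready for a sampling-free MLP classifier
        return np.concatenate(feats, axis=1)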

Graph Convolutional Neural Networks for Web-Scale Recommender Systems

A novel method based on highly efficient random walks to structure the convolutions and a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model are developed.
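
The random-walk construction can be sketched as follows: short walks are simulated from each node, and the most frequently visited nodes form its importance-based neighborhood, which the convolutions then aggregate over. This is an illustrative simplification, assuming an adjacency-list graph mapping each node to its neighbors.

    import collections
    import random

    def walk_neighborhood(graph, start, num_walks=50, walk_len=3, top_k=10):
        # Count visits over short random walks; the top-visited nodes become
        # the neighborhood that the convolution aggregates over.
        visits = collections.Counter()
        for _ in range(num_walks):
            node = start
            for _ in range(walk_len):
                if not graph[node]:
                    break
                node = random.choice(graph[node])
                visits[node] += 1
        return [n for n, _ in visits.most_common(top_k)]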

Deep Graph Contrastive Representation Learning

This paper proposes a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level, and generates two graph views by corruption and learns node representations by maximizing the agreement of node representations in these two views.
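
A simplified version of such a node-level contrastive objective is sketched below, assuming z1/z2 are (N, d) embeddings of the same nodes under the two corrupted views; the same node across views is the positive pair and all other nodes act as negatives. The paper's full loss also includes intra-view negatives, omitted here for brevity.

    import torch
    import torch.nn.functional as F

    def contrastive_loss(z1, z2, tau=0.5):
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / tau            # (N, N) cross-view similarities
        labels = torch.arange(z1.size(0))     # positives lie on the diagonal
        return F.cross_entropy(logits, labels)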

Bootstrapped Representation Learning on Graphs

This work presents Bootstrapped Graph Latents (BGRL), a self-supervised graph representation method that outperforms or matches the previous unsupervised state-of-the-art results on several established benchmark datasets and enables the effective usage of graph attentional (GAT) encoders, allowing further improvements to the state of the art.

GraphSAINT: Graph Sampling Based Inductive Learning Method

GraphSAINT is proposed, a graph sampling based inductive learning method that improves training efficiency in a fundamentally different way by decoupling the sampling process from the forward and backward propagation of training; it is further extended with other graph samplers and GCN variants.
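
At its simplest, training proceeds on induced subgraphs rather than on the full graph. A minimal sketch with a uniform node sampler, assuming a dense adjacency A, features X, and labels y; the paper's bias-correcting normalization coefficients are omitted, and all names are illustrative.

    import numpy as np

    def sample_subgraph(A, X, y, budget, rng):
        nodes = rng.choice(A.shape[0], size=budget, replace=False)
        # Induced subgraph: each training step runs a full GCN on this block.
        return A[np.ix_(nodes, nodes)], X[nodes], y[nodes]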
...