• Corpus ID: 249494457

# Deeper-GXX: Deepening Arbitrary GNNs

@inproceedings{Zheng2021DeeperGXXDA,
  title={Deeper-GXX: Deepening Arbitrary GNNs},
  author={Lecheng Zheng and Dongqi Fu and Ross Maciejewski and Jingrui He},
  year={2021}
}
• Published 26 October 2021
• Computer Science
Graph neural networks (GNNs) have proven successful at modeling graph data. However, shallow GNNs tend to have sub-optimal performance, e.g., when dealing with large graphs with missing features. It is therefore necessary to increase the number of layers of GNNs to capture more latent knowledge of the input data. Nevertheless, stacking more layers in GNNs typically decreases their performance due to, e.g., vanishing gradients and oversmoothing. Existing deep GNN solutions mainly focus on addressing…
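The truncated abstract names oversmoothing as one reason deeper GNNs degrade. As a minimal illustration (not from the paper), the NumPy sketch below repeatedly applies GCN-style symmetric-normalized propagation on a toy cycle graph and shows node embeddings becoming nearly parallel as depth grows; the graph, feature size, and depth schedule are arbitrary choices for the demo.

```python
import numpy as np

n = 6
A = np.zeros((n, n))
for i in range(n):                          # 6-node cycle graph
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
A_hat = A + np.eye(n)                       # add self-loops
d = A_hat.sum(1)
S = A_hat / np.sqrt(np.outer(d, d))         # D^-1/2 (A + I) D^-1/2

rng = np.random.default_rng(0)
H = rng.standard_normal((n, 4))             # random node features

def mean_pairwise_cosine(M):
    Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
    C = Mn @ Mn.T
    return C[np.triu_indices(len(M), 1)].mean()

for layer in range(1, 33):
    H = S @ H                               # one propagation step (weights/nonlinearity omitted)
    if layer in (1, 4, 16, 32):
        print(f"{layer:2d} layers: mean pairwise cosine = {mean_pairwise_cosine(H):.4f}")
# the similarity climbs toward 1.0: deep stacks make node embeddings indistinguishable
```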
## Citations

• IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
This work presents the first fair and reproducible benchmark dedicated to assessing the tricks of training deep GNNs, and demonstrates that an organic combination of initial connection, identity mapping, and group and batch normalization attains new state-of-the-art results for deep GNNs on large datasets.
• CIKM, 2022
An end-to-end model named MentorGNN is proposed that aims to supervise the pre-training process of GNNs across graphs with diverse structures and disparate feature spaces; it also sheds new light on the problem of domain adaptation on relational data by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
• Proceedings of the 31st ACM International Conference on Information & Knowledge Management, 2022
A novel multiplex heterogeneous graph prototypical contrastive learning (X-GOAL) framework to extract node embeddings is proposed, comprising two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models different layers by aligning layer-specific node embeddings.
• WWW, 2022
This survey provides an in-depth review of GAL techniques at the macro (graph), meso (subgraph), and micro (node/edge) levels, and shares insights on several open issues of GAL, including heterogeneity, spatio-temporal dynamics, scalability, and generalization.
• KDD, 2022
A unified heterogeneous learning framework is proposed, which combines both the weighted unsupervised contrastive loss and the weighted supervised contrastive loss to model multiple types of heterogeneity.
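As a generic sketch of that recipe (not the cited paper's actual formulation), the snippet below weights a simple unsupervised contrastive term against a supervised one; the loss variants, temperature, and the 0.7/0.3 weights are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Unsupervised contrastive loss: matched rows of z1/z2 are positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(len(z1)))

def sup_con(z, y, tau=0.5):
    """Supervised contrastive loss: same-label samples are positives."""
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    self_mask = torch.eye(len(z), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf"))      # exclude self-pairs
    pos = (y.unsqueeze(0) == y.unsqueeze(1)) & ~self_mask
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    return -(log_prob[pos]).mean()

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)          # two views of 8 samples
y = torch.randint(0, 3, (8,))                            # labels for the supervised term
loss = 0.7 * nt_xent(z1, z2) + 0.3 * sup_con(z1, y)      # hypothetical weighting
```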
• Frontiers in Big Data, 2022
The definitions of natural dynamics and artificial dynamics in graphs are introduced, and the related works on how natural and artificial dynamics boost various graph research topics are discussed.

## References

Showing 1-10 of 44 references.

• ICLR, 2020
PairNorm is a novel normalization layer that is based on a careful analysis of the graph convolution operator, which prevents all node embeddings from becoming too similar and significantly boosts performance for a new problem setting that benefits from deeper GNNs.
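A minimal sketch of the PairNorm step as described above, assuming dense node features of shape (num_nodes, dim); `s` is the scale hyperparameter, and its placement between GNN layers is omitted.

```python
import torch

def pair_norm(x: torch.Tensor, s: float = 1.0) -> torch.Tensor:
    x = x - x.mean(dim=0, keepdim=True)               # center: remove the feature-wise mean
    # rescale so the mean squared row norm is s^2, keeping the total pairwise
    # distance between node embeddings roughly constant across layers
    return s * x / x.pow(2).sum(dim=1).mean().sqrt()

h = torch.randn(5, 8)
print(pair_norm(h).pow(2).sum(dim=1).mean())           # ≈ 1.0 for s = 1
```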
• ICLR, 2020
DropEdge is a general technique that can be combined with many backbone models (e.g., GCN, ResGCN, GraphSAGE, and JKNet), and it consistently improves the performance of a variety of both shallow and deep GCNs.
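A minimal sketch of the DropEdge operation, assuming edges stored as a COO `edge_index` of shape (2, E); the keep mask would be resampled at every training epoch, and symmetric dropping of undirected edge pairs is omitted for brevity.

```python
import torch

def drop_edge(edge_index: torch.Tensor, p: float = 0.2) -> torch.Tensor:
    """Keep each edge independently with probability 1 - p."""
    keep = torch.rand(edge_index.size(1)) >= p
    return edge_index[:, keep]

edge_index = torch.tensor([[0, 1, 1, 2, 3],
                           [1, 0, 2, 1, 0]])
print(drop_edge(edge_index, p=0.4))    # resample at each training step
```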
• NeurIPS, 2020
The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, the GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
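A rough sketch of two GraphCL-style augmentations (node dropping and edge perturbation); the NT-Xent contrastive objective that pulls the two augmented views together is omitted, and the drop ratios are illustrative.

```python
import torch

def drop_nodes(x, edge_index, p=0.1):
    """Remove a random fraction p of nodes and all edges touching them."""
    n = x.size(0)
    keep = torch.rand(n) >= p
    idx = torch.full((n,), -1, dtype=torch.long)
    idx[keep] = torch.arange(int(keep.sum()))          # old id -> new id
    mask = keep[edge_index[0]] & keep[edge_index[1]]   # keep edges with both endpoints
    return x[keep], idx[edge_index[:, mask]]

def perturb_edges(edge_index, num_nodes, p=0.1):
    """Drop a fraction p of edges and add the same number of random ones."""
    E = edge_index.size(1)
    keep = torch.rand(E) >= p
    n_new = E - int(keep.sum())
    new = torch.randint(0, num_nodes, (2, n_new))
    return torch.cat([edge_index[:, keep], new], dim=1)

x = torch.randn(6, 4)
edge_index = torch.tensor([[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]])
view1 = drop_nodes(x, edge_index, p=0.2)
view2 = (x, perturb_edges(edge_index, 6, p=0.2))       # feed both views to the encoder
```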
• AAAI, 2018
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
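A small self-contained check of that observation: with the symmetric-normalized propagation matrix S and the corresponding normalized Laplacian L = I - S, one GCN propagation step equals one Laplacian smoothing step, pulling each node's features toward its neighbours'. The toy graph is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                     # adjacency with self-loops
d = A_hat.sum(1)
S = A_hat / np.sqrt(np.outer(d, d))       # D^-1/2 (A + I) D^-1/2
L = np.eye(4) - S                         # symmetric normalized Laplacian

H = rng.standard_normal((4, 3))
assert np.allclose(S @ H, H - L @ H)      # one GCN propagation = one smoothing step
```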
• ICML, 2020
GCNII is proposed, an extension of the vanilla GCN model with two simple yet effective techniques, *initial residual* and *identity mapping*, which effectively relieve the problem of over-smoothing.
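A minimal sketch of one GCNII-style layer under the usual formulation: `alpha` mixes the initial features back in (initial residual) and `beta` shrinks the weight matrix toward the identity (identity mapping). The toy graph, shared weight matrix, and hyperparameter values are illustrative.

```python
import torch
import torch.nn.functional as F

def gcnii_layer(S, H, H0, W, alpha, beta):
    P = (1 - alpha) * (S @ H) + alpha * H0            # initial residual: mix in layer-0 features
    return F.relu((1 - beta) * P + beta * (P @ W))    # identity mapping: shrink W toward I

A = torch.tensor([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0.]])
A_hat = A + torch.eye(4)
d = A_hat.sum(1)
S = A_hat / torch.sqrt(torch.outer(d, d))             # normalized propagation matrix

dim = 8
H0 = torch.randn(4, dim)
W = 0.1 * torch.randn(dim, dim)                       # shared here for brevity; per-layer in practice
H = H0
for layer in range(1, 9):
    beta = 0.5 / layer                                # beta_l ≈ lambda / l decays with depth
    H = gcnii_layer(S, H, H0, W, alpha=0.1, beta=beta)
```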
• ICML, 2019
This paper successively removes the nonlinearities and collapses the weight matrices between consecutive layers, then theoretically analyzes the resulting linear model and shows that it corresponds to a fixed low-pass filter followed by a linear classifier.
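A minimal sketch of that simplification (SGC): a K-layer GCN without nonlinearities collapses to the fixed filter S^K applied to X once, after which only a linear classifier is trained. The toy graph and K are arbitrary.

```python
import numpy as np

def sgc_features(A, X, K=2):
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    S = A_hat / np.sqrt(np.outer(d, d))   # fixed low-pass filter
    for _ in range(K):
        X = S @ X                         # precomputed once; no training involved
    return X                              # then fit e.g. logistic regression on this

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.random.default_rng(0).standard_normal((3, 4))
Z = sgc_features(A, X, K=2)               # input to a single linear classifier
```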
• NeurIPS, 2020
DGN is introduced, which normalizes nodes within the same group independently to increase their smoothness, and separates node distributions among different groups to significantly alleviate the over-smoothing issue.
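A rough sketch of the group-normalization idea, simplified from the paper: nodes are softly assigned to groups, each group's assignment-weighted embeddings are normalized independently, and the results are added back with a small balancing factor `lam`; the assignment matrix would be learned in practice.

```python
import torch
import torch.nn.functional as F

def dgn(H, W_assign, lam=0.01, eps=1e-5):
    S = F.softmax(H @ W_assign, dim=1)              # soft group assignment, (n, G)
    out = H.clone()
    for g in range(S.size(1)):
        Hg = S[:, g:g + 1] * H                      # group-weighted embeddings
        Hg = (Hg - Hg.mean(0)) / (Hg.std(0) + eps)  # normalize within the group
        out = out + lam * Hg                        # smooth inside groups, separate across them
    return out

H = torch.randn(10, 16)
W_assign = torch.randn(16, 4)                       # learnable in practice
print(dgn(H, W_assign).shape)                       # torch.Size([10, 16])
```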
• AAAI, 2020
Two methods to alleviate the over-smoothing issue of GNNs are proposed: MADReg, which adds a MADGap-based regularizer to the training objective, and AdaEdge, which optimizes the graph topology based on the model predictions.
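A minimal sketch of the MAD statistic underlying MADGap and MADReg: the mean cosine distance between node embeddings over a chosen pair mask, with MADGap taken as remote-pair MAD minus neighbour-pair MAD. The masks below are illustrative; the paper derives them from low- and high-order adjacency.

```python
import torch
import torch.nn.functional as F

def mad(H, pair_mask):
    """Mean average cosine distance over the pairs selected by pair_mask."""
    Hn = F.normalize(H, dim=1)
    dist = 1 - Hn @ Hn.t()                 # pairwise cosine distance matrix
    return dist[pair_mask].mean()

H = torch.randn(6, 8)
adj = torch.zeros(6, 6, dtype=torch.bool)
adj[0, 1] = adj[1, 0] = adj[2, 3] = adj[3, 2] = True     # toy neighbour pairs
remote = ~adj & ~torch.eye(6, dtype=torch.bool)          # toy remote pairs
mad_gap = mad(H, remote) - mad(H, adj)     # a larger gap indicates less over-smoothing
```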
• ICLR, 2020
The theory relates the expressive power of GCNs to the topological information of the underlying graphs inherent in the graph spectra, and provides a principled guideline for weight normalization of graph neural networks.
• ICLR, 2018
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
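A minimal single-head sketch of the masked attention in a GAT layer, using a dense adjacency with self-loops and the split form of the attention vector; multi-head attention, dropout, and sparse computation are omitted.

```python
import torch
import torch.nn.functional as F

def gat_layer(H, A_hat, W, a_src, a_dst):
    Z = H @ W                                          # projected features, (n, d')
    e = Z @ a_src + (Z @ a_dst).t()                    # e_ij = a^T [z_i || z_j], split form
    e = F.leaky_relu(e, negative_slope=0.2)
    e = e.masked_fill(A_hat == 0, float("-inf"))       # masked attention: neighbours only
    alpha = torch.softmax(e, dim=1)                    # attention coefficients per node
    return F.elu(alpha @ Z)

n, d, d_out = 4, 6, 8
H = torch.randn(n, d)
A_hat = torch.tensor([[1, 1, 0, 0],
                      [1, 1, 1, 0],
                      [0, 1, 1, 1],
                      [0, 0, 1, 1.]])                  # adjacency with self-loops
out = gat_layer(H, A_hat, torch.randn(d, d_out),
                torch.randn(d_out, 1), torch.randn(d_out, 1))
```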