GraphTTA: Test Time Adaptation on Graph Neural Networks
@article{Chen2022GraphTTATT,
  title   = {GraphTTA: Test Time Adaptation on Graph Neural Networks},
  author  = {Guan-Wun Chen and Jiying Zhang and Xiuchuan Xiao and Y. Li},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2208.09126}
}
Recently, test time adaptation (TTA) has attracted increasing attention for its ability to handle distribution shift in the real world. Unlike for convolutional neural networks (CNNs) on image data, TTA remains under-explored for Graph Neural Networks (GNNs), and there is still a lack of efficient algorithms tailored for graphs with irregular structures. In this paper, we present a novel test time adaptation strategy named Graph Adversarial Pseudo Group Contrast…
2 Citations
Out-Of-Distribution Generalization on Graphs: A Survey
- Computer Science, ArXiv
- 2022
This paper comprehensively surveys OOD generalization on graphs, presents a detailed review of recent advances, and categorizes existing methods into three classes from conceptually different perspectives, i.e., data, model, and learning strategy.
Preventing Over-Smoothing for Hypergraph Neural Networks
- Computer Science, ArXiv
- 2022
A new deep hypergraph convolutional network called Deep-HGCN is developed, which can maintain the heterogeneity of node representation in deep layers and relieve the problem of over-smoothing.
References
SHOWING 1-10 OF 36 REFERENCES
Graph Contrastive Learning with Augmentations
- Computer Science, NeurIPS
- 2020
The results show that, even without tuning augmentation extents or using sophisticated GNN architectures, the GraphCL framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
Adversarial Graph Augmentation to Improve Graph Contrastive Learning
- Computer Science, NeurIPS
- 2021
A novel principle, termed adversarial-GCL (AD-GCL), is proposed, which enables GNNs to avoid capturing redundant information during training by optimizing the adversarial graph augmentation strategies used in GCL.
Strategies for Pre-training Graph Neural Networks
- Computer Science, ICLR
- 2020
A new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs) are proposed that avoid negative transfer and significantly improve generalization across downstream tasks, leading to up to 9.4% absolute improvement in ROC-AUC over non-pre-trained models and state-of-the-art performance on molecular property prediction and protein function prediction.
Learnable Hypergraph Laplacian for Hypergraph Learning
- Computer Science, ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2022
This paper proposes the first learning-based method tailored for constructing an adaptive hypergraph structure, termed HypERgrAph Laplacian aDaptor (HERALD), which serves as a generic plug-and-play module for improving the representational power of HGCNNs.
MEMO: Test Time Robustness via Adaptation and Augmentation
- Computer Science, ArXiv
- 2021
This work proposes a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and adapt (all of) the model parameters by minimizing the entropy of the model’s average, or marginal, output distribution across the augmentations.
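The test-time objective described above — average the model's predictive distributions over several augmentations of one test point, then minimize the entropy of that marginal — can be sketched as follows. This is an illustrative NumPy sketch of the quantity being minimized, not the paper's implementation; the function name `marginal_entropy` and the array shapes are assumptions for the example.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def marginal_entropy(aug_logits):
    """aug_logits: (n_augmentations, n_classes) model outputs for ONE test point,
    one row per augmented copy. Returns the entropy of the marginal (average)
    predictive distribution -- the quantity adapted parameters would minimize."""
    p_bar = softmax(aug_logits).mean(axis=0)          # marginal distribution
    return -np.sum(p_bar * np.log(p_bar + 1e-12))     # Shannon entropy
```

In a full adaptation loop, the gradient of this scalar with respect to the model parameters would drive a few steps of test-time optimization; here only the loss itself is shown.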
Revisiting Batch Normalization For Practical Domain Adaptation
- Computer Science, ICLR
- 2017
This paper proposes a simple yet powerful remedy, called Adaptive Batch Normalization (AdaBN) to increase the generalization ability of a DNN, and demonstrates that the method is complementary with other existing methods and may further improve model performance.
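The core of AdaBN as summarized above is to normalize with statistics computed from the target domain rather than the source domain's running statistics, while reusing the learned affine parameters. A minimal sketch, assuming a 2-D `(batch, features)` activation layout and a hypothetical function name `adabn_forward`:

```python
import numpy as np

def adabn_forward(x_target, gamma, beta, eps=1e-5):
    """Normalize target-domain activations with the target batch's own
    mean/variance (the AdaBN idea), keeping the learned scale (gamma)
    and shift (beta) from training. x_target: (batch, features)."""
    mu = x_target.mean(axis=0)     # per-feature target-domain mean
    var = x_target.var(axis=0)     # per-feature target-domain variance
    return gamma * (x_target - mu) / np.sqrt(var + eps) + beta
```

After this transformation, each feature of the target batch has mean `beta` and standard deviation approximately `gamma`, regardless of how far the target statistics drifted from the source.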
Deep Graph Contrastive Representation Learning
- Computer Science, ArXiv
- 2020
This paper proposes a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level, and generates two graph views by corruption and learns node representations by maximizing the agreement of node representations in these two views.
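The node-level contrastive objective described above — maximize agreement between a node's representations in two corrupted views — is commonly realized with an InfoNCE-style loss. A sketch in that spirit (the function name and the choice of all cross-view nodes as negatives are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def node_contrastive_loss(z1, z2, tau=0.5):
    """z1, z2: (n_nodes, dim) node embeddings from two graph views,
    rows assumed L2-normalized. The same node across views is the
    positive pair; all other cross-view nodes act as negatives."""
    sim = z1 @ z2.T / tau                                # cross-view similarities
    sim = sim - sim.max(axis=1, keepdims=True)           # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Agreement lives on the diagonal; minimizing this maximizes it.
    return -np.mean(np.diag(log_prob))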
Graph Information Bottleneck
- Computer Science, NeurIPS
- 2020
Graph Information Bottleneck (GIB), an information-theoretic principle that optimally balances the expressiveness and robustness of learned representations of graph-structured data, is introduced, and the proposed models are shown to be more robust than state-of-the-art graph defense models.
SoftEdge: Regularizing Graph Classification with Random Soft Edges
- Computer Science, ArXiv
- 2022
It is proved that SoftEdge creates collision-free augmented graphs, and this simple method is shown to obtain superior accuracy over popular node and edge manipulation approaches, along with notable resilience to accuracy degradation as GNN depth grows.
How Powerful are Graph Neural Networks?
- Computer Science, ICLR
- 2019
This work characterizes the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, shows that they cannot learn to distinguish certain simple graph structures, and develops a simple architecture that is provably the most expressive among the class of GNNs.