• Corpus ID: 235430960

An Empirical Study of Graph Contrastive Learning

Yanqiao Zhu, Yichen Xu, Qiang Liu, Shu Wu
Graph Contrastive Learning (GCL) establishes a new paradigm for learning graph representations without human annotations. Although remarkable progress has been made recently, the reasons for GCL's success remain somewhat mysterious. In this work, we first identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques. Then, to understand the interplay of different GCL…
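The design dimensions listed in the abstract can be made concrete with one widely used contrastive objective. Below is a minimal NumPy sketch of an InfoNCE-style loss between two augmented views, where each node's embedding in one view is the positive for its counterpart in the other and all remaining nodes act as negatives; the function name and temperature default are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two views of the same nodes.

    z1, z2: (n, d) arrays of node embeddings from two augmented views.
    Row i of z1 and row i of z2 form the positive pair; all other rows
    of z2 serve as negatives. Names and the temperature default are
    illustrative choices, not values from the paper.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau  # (n, n) temperature-scaled similarity matrix
    # Softmax cross-entropy with the diagonal as the positive class
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - np.diag(sim)))
```

Correctly paired views yield a lower loss than mismatched ones, which is exactly the signal negative mining techniques try to sharpen.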


Debiased Graph Contrastive Learning
DGCL, a novel and effective method that estimates the probability that each negative sample is a true negative, is proposed; it outperforms or matches previous unsupervised state-of-the-art results on several benchmarks and even exceeds the performance of supervised baselines.
Latent Structures Mining with Contrastive Modality Fusion for Multimedia Recommendation
In the proposed MICRO model, a novel modality-aware structure learning module is devised, which learns item-item relationships for each modality, and a novel multimodal contrastive framework is designed to facilitate fine-grained multimodal fusion.
Joint Embedding of Structural and Functional Brain Networks with Graph Neural Networks for Mental Illness Diagnosis
This work develops a novel multiview GNN model which takes advantage of the message passing scheme by propagating messages based on degree statistics and brain region connectivities and employs contrastive learning for multimodal fusion.


Pitfalls of Graph Neural Network Evaluation
This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures are able to outperform the more sophisticated ones if the hyperparameters and the training procedure are tuned fairly for all models.
SimCSE: Simple Contrastive Learning of Sentence Embeddings
This paper describes an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise, and shows that contrastive learning theoretically regularizes pretrained embeddings’ anisotropic space to be more uniform and it better aligns positive pairs when supervised signals are available.
Self-supervised Graph Neural Networks without explicit negative sampling
This study proposes SelfGNN, a novel contrastive self-supervised graph neural network (GNN) that does not rely on explicit contrastive terms; instead, it leverages Batch Normalization, which introduces implicit contrastive terms, without sacrificing performance.
Contrastive Multi-View Representation Learning on Graphs
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the…
Graph Barlow Twins: A self-supervised representation learning framework for graphs
This work proposes Graph Barlow Twins, a framework for self-supervised graph representation learning that utilizes a cross-correlation-based loss function instead of negative samples and does not rely on non-symmetric neural network architectures, in contrast to the state-of-the-art self-supervised graph representation learning method BGRL.
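The cross-correlation objective this summary describes can be sketched in a few lines, assuming batch embeddings of two views: standardize each dimension, build the d-by-d cross-correlation matrix, then push its diagonal toward 1 (invariance) and its off-diagonal toward 0 (redundancy reduction). The function name and the weighting constant `lam` are illustrative, not taken from the paper.

```python
import numpy as np

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Cross-correlation loss used in place of negative samples.

    z1, z2: (n, d) batch embeddings of two augmented views.
    `lam` weights the off-diagonal (redundancy-reduction) term;
    its default here is an illustrative assumption.
    """
    n = z1.shape[0]
    # Standardize each embedding dimension across the batch
    z1 = (z1 - z1.mean(axis=0)) / z1.std(axis=0)
    z2 = (z2 - z2.mean(axis=0)) / z2.std(axis=0)
    c = z1.T @ z2 / n  # (d, d) cross-correlation matrix
    on_diag = np.sum((np.diag(c) - 1.0) ** 2)          # invariance
    off_diag = np.sum(c ** 2) - np.sum(np.diag(c) ** 2)  # decorrelation
    return float(on_diag + lam * off_diag)
```

Because the loss only compares correlation structure between the two views, no negative samples are required, which is the key contrast with InfoNCE-style objectives.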
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework, is designed to capture universal network topological properties across multiple networks and leverages contrastive learning to empower graph neural networks to learn intrinsic and transferable structural representations.
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
Bootstrapped Representation Learning on Graphs
This work presents Bootstrapped Graph Latents (BGRL), a self-supervised graph representation method that outperforms or matches previous unsupervised state-of-the-art results on several established benchmark datasets and enables the effective usage of graph attentional (GAT) encoders, allowing further improvement of the state of the art.
Exploring Simple Siamese Representation Learning
  • Xinlei Chen, Kaiming He
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
Surprising empirical results are reported that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
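The symmetrized negative-cosine objective that SimSiam optimizes can be sketched as below. In the actual method, the projector outputs receive a stop-gradient during backpropagation; plain NumPy has no autograd, so this sketch only illustrates the forward loss, and the function and argument names are illustrative.

```python
import numpy as np

def simsiam_loss(p1, z1, p2, z2):
    """Symmetrized negative cosine similarity, SimSiam-style.

    p1, p2: (n, d) predictor outputs of the two views;
    z1, z2: (n, d) projector outputs. In training, z1 and z2 are
    detached (stop-gradient); here no gradients flow at all, so the
    detachment is implicit.
    """
    def neg_cos(p, z):
        p = p / np.linalg.norm(p, axis=1, keepdims=True)
        z = z / np.linalg.norm(z, axis=1, keepdims=True)
        return -np.mean(np.sum(p * z, axis=1))
    # Average the loss over both view orderings
    return 0.5 * neg_cos(p1, z2) + 0.5 * neg_cos(p2, z1)
```

Note there is no negative-pair term at all: per the summary above, the stop-gradient (not negatives, large batches, or momentum encoders) is what prevents representational collapse.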
Simple Spectral Graph Convolution
This paper uses a modified Markov Diffusion Kernel to derive a variant of GCN called Simple Spectral Graph Convolution (S2GC); spectral analysis shows that the simple spectral graph convolution used in S2GC is a trade-off between low- and high-pass filter bands, which capture the global and local contexts of each node.
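The global/local trade-off in this summary comes from averaging over propagation steps rather than keeping only the last one: small powers of the propagation matrix carry local context, large powers carry global context. Below is a simplified NumPy sketch of that averaging idea, assuming symmetric normalization with self-loops and omitting the paper's self-term weighting for brevity; the function name is illustrative.

```python
import numpy as np

def s2gc_propagate(adj, x, k=4):
    """Simplified S2GC-style propagation: average the feature maps of
    k successive smoothing steps instead of keeping only the k-th.

    adj: (n, n) adjacency matrix; x: (n, d) node features.
    The alpha self-term of the paper is omitted here for brevity.
    """
    a_hat = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))
    t = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
    out = np.zeros_like(x, dtype=float)
    cur = x.astype(float)
    for _ in range(k):
        cur = t @ cur   # one more hop of smoothing
        out += cur      # accumulate every intermediate scale
    return out / k      # mean over the k propagation depths
```

Averaging the intermediate scales keeps the low-pass (global) component of deep propagation without fully discarding the higher-frequency (local) detail of shallow propagation.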