Corpus ID: 235430960

An Empirical Study of Graph Contrastive Learning

@article{Zhu2021AnES,
  title={An Empirical Study of Graph Contrastive Learning},
  author={Yanqiao Zhu and Yichen Xu and Qiang Liu and Shu Wu},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.01116}
}
Graph Contrastive Learning (GCL) establishes a new paradigm for learning graph representations without human annotations. Although remarkable progress has been witnessed recently, the success behind GCL is still left somewhat mysterious. In this work, we first identify several critical design considerations within a general GCL paradigm, including augmentation functions, contrasting modes, contrastive objectives, and negative mining techniques. Then, to understand the interplay of different GCL…
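To make the design dimensions named in the abstract concrete, here is a minimal, self-contained sketch of the general GCL recipe in plain PyTorch: two stochastic augmentations produce two graph views, a shared GNN encoder embeds the nodes of each view, and an InfoNCE-style objective contrasts corresponding nodes across views (one possible contrasting mode). The helper names (drop_edges, GCNEncoder, nt_xent) and the toy dense-adjacency encoder are illustrative assumptions, not the paper's implementation.

# Hedged sketch of a generic GCL pipeline: augment -> encode -> contrast.
import torch
import torch.nn.functional as F

def drop_edges(adj: torch.Tensor, p: float) -> torch.Tensor:
    """Randomly remove edges from a dense adjacency matrix (one common augmentation)."""
    mask = (torch.rand_like(adj) > p).float()
    return adj * mask

class GCNEncoder(torch.nn.Module):
    """Two-layer GCN over a symmetrically normalized dense adjacency (toy stand-in)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim)
        self.w2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv_sqrt = a_hat.sum(1).clamp(min=1).pow(-0.5)
        a_norm = d_inv_sqrt[:, None] * a_hat * d_inv_sqrt[None, :]
        h = F.relu(a_norm @ self.w1(x))
        return a_norm @ self.w2(h)

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE / NT-Xent between node embeddings of the two views (local-local mode)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                               # cross-view similarities
    labels = torch.arange(z1.size(0))                     # positives lie on the diagonal
    return F.cross_entropy(sim, labels)

# Toy usage: a random graph with 100 nodes and 16-dimensional features.
x, adj = torch.randn(100, 16), (torch.rand(100, 100) < 0.05).float()
encoder = GCNEncoder(16, 64, 32)
z1 = encoder(x, drop_edges(adj, 0.2))                     # view 1
z2 = encoder(x, drop_edges(adj, 0.2))                     # view 2
loss = nt_xent(z1, z2)
loss.backward()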

Citations

Debiased Graph Contrastive Learning
TLDR
DGCL is proposed, a novel and effective method that estimates the probability of each negative sample being a true negative; it outperforms or matches previous unsupervised state-of-the-art results on several benchmarks and even exceeds the performance of supervised methods.
Latent Structures Mining with Contrastive Modality Fusion for Multimedia Recommendation
TLDR
In the proposed MICRO model, a novel modality-aware structure learning module is devised, which learns item-item relationships for each modality, and a novel multi-modal contrastive framework is designed to facilitate fine-grained multi-modal fusion.
Joint Embedding of Structural and Functional Brain Networks with Graph Neural Networks for Mental Illness Diagnosis
TLDR
This work develops a novel multiview GNN model that takes advantage of the message-passing scheme by propagating messages based on degree statistics and brain region connectivities, and employs contrastive learning for multimodal fusion.

References

SHOWING 1-10 OF 122 REFERENCES
Pitfalls of Graph Neural Network Evaluation
TLDR
This paper performs a thorough empirical evaluation of four prominent GNN models and suggests that simpler GNN architectures are able to outperform the more sophisticated ones if the hyperparameters and the training procedure are tuned fairly for all models.
SimCSE: Simple Contrastive Learning of Sentence Embeddings
TLDR
This paper describes an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise, and shows that contrastive learning theoretically regularizes pretrained embeddings’ anisotropic space to be more uniform and that it better aligns positive pairs when supervised signals are available.
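The dropout-as-augmentation idea summarized above is easy to illustrate: encoding the same batch twice with dropout active yields two embeddings per sentence that form a positive pair for an InfoNCE loss. The TinyEncoder below is an assumed stand-in for the pretrained language model actually used; treat this as a hedged sketch of the mechanism, not the paper's code.

# Hedged sketch of unsupervised SimCSE-style training with a toy encoder.
import torch
import torch.nn.functional as F

class TinyEncoder(torch.nn.Module):
    def __init__(self, vocab=1000, dim=64):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab, dim)
        self.drop = torch.nn.Dropout(p=0.1)    # dropout is the only "augmentation"
        self.proj = torch.nn.Linear(dim, dim)

    def forward(self, token_ids):
        h = self.drop(self.emb(token_ids)).mean(dim=1)   # mean-pool token embeddings
        return self.proj(h)

def simcse_loss(z1, z2, tau=0.05):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau
    return F.cross_entropy(sim, torch.arange(z1.size(0)))

enc = TinyEncoder()
batch = torch.randint(0, 1000, (8, 12))       # 8 toy "sentences" of 12 tokens
loss = simcse_loss(enc(batch), enc(batch))    # two passes -> two dropout masks -> positive pair
loss.backward()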
Self-supervised Graph Neural Networks without explicit negative sampling
TLDR
This study proposes SelfGNN, a novel contrastive self-supervised graph neural network (GNN) that does not rely on explicit contrastive terms, and instead leverages Batch Normalization, which introduces implicit contrastive terms, without sacrificing performance.
Contrastive Multi-View Representation Learning on Graphs
We introduce a self-supervised approach for learning node and graph level representations by contrasting structural views of graphs. We show that unlike visual representation learning, increasing the…
Graph Barlow Twins: A self-supervised representation learning framework for graphs
TLDR
This work proposes a framework for self-supervised graph representation learning, Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples and does not rely on non-symmetric neural network architectures, in contrast to the state-of-the-art self-supervised graph representation learning method BGRL.
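For reference, a hedged sketch of the cross-correlation objective mentioned above (the Barlow Twins loss, which Graph Barlow Twins applies to node embeddings of two augmented graph views): dimensions are standardized, the cross-correlation matrix between the views is computed, and the loss drives its diagonal toward 1 and its off-diagonal entries toward 0, so no negative samples are needed. The weighting constant lam is an assumed hyperparameter.

# Hedged sketch of the Barlow Twins (cross-correlation) loss on two embedding views.
import torch

def barlow_twins_loss(z1: torch.Tensor, z2: torch.Tensor, lam: float = 5e-3):
    n, d = z1.shape
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)      # standardize each dimension
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / n                            # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()   # invariance term: diagonal -> 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # redundancy reduction
    return on_diag + lam * off_diag

# Usage with embeddings of the same nodes under two augmented views:
loss = barlow_twins_loss(torch.randn(256, 128), torch.randn(256, 128))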
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training
TLDR
Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework, is designed to capture universal network topological properties across multiple networks and to leverage contrastive learning to empower graph neural networks to learn intrinsic and transferable structural representations.
Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning
TLDR
It is shown that the graph convolution of the GCN model is actually a special form of Laplacian smoothing, which is the key reason why GCNs work, but it also brings potential concerns of over-smoothing with many convolutional layers.
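The connection stated above can be made explicit in standard notation; the following is a reconstruction of the usual argument, not a quotation from the paper.

% With \tilde{A} = A + I, \tilde{D} the corresponding degree matrix and
% \tilde{L} = \tilde{D} - \tilde{A}, one step of Laplacian smoothing of the features X is
\[
  Y = \bigl(I - \gamma \tilde{D}^{-1}\tilde{L}\bigr)\,X .
\]
% Choosing \gamma = 1 gives Y = \tilde{D}^{-1}\tilde{A}X, and its symmetrically
% normalized variant is exactly the GCN propagation rule
\[
  H^{(l+1)} = \sigma\!\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}\, H^{(l)} W^{(l)}\right),
\]
% so stacking many layers repeatedly smooths features over neighborhoods,
% which is the source of the over-smoothing concern mentioned above.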
Bootstrapped Representation Learning on Graphs
TLDR
This work presents Bootstrapped Graph Latents (BGRL), a self-supervised graph representation method that outperforms or matches the previous unsupervised state-of-the-art results on several established benchmark datasets and enables the effective usage of graph attentional (GAT) encoders, allowing us to further improve the state of the art.
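A hedged sketch of the bootstrapping scheme behind BGRL (BYOL-style, adapted to graphs): an online encoder plus predictor is trained to match the output of a slowly updated target encoder on a differently augmented view, and the target is maintained by an exponential moving average, so no negative samples are required. The MLP encoder and function names below are placeholders for illustration, not the paper's implementation.

# Hedged sketch of negative-free bootstrapped representation learning.
import copy
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64))
predictor = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU(), torch.nn.Linear(64, 64))
target = copy.deepcopy(encoder)              # frozen copy, updated only by EMA
for p in target.parameters():
    p.requires_grad_(False)

def bootstrap_loss(x1, x2):
    """Each branch predicts the target's embedding of the other view; no negatives."""
    p1, p2 = predictor(encoder(x1)), predictor(encoder(x2))
    with torch.no_grad():
        t1, t2 = target(x1), target(x2)
    cos = lambda a, b: F.cosine_similarity(a, b, dim=-1).mean()
    return 2.0 - cos(p1, t2) - cos(p2, t1)

@torch.no_grad()
def ema_update(tau=0.99):
    """Slowly move the target toward the online encoder."""
    for pt, po in zip(target.parameters(), encoder.parameters()):
        pt.mul_(tau).add_((1.0 - tau) * po)

# One toy step on two "views" of the same batch of node features.
x1, x2 = torch.randn(128, 32), torch.randn(128, 32)
bootstrap_loss(x1, x2).backward()
ema_update()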
Exploring Simple Siamese Representation Learning
  • Xinlei Chen, Kaiming He
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
Surprising empirical results are reported showing that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
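A small sketch of that setup, under the assumption of generic MLP encoder and predictor heads: the two branches share the encoder, a predictor head is applied on one side, and a stop-gradient (detach) on the other side prevents collapse without negatives, large batches, or a momentum encoder.

# Hedged sketch of a simple Siamese (stop-gradient) objective.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Linear(32, 128), torch.nn.ReLU(), torch.nn.Linear(128, 128))
predictor = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 128))

def simsiam_loss(x1, x2):
    z1, z2 = encoder(x1), encoder(x2)
    p1, p2 = predictor(z1), predictor(z2)
    d = lambda p, z: -F.cosine_similarity(p, z.detach(), dim=-1).mean()  # stop-gradient via detach
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)

x1, x2 = torch.randn(64, 32), torch.randn(64, 32)   # two augmented views of a batch
simsiam_loss(x1, x2).backward()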
Simple Spectral Graph Convolution
TLDR
This paper uses a modified Markov Diffusion Kernel to derive a variant of GCN called Simple Spectral Graph Convolution (S2GC), and spectral analysis shows that the simple spectral graph convolution used in S2GC is a trade-off between low- and high-pass filter bands, which capture the global and local contexts of each node.
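One common way to write the S2GC propagation the summary refers to is given below; the exact constants and the mixing coefficient α should be treated as an assumption rather than a quotation from the paper.

% With \tilde{T} = \tilde{D}^{-1/2}(A + I)\tilde{D}^{-1/2}, S^2GC averages K
% propagation steps and mixes the raw features back in:
\[
  \hat{X} \;=\; \frac{1}{K}\sum_{k=1}^{K}\Bigl((1-\alpha)\,\tilde{T}^{k}X \;+\; \alpha X\Bigr),
\]
% so low-order terms retain local (high-frequency) information while high-order
% terms aggregate increasingly global (low-frequency) context, giving the
% low-/high-pass trade-off described above.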