Corpus ID: 236134328

ReSSL: Relational Self-Supervised Learning with Weak Augmentation

@article{Zheng2021ReSSLRS,
  title={ReSSL: Relational Self-Supervised Learning with Weak Augmentation},
  author={Mingkai Zheng and Shan You and Fei Wang and Chen Qian and Changshui Zhang and Xiaogang Wang and Chang Xu},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.09282}
}
  • Mingkai Zheng, Shan You, +4 authors Chang Xu
  • Published 20 July 2021
  • Computer Science
  • ArXiv
Self-supervised learning (SSL), including the mainstream contrastive learning, has achieved great success in learning visual representations without data annotations. However, most methods focus mainly on instance-level information (i.e., the different augmented views of the same instance should have the same features or cluster into the same class), while the relationships between different instances receive little attention. In this paper, we introduce a novel SSL paradigm, which…
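The relational idea sketched in the abstract can be illustrated with a small loss function: align the similarity distribution of a strongly augmented view with that of a weakly augmented view, both measured against a bank of other instances' features. This is a minimal sketch, not the paper's implementation; the function name, temperatures, and queue shape are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relational_loss(z_weak, z_strong, queue, t_teacher=0.04, t_student=0.1):
    """Cross-entropy between the similarity distributions of a weakly and a
    strongly augmented view of the same images, measured against a queue of
    other instances' features. All inputs are L2-normalized row vectors.
    A sharper teacher temperature (t_teacher < t_student) makes the weak
    view's relation the target the strong view must match."""
    p = softmax(z_weak @ queue.T / t_teacher)            # target relation
    logq = np.log(softmax(z_strong @ queue.T / t_student))
    return float(-(p * logq).sum(axis=1).mean())
```

The loss is minimized when the strong view relates to the other instances in the same way the weak view does, rather than when the two views are merely identical, which is the relational (instance-to-instance) signal the abstract contrasts with purely instance-level objectives.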
Citations

Weakly Supervised Contrastive Learning
  • Mingkai Zheng, Fei Wang, +4 authors Chang Xu
  • Computer Science
  • ArXiv
  • 2021
TLDR
This work introduces a weakly supervised contrastive learning framework (WCL) based on two projection heads: one performs the regular instance discrimination task, while the other uses a graph-based method to explore similar samples and generate a weak label that pulls similar images closer.
Learning with Privileged Tasks
  • Yuru Song, Zan Lou, +5 authors Xiaogang Wang
Multi-objective multi-task learning aims to boost the performance of all tasks by leveraging their correlations and conflicts appropriately. Nevertheless, in real practice, users may have preference…
Solo-learn: A Library of Self-supervised Methods for Visual Representation Learning
TLDR
The goal is to provide an easy-to-use library comprising a large number of self-supervised learning (SSL) methods that can be easily extended and fine-tuned by the community.

References

SHOWING 1-10 OF 52 REFERENCES
Local Aggregation for Unsupervised Learning of Visual Embeddings
TLDR
This work describes a method that trains an embedding function to maximize a metric of local aggregation, causing similar data instances to move together in the embedding space while allowing dissimilar instances to separate.
Unsupervised Feature Learning via Non-parametric Instance Discrimination
TLDR
This work formulates this intuition as a non-parametric classification problem at the instance level, and uses noise-contrastive estimation to tackle the computational challenges imposed by the large number of instance classes.
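The instance-discrimination idea summarized above treats every image as its own class, with a memory bank of stored features standing in for classifier weights. A minimal sketch of that non-parametric softmax (the function name and temperature value are illustrative assumptions, not the paper's code):

```python
import numpy as np

def instance_probs(v, memory_bank, tau=0.07):
    """Non-parametric softmax over stored instance features: each image is
    its own class, and the memory bank of L2-normalized features plays the
    role of the classifier weight matrix."""
    logits = memory_bank @ v / tau   # similarity of v to every stored instance
    logits = logits - logits.max()   # numerical stability
    p = np.exp(logits)
    return p / p.sum()
```

For a feature that matches its own memory-bank entry, the distribution peaks on that instance; noise-contrastive estimation in the paper approximates this full softmax, which is otherwise expensive with millions of instance classes.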
Large Scale Adversarial Representation Learning
TLDR
This work builds upon the state-of-the-art BigGAN model, extending it to representation learning by adding an encoder and modifying the discriminator, and demonstrates that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation.
Exploring Simple Siamese Representation Learning
  • Xinlei Chen, Kaiming He
  • Computer Science
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2021
TLDR
Surprising empirical results are reported that simple Siamese networks can learn meaningful representations even using none of the following: (i) negative sample pairs, (ii) large batches, (iii) momentum encoders.
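The Siamese objective described in this summary is a symmetrized negative cosine similarity between a predictor output from one view and the (stop-gradient) encoding of the other. A minimal sketch under those assumptions; in a real training loop the second argument would be detached from the autograd graph, which is the method's key ingredient:

```python
import numpy as np

def neg_cosine(p, z):
    # One loss term: negative cosine similarity between predictor output p
    # and the other view's encoding z. During training, z is detached
    # (stop-gradient); NumPy has no autograd, so this is only the forward pass.
    p = p / np.linalg.norm(p, axis=-1, keepdims=True)
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    return float(-(p * z).sum(axis=-1).mean())

def simsiam_loss(p1, p2, z1, z2):
    # Symmetrized over the two augmented views.
    return 0.5 * neg_cosine(p1, z2) + 0.5 * neg_cosine(p2, z1)
```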
Self-labelling via simultaneous clustering and representation learning
TLDR
The proposed novel and principled learning formulation is able to self-label visual data so as to train highly competitive image representations without manual labels, and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
NormFace: L2 Hypersphere Embedding for Face Verification
TLDR
This work identifies and studies four issues related to normalization through mathematical analysis, which yields understanding and helps with parameter settings, and proposes two strategies for training using normalized features.
Unsupervised Visual Representation Learning by Context Prediction
TLDR
It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.
A Theoretical Analysis of Contrastive Unsupervised Representation Learning
TLDR
This framework allows provable guarantees on the performance of the learned representations on the average classification task comprised of a subset of the same set of latent classes, and shows that learned representations can reduce (labeled) sample complexity on downstream tasks.
Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles
TLDR
A novel unsupervised learning approach builds features suitable for object detection and classification; to facilitate the transfer of features to other tasks, the context-free network (CFN), a siamese-ennead convolutional neural network, is introduced.
Improved Baselines with Momentum Contrastive Learning
TLDR
With simple modifications to MoCo, this note establishes stronger baselines that outperform SimCLR and do not require large training batches, and hopes this will make state-of-the-art unsupervised learning research more accessible.
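The momentum contrast approach referenced here keeps a slowly moving "key" encoder updated as an exponential moving average of the "query" encoder. A minimal sketch of that update rule (parameter lists of NumPy arrays stand in for network weights; names are illustrative):

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """EMA update of the key encoder's parameters from the query encoder's,
    as used in MoCo: key <- m * key + (1 - m) * query. A large m makes the
    key encoder evolve slowly, keeping the queue of keys consistent."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]
```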