Equivariant Contrastive Learning
@article{Dangovski2021EquivariantCL, title={Equivariant Contrastive Learning}, author={Rumen Dangovski and Li Jing and Charlotte Loh and Seung-Jun Han and Akash Srivastava and Brian Cheung and Pulkit Agrawal and Marin Solja{\v c}i{\'c}}, journal={ArXiv}, year={2021}, volume={abs/2111.00899} }
We extend popular self-supervised learning methods to a more general framework, Equivariant Self-Supervised Learning (E-SSL), in which a simple additional pre-training objective encourages equivariance by predicting the transformations applied to the input. We demonstrate its effectiveness empirically on several popular benchmarks, e.g. improving the linear probe accuracy of SimCLR on ImageNet.
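As a rough illustration of the idea (a minimal sketch, not the authors' implementation; the PyTorch encoder, the feature dimension, and the choice of four-fold rotation as the predicted transformation are assumptions made for this example), the additional objective amounts to a small classifier that recovers which transformation was applied:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RotationPredictionObjective(nn.Module):
    """Sketch of an E-SSL-style auxiliary objective: predict which of
    four rotations (0/90/180/270 degrees) was applied to the input."""

    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder                  # any backbone: image -> feature vector
        self.rot_head = nn.Linear(feat_dim, 4)  # 4-way rotation classifier

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Rotate each image by a random multiple of 90 degrees.
        k = torch.randint(0, 4, (images.size(0),), device=images.device)
        rotated = torch.stack([torch.rot90(img, int(ki), dims=(-2, -1))
                               for img, ki in zip(images, k)])
        logits = self.rot_head(self.encoder(rotated))
        # Encouraging equivariance: the representation must retain enough
        # information to recover the transformation that was applied.
        return F.cross_entropy(logits, k)
```

In pre-training, a loss of this kind would simply be added, with some weight, to the usual invariance-promoting (e.g. contrastive) loss.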
17 Citations
EquiMod: An Equivariance Module to Improve Self-Supervised Learning
- Computer Science, ArXiv
- 2022
EquiMod is introduced, a generic equivariance module that structures the learned latent space: the module learns to predict the displacement in the embedding space caused by the augmentations, and applying it to state-of-the-art invariance models such as SimCLR and BYOL is shown to improve performance on the CIFAR10 and ImageNet datasets.
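A rough sketch of this kind of equivariance module (purely illustrative, not the EquiMod implementation; the module architecture, the `aug_params` encoding of the augmentation, and all names are assumptions):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisplacementModule(nn.Module):
    """Illustrative module that predicts how an augmentation displaces an
    embedding, so that module(z, aug_params) approximates the embedding
    of the augmented view."""

    def __init__(self, emb_dim: int, aug_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(emb_dim + aug_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, emb_dim),
        )

    def forward(self, z: torch.Tensor, aug_params: torch.Tensor) -> torch.Tensor:
        # Predict a displacement and apply it to the original embedding.
        return z + self.net(torch.cat([z, aug_params], dim=-1))


def equivariance_loss(module: DisplacementModule, z: torch.Tensor,
                      z_aug: torch.Tensor, aug_params: torch.Tensor) -> torch.Tensor:
    # Match the predicted displaced embedding to the embedding of the
    # actually augmented view (cosine distance is one possible choice).
    z_pred = module(z, aug_params)
    return 1.0 - F.cosine_similarity(z_pred, z_aug.detach(), dim=-1).mean()
```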
Equivariant Self-Supervision for Musical Tempo Estimation
- Computer Science, ArXiv
- 2022
This work derives a simple loss function that prevents the network from collapsing to a trivial solution during training, without requiring any form of regularisation or negative sampling, and shows that it is possible to learn meaningful representations for tempo estimation by relying solely on equivariant self-supervision.
Invariance-adapted decomposition and Lasso-type contrastive learning
- Computer Science, ArXiv
- 2022
The notion of an invariance-adapted latent space is introduced, which decomposes the data space into the intersections of the invariant spaces of each augmentation and their complements, describing a structure analogous to the frequencies in the harmonic analysis of a group.
Improving Fine-tuning of Self-supervised Models with Contrastive Initialization
- Computer Science, Neural Networks: The Official Journal of the International Neural Network Society
- 2022
This work proposes a Contrastive Initialization (COIN) method, which exploits a supervised contrastive loss to increase the inter-class discrepancy and intra-class compactness of features on the target dataset, so that the model can be easily trained to discriminate instances of different classes during the final fine-tuning stage.
Differentiable Data Augmentation for Contrastive Sentence Representation Learning
- Computer Science, ArXiv
- 2022
This work proposes a method that produces hard positives from the original training examples and is more label-efficient than state-of-the-art contrastive learning methods.
Learning Equivariant Segmentation with Instance-Unique Querying
- Computer Science, ArXiv
- 2022
A new training framework is proposed that boosts query-based models through discriminative query embedding learning and encourages both image (instance) representations and queries to be equivariant against geometric transformations, leading to more robust instance-query matching.
Uncertainty-Guided Pixel Contrastive Learning for Semi-Supervised Medical Image Segmentation
- Computer Science, IJCAI
- 2022
This work proposes a novel uncertainty-guided pixel contrastive learning method for semi-supervised medical image segmentation, arguing that effective global representations learned by an image encoder should be equivariant to different geometric transformations.
Understanding Masked Image Modeling via Learning Occlusion Invariant Feature
- Computer Science, ArXiv
- 2022
A new viewpoint is proposed: MIM implicitly learns occlusion-invariant features, which is analogous to other siamese methods that learn other forms of invariance; this perspective could inspire researchers to develop more powerful self-supervised methods in the computer vision community.
DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings
- Computer Science, NAACL
- 2022
This work proposes DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, and shows that DiffCSE is an instance of equivariant contrastive learning, which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations.
Towards Self-Supervised Gaze Estimation
- Computer Science, ArXiv
- 2022
This work proposes a novel approach (SwAT) to learn representations that are equivariant to geometric transformations, i.e., rotations and horizontal flips, for gaze estimation, and shows that SwAT learns more informative representations than other pretraining schemes for this task.
References
Showing 1-10 of 53 references
Equivariance and Invariance for Robust Unsupervised and Semi-Supervised Learning
- Computer Science
- 2020
Experiments show the proposed methods outperform many state-of-the-art approaches on unsupervised and semi-supervised learning, demonstrating the importance of equivariance and invariance rules for robust feature representation learning.
Group Equivariant Convolutional Networks
- Computer Science, ICML
- 2016
Group equivariant Convolutional Neural Networks (G-CNNs) are introduced, a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries and achieves state-of-the-art results on CIFAR10 and rotated MNIST.
What Should Not Be Contrastive in Contrastive Learning
- Computer Science, ICLR
- 2021
This work introduces a contrastive learning framework which does not require prior knowledge of specific, task-dependent invariances, and learns to capture varying and invariant factors for visual representations by constructing separate embedding spaces, each of which is invariant to all but one augmentation.
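One way to picture this (a simplified sketch under assumed names; the actual framework's architecture and training procedure differ): a shared backbone feeds several projection heads, each defining an embedding space that is trained to be invariant to every augmentation except the one it is dedicated to:

```python
import torch
import torch.nn as nn

class MultiSpaceProjector(nn.Module):
    """Illustrative multi-embedding setup: one projection head per
    augmentation family, each defining its own embedding space."""

    def __init__(self, backbone: nn.Module, feat_dim: int, emb_dim: int,
                 aug_names=("crop", "color", "rotation")):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(feat_dim, emb_dim),
                                nn.ReLU(),
                                nn.Linear(emb_dim, emb_dim))
            for name in aug_names
        })

    def forward(self, x: torch.Tensor) -> dict:
        h = self.backbone(x)
        # The embedding from heads[name] is trained to be invariant to all
        # augmentations except `name`, to which it stays sensitive.
        return {name: head(h) for name, head in self.heads.items()}
```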
Towards Domain-Agnostic Contrastive Learning
- Computer Science, ICML
- 2021
This work proposes a novel domain-agnostic approach to contrastive learning, named DACL, that is applicable to domains where invariances, and thus data augmentation techniques, are not readily available, and that combines well with domain-specific methods such as SimCLR to improve self-supervised visual representation learning.
Improving Transformation Invariance in Contrastive Representation Learning
- Computer Science, ICLR
- 2021
A training objective for contrastive learning is introduced that uses a novel regularizer to control how the representation changes under transformation, and a change to how test-time representations are generated is proposed: a feature averaging approach that combines encodings from multiple transformations of the original input.
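The test-time feature averaging part can be sketched in a few lines (illustrative only; `encoder`, `sample_transform`, and the number of samples are assumptions made for this example):

```python
import torch

def averaged_representation(encoder, image, sample_transform, n_samples: int = 8):
    """Sketch of test-time feature averaging: encode several randomly
    transformed versions of one input and average the resulting features."""
    feats = []
    with torch.no_grad():
        for _ in range(n_samples):
            view = sample_transform(image)            # draw a random transformation
            feats.append(encoder(view.unsqueeze(0)))  # encode as a batch of one
    return torch.stack(feats, dim=0).mean(dim=0)
```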
A Simple Framework for Contrastive Learning of Visual Representations
- Computer Science, ICML
- 2020
It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
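For reference, the contrastive (NT-Xent) loss at the core of that framework can be written compactly as follows (a simplified sketch of the standard formulation, not the official implementation; `z1` and `z2` are assumed to be the projected embeddings of two augmented views of the same batch):

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Simplified NT-Xent loss over two views of an N-sample batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
    # The positive for sample i is its other view: i <-> i + N (mod 2N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```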
i-Mix: A Domain-Agnostic Strategy for Contrastive Representation Learning
- Computer Science, ICLR
- 2021
i-Mix is proposed, a simple yet effective domain-agnostic regularization strategy for improving contrastive representation learning that consistently improves the quality of learned representations across domains, including image, speech, and tabular data.
Momentum Contrast for Unsupervised Visual Representation Learning
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a…
What makes for good views for contrastive learning
- Computer Science, NeurIPS
- 2020
This paper uses empirical analysis to better understand the importance of view selection, argues that the mutual information (MI) between views should be reduced while keeping task-relevant information intact, and devises unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI.
Emerging Properties in Self-Supervised Vision Transformers
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
This paper questions whether self-supervised learning provides new properties to the Vision Transformer (ViT) that stand out compared to convolutional networks (convnets), and implements DINO, a form of self-distillation with no labels, highlighting the synergy between DINO and ViTs.