Directional Self-supervised Learning for Heavy Image Augmentations
@article{Bai2021DirectionalSL,
  title   = {Directional Self-supervised Learning for Heavy Image Augmentations},
  author  = {Yalong Bai and Yifan Yang and Wei Zhang and Tao Mei},
  journal = {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
  pages   = {16671-16680}
}
Despite the large family of image augmentations, only a few cherry-picked, robust augmentation policies are beneficial to self-supervised image representation learning. In this paper, we propose a directional self-supervised learning paradigm (DSSL) that is compatible with significantly more augmentations. Specifically, we apply heavy augmentation policies on top of views lightly augmented by standard augmentations to generate a harder view (HV). HV usually has a higher deviation from the original image…
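The truncated abstract names the mechanism but not the objective. Below is a minimal sketch of the directional idea, assuming a BYOL-style setup with an encoder and predictor; every identifier (`encoder`, `predictor`, `light_aug`, `heavy_aug`) is an illustrative stand-in, not the authors' code. The key asymmetry: the harder view is pulled only toward its own source view, with gradients stopped on the target branch, so heavy augmentations never corrupt the standard views' training signal.

```python
# Illustrative sketch of the directional loss in DSSL (not the authors' code).
import torch
import torch.nn.functional as F

def cosine_loss(p, z):
    # Negative cosine similarity, with gradient stopped on the target z.
    return -F.cosine_similarity(p, z.detach(), dim=-1).mean()

def dssl_step(encoder, predictor, light_aug, heavy_aug, images):
    v1, v2 = light_aug(images), light_aug(images)   # standard (light) views
    hv = heavy_aug(v1)                              # harder view derived from v1
    z1, z2, zh = encoder(v1), encoder(v2), encoder(hv)
    p1, p2, ph = predictor(z1), predictor(z2), predictor(zh)
    # Symmetric loss between the two standard views...
    loss_std = 0.5 * (cosine_loss(p1, z2) + cosine_loss(p2, z1))
    # ...plus a one-way (directional) term: HV chases its source view only.
    loss_dir = cosine_loss(ph, z1)
    return loss_std + loss_dir
```

The one-way term is what makes the paradigm "directional": the harder view learns from the lighter one, never the reverse.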
2 Citations
Hierarchical Consistent Contrastive Learning for Skeleton-Based Action Recognition with Growing Augmentations
- Computer Science · ArXiv
- 2022
This paper designs a gradually growing augmentation policy to generate multiple ordered positive pairs and proposes a general hierarchical consistent contrastive learning framework (HiCLR) for skeleton-based action recognition, which notably outperforms state-of-the-art methods on three large-scale datasets.
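A loose sketch of the growing-augmentation idea as summarized above; the cumulative staging and the stop-gradient alignment are assumptions about the mechanism, not HiCLR's exact formulation:

```python
# Sketch: augmentation stages are applied cumulatively (weak -> strong), so
# each view is strictly harder than the previous one; each harder view is
# aligned with its weaker neighbour, which is treated as a fixed target.
import torch
import torch.nn.functional as F

def hierarchical_consistency_loss(encoder, aug_stages, x):
    views, v = [], x
    for aug in aug_stages:                  # nested, increasingly heavy views
        v = aug(v)
        views.append(v)
    feats = [F.normalize(encoder(v), dim=-1) for v in views]
    loss = 0.0
    for weak, strong in zip(feats[:-1], feats[1:]):
        # one-way alignment: the stronger view chases the weaker (detached) one
        loss = loss + (1 - (strong * weak.detach()).sum(-1).mean())
    return loss / (len(feats) - 1)
```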
Augmentation Pathways Network for Visual Recognition
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2023
A novel network design, termed Augmentation Pathways (AP), is introduced to systematically stabilize training on a much wider range of augmentation policies; it is also extended to high-order versions for high-order scenarios, demonstrating its robustness and flexibility in practical usage.
References
Showing 1-10 of 32 references
Contrastive Learning with Stronger Augmentations
- Computer Science · IEEE Transactions on Pattern Analysis and Machine Intelligence
- 2022
A general framework called Contrastive Learning with Stronger Augmentations (CLSA) is proposed to complement current contrastive learning approaches: the distribution divergence between weakly and strongly augmented images over a representation bank is used to supervise the retrieval of strongly augmented queries from a pool of instances.
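The distributional divergence term can be sketched under simple assumptions: a fixed bank of negative features and illustrative temperature values. The weak view's similarity distribution over the bank serves as a soft target for the strong view's distribution:

```python
# Sketch of a CLSA-style distributional divergence loss (assumptions noted above).
import torch
import torch.nn.functional as F

def clsa_ddm_loss(z_weak, z_strong, bank, t_weak=0.04, t_strong=0.1):
    z_weak = F.normalize(z_weak, dim=-1)
    z_strong = F.normalize(z_strong, dim=-1)
    bank = F.normalize(bank, dim=-1)                    # N x D feature bank
    # Weak view's similarity distribution acts as a (detached) soft target.
    p_weak = F.softmax(z_weak @ bank.T / t_weak, dim=-1).detach()
    logp_strong = F.log_softmax(z_strong @ bank.T / t_strong, dim=-1)
    return -(p_weak * logp_strong).sum(-1).mean()       # cross-entropy
```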
UniformAugment: A Search-free Probabilistic Data Augmentation Approach
- Computer Science · ArXiv
- 2020
This paper shows that, under the assumption that the augmentation space is approximately distribution invariant, a uniform sampling over the continuous space of augmentation transformations is sufficient to train highly effective models and proposes UniformAugment, an automated data augmentation approach that completely avoids a search phase.
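Because there is no search phase, the whole method fits in a few lines. This sketch uses a made-up operation list and magnitude range to show the idea; the paper's actual augmentation space is larger:

```python
# Sketch of UniformAugment-style sampling: each slot uniformly samples an
# operation, a magnitude, and a coin flip deciding whether to apply it.
import random
from torchvision.transforms import functional as TF

OPS = [
    lambda img, m: TF.adjust_brightness(img, 1 + m),   # m in [-0.5, 0.5]
    lambda img, m: TF.adjust_contrast(img, 1 + m),
    lambda img, m: TF.rotate(img, 30 * m),
]

def uniform_augment(img, num_ops=2):
    for _ in range(num_ops):
        if random.random() < 0.5:                  # apply-or-skip coin flip
            op = random.choice(OPS)
            magnitude = random.uniform(-0.5, 0.5)  # uniform magnitude
            img = op(img, magnitude)
    return img
```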
What Makes for Good Views for Contrastive Learning?
- Computer Science · NeurIPS
- 2020
This paper uses empirical analysis to better understand the importance of view selection, arguing that the mutual information (MI) between views should be reduced while keeping task-relevant information intact, and devises unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI.
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
- Computer Science · NeurIPS
- 2020
This paper proposes an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed, using a swapped prediction mechanism in which the cluster assignment of one view is predicted from the representation of another view.
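A compact sketch of the swapped prediction step; the Sinkhorn normalization and hyperparameters are simplified relative to the released code:

```python
# Sketch of SwAV's swapped prediction with simplified Sinkhorn assignments.
import torch
import torch.nn.functional as F

@torch.no_grad()
def sinkhorn(scores, n_iters=3, eps=0.05):
    # Turn prototype scores into soft, roughly equipartitioned assignments.
    q = torch.exp(scores / eps).T            # K x B
    q /= q.sum()
    K, B = q.shape
    for _ in range(n_iters):
        q /= q.sum(dim=1, keepdim=True); q /= K   # normalize rows
        q /= q.sum(dim=0, keepdim=True); q /= B   # normalize columns
    return (q * B).T                          # B x K

def swav_loss(z1, z2, prototypes, temp=0.1):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    s1, s2 = z1 @ prototypes.T, z2 @ prototypes.T   # prototype scores
    q1, q2 = sinkhorn(s1), sinkhorn(s2)
    # Swapped prediction: view 1 predicts view 2's assignment and vice versa.
    return -0.5 * ((q2 * F.log_softmax(s1 / temp, -1)).sum(-1)
                 + (q1 * F.log_softmax(s2 / temp, -1)).sum(-1)).mean()
```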
AutoAugment: Learning Augmentation Strategies From Data
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This paper describes a simple procedure called AutoAugment that automatically searches for improved data augmentation policies, achieving state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).
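The search itself (done with reinforcement learning in the paper) is expensive, but applying a found policy is trivial. The sub-policies below are invented for illustration:

```python
# Sketch of applying an AutoAugment-style policy: each sub-policy is a pair
# of (operation, probability, magnitude) tuples; one sub-policy is sampled
# per image. The operations and values here are hypothetical.
import random
from torchvision.transforms import functional as TF

POLICY = [
    [("rotate", 0.7, 15.0), ("brightness", 0.3, 1.4)],
    [("contrast", 0.8, 1.6), ("rotate", 0.2, -10.0)],
]

def apply_op(img, name, mag):
    if name == "rotate":
        return TF.rotate(img, mag)
    if name == "brightness":
        return TF.adjust_brightness(img, mag)
    if name == "contrast":
        return TF.adjust_contrast(img, mag)
    return img

def autoaugment(img):
    for name, prob, mag in random.choice(POLICY):
        if random.random() < prob:
            img = apply_op(img, name, mag)
    return img
```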
A Simple Framework for Contrastive Learning of Visual Representations
- Computer Science · ICML
- 2020
It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps than supervised learning does.
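For reference, SimCLR's NT-Xent objective can be sketched in a few lines; `z1` and `z2` are assumed to be projection-head outputs for two augmented views of the same batch:

```python
# Minimal NT-Xent (normalized temperature-scaled cross entropy) sketch.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temp=0.5):
    B = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=-1)        # 2B x D
    sim = z @ z.T / temp                                       # all pair sims
    mask = torch.eye(2 * B, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))                 # drop self-pairs
    # The positive for sample i is its other view at index (i + B) mod 2B.
    targets = torch.arange(2 * B, device=z.device).roll(B)
    return F.cross_entropy(sim, targets)
```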
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
- Computer Science · 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2019
Patches are cut and pasted among training images, with the ground-truth labels mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification, as well as on the ImageNet weakly-supervised localization task.
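CutMix is simple enough to sketch directly; the box sampling follows the paper's scheme, while the variable names and in-place batch mixing are choices made here:

```python
# Sketch of CutMix: paste a box from a shuffled batch, mix labels by area.
import torch

def cutmix(images, labels, alpha=1.0):
    B, _, H, W = images.shape
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(B)
    # Sample a box whose area is (1 - lam) of the image.
    rh, rw = int(H * (1 - lam) ** 0.5), int(W * (1 - lam) ** 0.5)
    cy, cx = torch.randint(H, (1,)).item(), torch.randint(W, (1,)).item()
    y1, y2 = max(cy - rh // 2, 0), min(cy + rh // 2, H)
    x1, x2 = max(cx - rw // 2, 0), min(cx + rw // 2, W)
    images[:, :, y1:y2, x1:x2] = images[perm, :, y1:y2, x1:x2]
    # Correct lambda by the actual (clipped) box area.
    lam = 1 - (y2 - y1) * (x2 - x1) / (H * W)
    return images, labels, labels[perm], lam  # mix the two losses with lam
```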
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
- Computer Science · NeurIPS
- 2020
This work introduces Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning that performs on par with or better than the current state of the art on both transfer and semi-supervised benchmarks.
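A sketch of BYOL's two core pieces, the symmetric prediction loss and the momentum update of the target network; the module wiring is elided and `tau` is the paper's default value:

```python
# Sketch of BYOL: online network predicts the target network's projection
# of the other view; the target is updated only by exponential moving average.
import torch
import torch.nn.functional as F

def byol_loss(online, predictor, target, v1, v2):
    p1, p2 = predictor(online(v1)), predictor(online(v2))
    with torch.no_grad():                # never backprop through the target
        t1, t2 = target(v1), target(v2)
    return ((2 - 2 * F.cosine_similarity(p1, t2, dim=-1)).mean()
          + (2 - 2 * F.cosine_similarity(p2, t1, dim=-1)).mean())

@torch.no_grad()
def update_target(online, target, tau=0.996):
    # EMA of online weights into the target network, once per step.
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```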
On Mutual Information Maximization for Representation Learning
- Computer Science · ICLR
- 2020
This paper argues, and provides empirical evidence, that the success of these methods cannot be attributed to the properties of MI alone, and that they strongly depend on the inductive bias in both the choice of feature extractor architectures and the parametrization of the employed MI estimators.
Joint Contrastive Learning with Infinite Possibilities
- Computer Science · NeurIPS
- 2020
It is demonstrated that the proposed formulation of Joint Contrastive Learning harbors an innate agency that strongly favors similarity within each instance-specific class, and therefore remains advantageous when searching for discriminative features among distinct instances.