Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples

@article{Assran2021SemiSupervisedLO,
  title={Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting View Assignments with Support Samples},
  author={Mahmoud Assran and Mathilde Caron and Ishan Misra and Piotr Bojanowski and Armand Joulin and Nicolas Ballas and Michael G. Rabbat},
  journal={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={8423-8432}
}
This paper proposes a novel method of learning by predicting view assignments with support samples (PAWS). The method trains a model to minimize a consistency loss, which ensures that different views of the same unlabeled instance are assigned similar pseudo-labels. The pseudo-labels are generated non-parametrically, by comparing the representations of the image views to those of a set of randomly sampled labeled images. The distance between the view representations and labeled representations… 
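As a rough illustration of the non-parametric step described above, the sketch below (PyTorch; function and parameter names such as soft_nn and tau are illustrative, not the authors' reference code) pseudo-labels each view by a similarity-weighted vote over the labeled support samples and enforces cross-view consistency. The paper's full objective also includes a mean-entropy maximization regularizer, omitted here, and the stop-gradient on the target is a common simplification rather than a confirmed detail.

import torch
import torch.nn.functional as F

def soft_nn(view_emb, support_emb, support_labels, tau=0.1):
    # Soft nearest-neighbor classifier: similarity-weighted vote over the
    # labeled support set (support_labels are one-hot, shape [S, C]).
    view_emb = F.normalize(view_emb, dim=1)
    support_emb = F.normalize(support_emb, dim=1)
    weights = (view_emb @ support_emb.T / tau).softmax(dim=1)
    return weights @ support_labels  # soft class distribution per view

def paws_consistency(z1, z2, support_emb, support_labels, T=0.25):
    # Each view is trained to predict the sharpened pseudo-label of the
    # other view (sharpening exponent 1/T).
    p1 = soft_nn(z1, support_emb, support_labels)
    p2 = soft_nn(z2, support_emb, support_labels)
    sharpen = lambda p: (p ** (1 / T)) / (p ** (1 / T)).sum(dim=1, keepdim=True)
    loss = -(sharpen(p2).detach() * p1.clamp_min(1e-8).log()).sum(dim=1).mean() \
           -(sharpen(p1).detach() * p2.clamp_min(1e-8).log()).sum(dim=1).mean()
    return 0.5 * loss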
Towards Discovering the Effectiveness of Moderately Confident Samples for Semi-Supervised Learning
TLDR
This work proposes a novel Taylor-expansion-inspired filtration framework that admits moderately confident samples whose features or gradients are similar to those averaged over the labeled and highly confident unlabeled data, and can thereby produce stable network updates that carry new information, leading to better generalization.
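A speculative reading of this filtration in code (the cosine-similarity criterion, the threshold, and the helper name admit_moderate are all assumptions; the paper derives its actual criterion from a Taylor expansion of the loss):

import torch
import torch.nn.functional as F

def admit_moderate(feats_unlabeled, feats_trusted, sim_thresh=0.8):
    # Admit moderately confident samples whose features point in roughly
    # the same direction as the mean feature of the trusted pool
    # (labeled + highly confident unlabeled data).
    anchor = F.normalize(feats_trusted.mean(dim=0), dim=0)
    sims = F.normalize(feats_unlabeled, dim=1) @ anchor
    return sims >= sim_thresh  # boolean admission mask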
CS231n - Classifying dogs using PAWS
TLDR
Evaluates the performance of PAWS on varying ratios of labeled images after pre-training on ImageNet, showing that PAWS should be explored further for fine-grained classification problems with sparsely labeled datasets.
GraFN: Semi-Supervised Node Classification on Graph with Few Labels via Non-Parametric Distribution Assignment
TLDR
A novel semi-supervised method for graphs, GraFN, that leverages a few labeled nodes to ensure that nodes belonging to the same class are grouped together, thereby achieving the best of both worlds of semi-supervised and self-supervised methods.
Semi-Supervised Object Detection via Multi-instance Alignment with Global Class Prototypes
TLDR
A Multi-instance Alignment model is proposed that enhances prediction consistency based on Global Class Prototypes (MA-GCP), imposing consistency between pseudo ground-truths and their high-IoU candidates by minimizing the cross-entropy loss of their class distributions computed from the global class prototypes.
Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?
TLDR
RELICv2 is the first representation learning method to consistently outperform the supervised baseline in a like-for-like comparison across a range of standard ResNet architectures, and it is shown that, despite using ResNet encoders, RELICv2 is comparable to state-of-the-art self-supervised vision transformers.
Constrained Mean Shift Using Distant Yet Related Neighbors for Representation Learning
TLDR
This work proposes to generalize the MSF algorithm by constraining the search space for nearest neighbors, showing that the method outperforms MSF in the SSL setting when the constraint uses a different augmentation of the image, and outperforms PAWS in the semi-supervised setting with fewer training resources when the constraint ensures that the nearest neighbors share the query's pseudo-label.
Debiased Learning from Naturally Imbalanced Pseudo-Labels
TLDR
This work proposes a novel and effective debiased learning method with pseudo-labels, based on counterfactual reasoning and adaptive margins: the former removes the classifier's response bias, whereas the latter adjusts the margin of each class according to the imbalance of the pseudo-labels.
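The adaptive-margin half of the idea can be sketched via the standard logit-adjustment trick, driven here by pseudo-label frequencies (an assumption-laden sketch, not the paper's exact formulation; the counterfactual debiasing of the classifier response is not shown):

import torch
import torch.nn.functional as F

def margin_adjusted_loss(logits, pseudo_labels, num_classes, tau=1.0):
    # Estimate the (imbalanced) pseudo-label prior; adding tau * log(prior)
    # during training enlarges the effective margin of over-predicted
    # classes, so the raw logits learn a more balanced posterior.
    counts = torch.bincount(pseudo_labels, minlength=num_classes).float()
    prior = (counts + 1) / (counts.sum() + num_classes)  # smoothed frequency
    return F.cross_entropy(logits + tau * prior.log(), pseudo_labels)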
Distribution-Aware Semantics-Oriented Pseudo-label for Imbalanced Semi-Supervised Learning
TLDR
This paper addresses the relatively under-explored problem of imbalanced semi-supervised learning, where heavily biased pseudo-labels can harm model performance, and proposes a general pseudo-labeling framework to address this bias.
Masked Siamese Networks for Label-Efficient Learning
TLDR
This work proposes Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations that improves the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification.
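The masking step at the heart of MSN can be sketched as random patch dropping on the anchor view's token sequence (shapes and the mask ratio here are illustrative assumptions; the target view stays unmasked):

import torch

def random_patch_mask(tokens, mask_ratio=0.5):
    # tokens: (batch, num_patches, dim) patch embeddings of the anchor view;
    # keep a random subset of patches per image.
    b, n, d = tokens.shape
    keep = int(n * (1 - mask_ratio))
    idx = torch.rand(b, n, device=tokens.device).argsort(dim=1)[:, :keep]
    return tokens.gather(1, idx.unsqueeze(-1).expand(b, keep, d))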
AggMatch: Aggregating Pseudo Labels for Semi-Supervised Learning
TLDR
This paper introduces an aggregation module for the consistency-regularization framework that aggregates the initial pseudo-labels based on the similarity between instances, and proposes a novel uncertainty-based confidence measure for the pseudo-label that considers the consensus among multiple hypotheses with different subsets of the queue.
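A bare-bones version of the aggregation step (the paper additionally maintains a queue and the uncertainty-based confidence measure; the names and temperature below are assumptions):

import torch
import torch.nn.functional as F

def aggregate_pseudo_labels(emb, init_probs, temp=0.1):
    # Refine each instance's pseudo-label as a similarity-weighted
    # average of the initial pseudo-labels of all instances in the batch.
    emb = F.normalize(emb, dim=1)
    weights = (emb @ emb.T / temp).softmax(dim=1)
    return weights @ init_probs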

References

SHOWING 1-10 OF 59 REFERENCES
Unsupervised Learning of Visual Features by Contrasting Cluster Assignments
TLDR
This paper proposes an online algorithm, SwAV, that takes advantage of contrastive methods without requiring pairwise comparisons to be computed, using a swapped prediction mechanism in which the cluster assignment of one view is predicted from the representation of another view.
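A condensed sketch of the swapped-prediction objective (scores are dot products with learned prototypes; the Sinkhorn-Knopp equipartition step is reduced to a few normalization iterations, following the paper's pseudocode but with illustrative hyperparameters):

import torch

@torch.no_grad()
def sinkhorn(scores, eps=0.05, iters=3):
    # Turn prototype scores into (approximately) equipartitioned codes.
    q = torch.exp(scores / eps).T  # (prototypes, batch)
    q /= q.sum()
    K, B = q.shape
    for _ in range(iters):
        q /= q.sum(dim=1, keepdim=True); q /= K  # rows sum to 1/K
        q /= q.sum(dim=0, keepdim=True); q /= B  # cols sum to 1/B
    return (q * B).T  # (batch, prototypes) codes

def swav_loss(scores1, scores2, temp=0.1):
    # Predict the code of one view from the prototype scores of the other.
    q1, q2 = sinkhorn(scores1), sinkhorn(scores2)
    p1 = (scores1 / temp).log_softmax(dim=1)
    p2 = (scores2 / temp).log_softmax(dim=1)
    return -0.5 * ((q2 * p1).sum(dim=1).mean() + (q1 * p2).sum(dim=1).mean())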
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
TLDR
This paper demonstrates the power of a simple combination of two common SSL methods: consistency regularization and pseudo-labeling, and shows that FixMatch achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
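In code, FixMatch's unlabeled-loss term is only a few lines (a sketch; the 0.95 confidence threshold matches the paper, while the model and the weak/strong augmentations are assumed given):

import torch
import torch.nn.functional as F

def fixmatch_unlabeled_loss(model, x_weak, x_strong, threshold=0.95):
    # Pseudo-label from the weakly augmented view, keep only confident
    # predictions, and enforce them on the strongly augmented view.
    with torch.no_grad():
        probs = model(x_weak).softmax(dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    loss = F.cross_entropy(model(x_strong), pseudo, reduction="none")
    return (loss * mask).mean()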
Big Self-Supervised Models are Strong Semi-Supervised Learners
TLDR
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
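The third step reduces to a standard soft-target distillation loss on unlabeled images (a sketch; the temperature and names are illustrative):

import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=1.0):
    # Train the student to match the temperature-scaled teacher
    # distribution on unlabeled images.
    t = (teacher_logits / T).softmax(dim=1)
    s = (student_logits / T).log_softmax(dim=1)
    return -(t * s).sum(dim=1).mean()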
S4L: Self-Supervised Semi-Supervised Learning
TLDR
It is shown that S4L and existing semi-supervised methods can be jointly trained, yielding a new state-of-the-art result on semi-supervised ILSVRC-2012 with 10% of the labels.
Semi-Supervised Learning by Label Gradient Alignment
TLDR
This work presents label gradient alignment, a novel algorithm for semi-supervised learning that imputes labels for the unlabeled data and trains on the imputed labels, demonstrating state-of-the-art accuracy in semi-supervised CIFAR-10 classification.
Self-labelling via simultaneous clustering and representation learning
TLDR
The proposed novel and principled learning formulation is able to self-label visual data so as to train highly competitive image representations without manual labels and yields the first self-supervised AlexNet that outperforms the supervised Pascal VOC detection baseline.
Bootstrap Your Own Latent: A New Approach to Self-Supervised Learning
TLDR
This work introduces Bootstrap Your Own Latent (BYOL), a new approach to self-supervised image representation learning that performs on par with or better than the current state of the art on both transfer and semi-supervised benchmarks.
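BYOL's core trick fits in a few lines: the target network is an exponential moving average (EMA) of the online network, and no negative pairs are used (the momentum value is the paper's base setting; the module names are assumptions):

import torch

@torch.no_grad()
def ema_update(online, target, momentum=0.996):
    # Slowly track the online encoder; the target provides stable
    # regression targets for the online network's predictor head.
    for p_o, p_t in zip(online.parameters(), target.parameters()):
        p_t.mul_(momentum).add_(p_o, alpha=1.0 - momentum)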
EnAET: Self-Trained Ensemble AutoEncoding Transformations for Semi-Supervised Learning
TLDR
This study trains an Ensemble of Auto-Encoding Transformations (EnAET) to learn from both labeled and unlabeled data by decoding both spatial and non-spatial transformations from the embedded representations, under a rich family of transformations.
Meta Pseudo Labels
We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.
A Simple Framework for Contrastive Learning of Visual Representations
TLDR
It is shown that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps compared to supervised learning.
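For concreteness, a compact NT-Xent (contrastive) loss in the spirit of SimCLR (z1 and z2 are projections of two augmented views of the same batch; the temperature is illustrative):

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temp=0.5):
    # Each embedding's positive is the other view of the same image;
    # all remaining 2N - 2 embeddings in the batch act as negatives.
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.T / temp
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(sim.device)
    return F.cross_entropy(sim, targets)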