• Corpus ID: 246015358

Contrastive Regularization for Semi-Supervised Learning

@article{Lee2022ContrastiveRF,
  title={Contrastive Regularization for Semi-Supervised Learning},
  author={Doyup Lee and Sungwoong Kim and Ildoo Kim and Yeongjae Cheon and Minsu Cho and Wook-Shin Han},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.06247}
}
Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance. In this study, we show that consistency regularization restricts the propagation of labeling information because samples with unconfident pseudo-labels are excluded from model updates. We then propose contrastive regularization to improve both the efficiency and accuracy of the consistency…
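The truncated abstract does not give the paper's exact loss, but the contrast it draws can be sketched. Below is a minimal PyTorch illustration, assuming a FixMatch-style confidence mask and a generic InfoNCE term over two augmented views; the function names, threshold, and temperature are illustrative assumptions, not the authors' formulation.

import torch
import torch.nn.functional as F

def masked_consistency_loss(logits_weak, logits_strong, tau=0.95):
    # Pseudo-labels come from the weakly augmented view; samples whose
    # confidence falls below tau are masked out of the update entirely.
    probs = torch.softmax(logits_weak.detach(), dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = (conf >= tau).float()
    loss = F.cross_entropy(logits_strong, pseudo, reduction="none")
    return (mask * loss).mean()

def contrastive_regularizer(feat_weak, feat_strong, temperature=0.1):
    # An InfoNCE-style term over features: the two views of every sample
    # attract each other regardless of confidence, so even unconfident
    # samples contribute gradient signal.
    z1 = F.normalize(feat_weak, dim=1)
    z2 = F.normalize(feat_strong, dim=1)
    logits = z1 @ z2.t() / temperature   # (B, B) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)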

References

Showing 1–10 of 34 references
Pseudo-Labeling and Confirmation Bias in Deep Semi-Supervised Learning
TLDR
This work shows that naive pseudo-labeling overfits to incorrect pseudo-labels due to so-called confirmation bias, and demonstrates that mixup augmentation and setting a minimum number of labeled samples per mini-batch are effective regularization techniques for reducing it.
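The mixup remedy mentioned above fits in a few lines of PyTorch. This is a generic sketch, assuming soft pseudo-label targets; the helper names are hypothetical, and the minimum-labeled-samples constraint would live in the batch sampler rather than in this snippet.

import torch
import torch.nn.functional as F

def mixup_pseudo_batch(x, soft_targets, alpha=1.0):
    # Convex combinations of inputs and soft pseudo-labels; mixing
    # dilutes the influence of any single incorrect pseudo-label.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0), device=x.device)
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * soft_targets + (1.0 - lam) * soft_targets[perm]
    return x_mix, y_mix

def soft_cross_entropy(logits, soft_targets):
    # Cross-entropy against mixed (non one-hot) targets.
    return -(soft_targets * F.log_softmax(logits, dim=1)).sum(dim=1).mean()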
Unsupervised Data Augmentation for Consistency Training
TLDR
A new perspective is presented on how to effectively inject noise into unlabeled examples, arguing that the quality of the noise, specifically noise produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
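A rough PyTorch sketch of the consistency objective this line describes: a sharpened target distribution from the clean view supervises the strongly augmented view through a KL penalty. The temperature value and function name are assumptions.

import torch
import torch.nn.functional as F

def uda_consistency(logits_weak, logits_strong, temperature=0.4):
    # The (sharpened) prediction on the clean view is the target
    # distribution for the strongly augmented view.
    with torch.no_grad():
        target = torch.softmax(logits_weak / temperature, dim=1)
    log_q = F.log_softmax(logits_strong, dim=1)
    return F.kl_div(log_q, target, reduction="batchmean")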
CoMatch: Semi-supervised Learning with Contrastive Graph Regularization
TLDR
CoMatch is a new semi-supervised learning method that unifies dominant approaches and addresses their limitations, achieving substantial accuracy improvements on label-scarce CIFAR-10 and STL-10.
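CoMatch's graph regularization can be sketched in heavily simplified form: a target graph built from pseudo-label agreement supervises the graph of embedding similarities. The thresholding and normalisation details below are illustrative assumptions, not the paper's exact construction.

import torch
import torch.nn.functional as F

def graph_contrastive_loss(probs_weak, emb_strong, threshold=0.8, temperature=0.1):
    # Target graph: pseudo-label agreement between pairs of unlabeled
    # samples, thresholded and row-normalised, with self-loops.
    W = probs_weak @ probs_weak.t()
    W = torch.where(W >= threshold, W, torch.zeros_like(W))
    W.fill_diagonal_(1.0)
    W = W / W.sum(dim=1, keepdim=True)
    # Embedding graph: softmax over pairwise similarities of projections.
    z = F.normalize(emb_strong, dim=1)
    sim = torch.softmax(z @ z.t() / temperature, dim=1)
    # Cross-entropy between the two graphs, row by row.
    return -(W * torch.log(sim + 1e-8)).sum(dim=1).mean()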
FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence
TLDR
This paper demonstrates the power of a simple combination of two common SSL methods, consistency regularization and pseudo-labeling, and shows that the resulting method, FixMatch, achieves state-of-the-art performance across a variety of standard semi-supervised learning benchmarks.
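The two ingredients named here compose into a single objective, sketched below in PyTorch: standard cross-entropy on labeled data plus a confidence-thresholded pseudo-label term on strongly augmented unlabeled data. The 0.95 threshold is the commonly reported default; treat the snippet as a sketch rather than the authors' code.

import torch
import torch.nn.functional as F

def fixmatch_loss(model, x_lab, y_lab, u_weak, u_strong, tau=0.95, lambda_u=1.0):
    # Supervised cross-entropy on the labeled batch.
    sup = F.cross_entropy(model(x_lab), y_lab)
    # Pseudo-labels from the weak view; only confident ones are kept.
    with torch.no_grad():
        probs = torch.softmax(model(u_weak), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= tau).float()
    unsup = (mask * F.cross_entropy(model(u_strong), pseudo, reduction="none")).mean()
    return sup + lambda_u * unsup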
Semi-supervised Learning by Entropy Minimization
TLDR
This framework, which motivates minimum-entropy regularization, makes it possible to incorporate unlabeled data into standard supervised learning and includes other approaches to the semi-supervised problem as particular or limiting cases.
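Entropy minimization itself is essentially a one-line regularizer; a minimal PyTorch version follows.

import torch

def entropy_regularizer(logits_unlabeled):
    # Penalises high-entropy predictions on unlabeled data, pushing
    # decision boundaries away from dense regions of the input space.
    p = torch.softmax(logits_unlabeled, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()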
Label Propagation for Deep Semi-Supervised Learning
TLDR
This work employs a transductive label propagation method, based on the manifold assumption, to make predictions on the entire dataset; these predictions generate pseudo-labels for the unlabeled data, which are then used to train a deep neural network.
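The diffusion step such methods rely on can be sketched as Zhou-style label spreading on a kNN affinity graph (PyTorch below). The neighbourhood size, damping factor alpha, and iteration count are illustrative, and the real method re-extracts features and re-propagates periodically during training.

import torch
import torch.nn.functional as F

def propagate_labels(features, labels, num_classes, k=50, alpha=0.99, iters=20):
    # labels: LongTensor with -1 marking unlabeled samples.
    z = F.normalize(features, dim=1)
    sim = z @ z.t()
    topk, idx = sim.topk(k, dim=1)                 # sparse kNN affinities
    W = torch.zeros_like(sim).scatter_(1, idx, topk.clamp(min=0.0))
    W = 0.5 * (W + W.t())                          # symmetrise the graph
    d = W.sum(dim=1).clamp(min=1e-8).rsqrt()
    S = d[:, None] * W * d[None, :]                # D^{-1/2} W D^{-1/2}
    Y = torch.zeros(features.size(0), num_classes, device=features.device)
    labeled = labels >= 0
    Y[labeled, labels[labeled]] = 1.0
    Fm = Y.clone()
    for _ in range(iters):                         # F <- alpha*S@F + (1-alpha)*Y
        Fm = alpha * (S @ Fm) + (1.0 - alpha) * Y
    return Fm.argmax(dim=1)                        # hard pseudo-labels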
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
TLDR
An unsupervised loss function is proposed that exploits the stochasticity of these transformations and perturbations, minimizing the difference between the predictions of multiple passes of a training sample through the network.
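The loss reduces to comparing two stochastic forward passes of the same batch; a minimal sketch, assuming the stochasticity comes from dropout or randomized augmentation inside the model or data pipeline.

import torch
import torch.nn.functional as F

def stability_loss(model, x):
    # Two independent stochastic passes; their predictions should agree.
    p1 = torch.softmax(model(x), dim=1)
    p2 = torch.softmax(model(x), dim=1)
    return F.mse_loss(p1, p2)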
Big Self-Supervised Models are Strong Semi-Supervised Learners
TLDR
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2 (a modification of SimCLR), supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples to refine and transfer the task-specific knowledge.
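Steps one and two are standard pretraining and fine-tuning; the third step is a soft-target distillation loss, sketched below with a hypothetical temperature T. This is generic distillation, not the authors' exact recipe.

import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    # The fine-tuned teacher's softened predictions on unlabeled data
    # supervise a (possibly smaller) student network.
    teacher = torch.softmax(teacher_logits / T, dim=1)
    return -(teacher * F.log_softmax(student_logits / T, dim=1)).sum(dim=1).mean()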
Temporal Ensembling for Semi-Supervised Learning
TLDR
Self-ensembling is introduced, in which predictions are aggregated over multiple previous training epochs; this ensemble prediction is shown to be a better predictor of the unknown labels than the network's output at the most recent epoch and can thus be used as a training target.
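A compact sketch of the bookkeeping involved, assuming per-sample prediction storage and startup bias correction as in the paper; the momentum value is illustrative.

import torch

class TemporalEnsemble:
    # Exponential moving average of per-sample predictions across epochs;
    # the bias-corrected ensemble is the consistency target for the next epoch.
    def __init__(self, num_samples, num_classes, momentum=0.6):
        self.Z = torch.zeros(num_samples, num_classes)
        self.momentum = momentum
        self.epochs_done = 0

    def update(self, indices, probs):
        # Accumulate this epoch's softmax outputs for the given sample indices.
        self.Z[indices] = self.momentum * self.Z[indices] + (1.0 - self.momentum) * probs

    def end_epoch(self):
        self.epochs_done += 1

    def targets(self, indices):
        # Correct for Z being initialised at zero (Adam-style bias correction).
        return self.Z[indices] / (1.0 - self.momentum ** max(self.epochs_done, 1))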
Supervised Contrastive Learning
TLDR
A novel training methodology is proposed that consistently outperforms cross-entropy on supervised learning tasks across different architectures and data augmentations; it modifies the batch contrastive loss, which has recently been shown to be highly effective at learning powerful representations in the self-supervised setting.
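The modified batch contrastive loss admits a short self-contained implementation: every same-class sample in the batch acts as a positive for an anchor. The sketch below follows the commonly cited mean-over-positives form; treat the details as an approximation of the paper's loss.

import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    # Normalised embeddings; temperature-scaled pairwise similarities.
    z = F.normalize(features, dim=1)
    sim = z @ z.t() / temperature
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))      # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels[:, None] == labels[None, :]) & ~eye
    pos_per_anchor = pos.sum(dim=1)
    # Mean log-likelihood over each anchor's positives, skipping anchors
    # that have no same-class partner in the batch.
    picked = torch.where(pos, log_prob, torch.zeros_like(log_prob))
    loss = -picked.sum(dim=1) / pos_per_anchor.clamp(min=1)
    return loss[pos_per_anchor > 0].mean()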