AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence

@inproceedings{Gong2021AlphaMatchIC,
  title={AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence},
  author={Chengyue Gong and Dilin Wang and Qiang Liu},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={13678--13687}
}
• Published 23 November 2020
• Computer Science
• 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Semi-supervised learning (SSL) is a key approach toward more data-efficient machine learning that jointly leverages both labeled and unlabeled data. We propose AlphaMatch, an efficient SSL method that leverages data augmentations by efficiently enforcing label consistency between data points and the augmented data derived from them. Our key technical contribution lies in: 1) using alpha-divergence to prioritize the regularization on data with high confidence, achieving a similar effect as…
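The alpha-divergence mentioned in the abstract can be illustrated with a minimal sketch. The snippet below is an assumed NumPy implementation of the standard Amari alpha-divergence between two categorical distributions, not the paper's exact loss; the function name, the `eps` smoothing, and the choice `alpha=2.0` are illustrative assumptions.

```python
import numpy as np

def alpha_divergence(p, q, alpha=2.0, eps=1e-12):
    """Amari alpha-divergence between categorical distributions p and q.

    D_alpha(p || q) = (sum_i p_i^alpha * q_i^(1 - alpha) - 1) / (alpha * (alpha - 1))

    As alpha -> 1 this recovers KL(p || q); varying alpha changes how
    heavily disagreement on confident vs. uncertain predictions is
    penalized, which is the knob the abstract alludes to.
    """
    p = np.asarray(p, dtype=float) + eps  # smooth to avoid 0^negative
    q = np.asarray(q, dtype=float) + eps
    return (np.sum(p**alpha * q**(1.0 - alpha)) - 1.0) / (alpha * (alpha - 1.0))

# Consistency between a model's prediction on a clean input and on an
# augmented view of the same input (toy 3-class example):
p_clean = [0.9, 0.05, 0.05]
q_augmented = [0.7, 0.2, 0.1]
print(alpha_divergence(p_clean, q_augmented, alpha=2.0))
```

In a consistency-training loop, a penalty of this form would be averaged over unlabeled examples and added to the supervised loss; the divergence is zero when the two predictions agree and grows with their disagreement.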
3 Citations

Credal Self-Supervised Learning
• Computer Science, Mathematics
ArXiv
• 2021
The key idea is to let the learner itself iteratively generate "pseudo-supervision" for unlabeled instances based on its current hypothesis and to learn from weakly labeled data of that kind; to this end, the authors leverage methods that have recently been proposed in the realm of so-called superset learning.
• Chengyue Gong
• 2021
Many machine learning tasks have to make a trade-off between two loss functions, typically the main data-fitness loss and an auxiliary loss. The most widely used approach is to optimize the linear…
Deep Semi-Supervised Image Classification Algorithms: a Survey
• JUCS - Journal of Universal Computer Science
• 2021
Semi-supervised learning is a branch of machine learning focused on improving the performance of models when labeled data is scarce but there is access to a large number of unlabeled examples.

References

Showing 1-10 of 51 references
MixMatch: A Holistic Approach to Semi-Supervised Learning
• Computer Science, Mathematics
NeurIPS
• 2019
This work unifies the current dominant approaches for semi-supervised learning to produce a new algorithm, MixMatch, which works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp.
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
A variant of AutoAugment which learns the augmentation policy while the model is being trained, and is significantly more data-efficient than prior work, requiring between 5× and 16× less data to reach the same accuracy.
Unsupervised Data Augmentation for Consistency Training
• Computer Science, Mathematics
NeurIPS
• 2020
A new perspective on how to effectively noise unlabeled examples is presented and it is argued that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Temporal Ensembling for Semi-Supervised Learning
• Computer Science
ICLR
• 2017
Self-ensembling is introduced, where the ensemble prediction aggregated over training epochs can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training.
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
• Computer Science, Mathematics
NIPS
• 2017
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks, but it becomes unwieldy when learning from large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
Unsupervised Data Augmentation
• Computer Science
ArXiv
• 2019
UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods, which leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
Analyzing the effectiveness and applicability of co-training
• Computer Science
CIKM '00
• 2000
It is demonstrated that, when learning from labeled and unlabeled data, algorithms explicitly leveraging a natural independent split of the features outperform algorithms that do not, and may outperform algorithms not using a split at all.
Semi-supervised Learning by Entropy Minimization
• Computer Science, Mathematics
CAP
• 2004
This framework, which motivates minimum entropy regularization, enables the incorporation of unlabeled data into standard supervised learning, and includes other approaches to the semi-supervised problem as particular or limiting cases.