AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence

@inproceedings{Gong2021AlphaMatchIC,
  title={AlphaMatch: Improving Consistency for Semi-supervised Learning with Alpha-divergence},
  author={Chengyue Gong and Dilin Wang and Qiang Liu},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={13678-13687}
}
Semi-supervised learning (SSL) is a key approach toward more data-efficient machine learning by jointly leveraging both labeled and unlabeled data. We propose AlphaMatch, an efficient SSL method that leverages data augmentations by efficiently enforcing label consistency between the data points and the augmented data derived from them. Our key technical contribution lies in: 1) using alpha-divergence to prioritize the regularization on data with high confidence, achieving a similar effect as …
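The abstract is cut off on this page, but the mechanism it sketches, penalizing disagreement between predictions on a data point and on its augmentation with an alpha-divergence, can be illustrated with a minimal NumPy sketch. The Amari parameterization of the divergence, the choice alpha = 2, and the toy p_weak / p_strong vectors below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def alpha_divergence(p, q, alpha=2.0, eps=1e-8):
    """Amari-style alpha-divergence between discrete distributions p and q.

    As alpha -> 1 this recovers KL(p || q); larger alpha puts more weight on
    peaked (high-confidence) p, the effect the AlphaMatch abstract describes.
    """
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    if abs(alpha - 1.0) < 1e-6:                      # KL limit
        return float(np.sum(p * np.log(p / q)))
    return float((1.0 - np.sum(p**alpha * q**(1.0 - alpha)))
                 / (alpha * (1.0 - alpha)))

# Toy consistency term: the prediction on a weakly augmented view serves as the
# target, and the prediction on a strongly augmented view is pulled toward it.
p_weak   = np.array([0.90, 0.07, 0.03])   # confident pseudo-label
p_strong = np.array([0.60, 0.30, 0.10])   # prediction on strong augmentation
print(f"alpha-divergence consistency loss: {alpha_divergence(p_weak, p_strong):.4f}")
```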

Citations

Credal Self-Supervised Learning
TLDR
The key idea is to let the learner itself iteratively generate “pseudo-supervision” for unlabeled instances based on its current hypothesis; to learn from weakly labeled data of that kind, the authors leverage methods that have recently been proposed in the realm of so-called superset learning.
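The “pseudo-supervision” idea in this citing paper refines plain self-training. As a rough, hypothetical sketch of that baseline mechanism (not the credal-set / superset-learning machinery the paper actually proposes), one can keep only the unlabeled examples the current model labels confidently; the 0.95 threshold and toy probabilities below are illustrative assumptions.

```python
import numpy as np

def pseudo_labels(probs, threshold=0.95):
    """Keep only unlabeled examples the current model is confident about.

    Returns (indices, hard labels) for the retained examples.
    """
    conf = probs.max(axis=1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs[keep].argmax(axis=1)

# Fake "current hypothesis" outputs for 4 unlabeled examples, 3 classes.
probs = np.array([[0.97, 0.02, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.05, 0.93, 0.02],
                  [0.60, 0.30, 0.10]])
idx, labels = pseudo_labels(probs)
print(idx, labels)   # -> [0 2] [0 1]
```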

References

MixMatch: A Holistic Approach to Semi-Supervised Learning
TLDR
This work unifies the current dominant approaches to semi-supervised learning to produce a new algorithm, MixMatch, which works by guessing low-entropy labels for data-augmented unlabeled examples and mixing labeled and unlabeled data using MixUp.
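As a hedged sketch of the two ingredients named in this summary, temperature sharpening (to get low-entropy label guesses) and MixUp mixing, here is a small NumPy version. The temperature T = 0.5 and Beta parameter 0.75 are commonly quoted MixMatch defaults, but treat them as illustrative rather than authoritative.

```python
import numpy as np

def sharpen(p, T=0.5):
    """Lower the entropy of a guessed label distribution (temperature sharpening)."""
    p = p ** (1.0 / T)
    return p / p.sum()

def mixup(x1, y1, x2, y2, alpha=0.75, rng=np.random.default_rng(0)):
    """Convexly combine two examples and their (soft) labels."""
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # keep the mix close to the first input
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

guess = np.array([0.5, 0.3, 0.2])
print(sharpen(guess))                  # more peaked, lower-entropy target

# Mix a labeled example with a pseudo-labeled one.
x_mix, y_mix = mixup(np.ones(4), np.array([1.0, 0.0, 0.0]),
                     np.zeros(4), sharpen(guess))
print(x_mix, y_mix)
```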
ReMixMatch: Semi-Supervised Learning with Distribution Alignment and Augmentation Anchoring
TLDR
A variant of AutoAugment is introduced that learns the augmentation policy while the model is being trained; the resulting method is significantly more data-efficient than prior work, requiring between 5 and 16 times less data to reach the same accuracy.
Unsupervised Data Augmentation for Consistency Training
TLDR
A new perspective on how to effectively noise unlabeled examples is presented, and it is argued that the quality of noising, specifically that produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Temporal Ensembling for Semi-Supervised Learning
TLDR
Self-ensembling is introduced, where the ensemble prediction formed from the network's outputs over multiple training epochs can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training.
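A minimal sketch of the self-ensembling update described here, assuming the usual form in which a running average of per-example predictions is accumulated across epochs and bias-corrected before being used as a training target (the momentum value is illustrative):

```python
import numpy as np

def update_ensemble(Z, z_epoch, epoch, alpha=0.6):
    """Accumulate an exponential moving average of per-example predictions
    and return the bias-corrected target used for the consistency loss.

    Z       : running EMA of predictions, shape (num_examples, num_classes)
    z_epoch : this epoch's predictions, same shape
    """
    Z = alpha * Z + (1.0 - alpha) * z_epoch
    target = Z / (1.0 - alpha ** (epoch + 1))   # startup bias correction
    return Z, target

Z = np.zeros((2, 3))
for epoch, z in enumerate([np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]]),
                           np.array([[0.8, 0.1, 0.1], [0.2, 0.6, 0.2]])]):
    Z, target = update_ensemble(Z, z, epoch)
print(target)   # a smoothed average of the two epochs' predictions
```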
Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results
TLDR
The recently proposed Temporal Ensembling has achieved state-of-the-art results in several semi-supervised learning benchmarks, but it becomes unwieldy when learning on large datasets, so Mean Teacher, a method that averages model weights instead of label predictions, is proposed.
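The weight-averaging step itself is simple; a minimal NumPy sketch, assuming weights stored as a dict of arrays and a typical EMA decay of 0.999 (an illustrative value), looks like this:

```python
import numpy as np

def ema_update(teacher, student, decay=0.999):
    """Mean-Teacher-style update: the teacher's weights are an exponential
    moving average of the student's weights."""
    return {name: decay * teacher[name] + (1.0 - decay) * student[name]
            for name in teacher}

student = {"w": np.array([1.0, 2.0]), "b": np.array([0.5])}
teacher = {k: v.copy() for k, v in student.items()}

# After a student gradient step, drift the teacher slowly toward it.
student["w"] += 0.1
teacher = ema_update(teacher, student)
print(teacher["w"])   # [1.0001 2.    ]
```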
Unsupervised Data Augmentation
TLDR
UDA has a small twist in that it makes use of harder and more realistic noise generated by state-of-the-art data augmentation methods, which leads to substantial improvements on six language tasks and three vision tasks even when the labeled set is extremely small.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence …
Analyzing the effectiveness and applicability of co-training
TLDR
It is demonstrated that, when learning from labeled and unlabeled data, algorithms that explicitly leverage a natural independent split of the features outperform algorithms that do not, and that co-training-style algorithms may out-perform algorithms not using a feature split.
Semi-supervised Learning by Entropy Minimization
TLDR
This framework, which motivates minimum-entropy regularization, makes it possible to incorporate unlabeled data into standard supervised learning, and it includes other approaches to the semi-supervised problem as particular or limiting cases.
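A hedged sketch of the regularizer this summary refers to: the average entropy of the model's predictions on unlabeled data, which is small for confident predictions and large for uncertain ones (the toy probability vectors are illustrative).

```python
import numpy as np

def entropy_regularizer(probs, eps=1e-8):
    """Mean Shannon entropy of predictions on unlabeled data.

    Adding lambda * entropy to the supervised loss encourages confident
    predictions, pushing the decision boundary away from dense regions.
    """
    p = np.clip(probs, eps, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

confident = np.array([[0.98, 0.01, 0.01]])
uncertain = np.array([[0.34, 0.33, 0.33]])
print(entropy_regularizer(confident))   # ~0.11  (small penalty)
print(entropy_regularizer(uncertain))   # ~1.10  (large penalty)
```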