• Corpus ID: 13123084

Temporal Ensembling for Semi-Supervised Learning

@article{Laine2017TemporalEF,
  title={Temporal Ensembling for Semi-Supervised Learning},
  author={Samuli Laine and Timo Aila},
  journal={ArXiv},
  year={2017},
  volume={abs/1610.02242}
}
In this paper, we present a simple and efficient method for training deep neural networks in a semi-supervised setting where only a small portion of training data is labeled. [...] Finally, we demonstrate good tolerance to incorrect labels.
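A minimal PyTorch sketch of the temporal-ensembling loss this abstract describes: an exponential moving average of each sample's past predictions serves as the target of a ramped-up consistency (MSE) term, alongside cross-entropy on the labeled subset. The per-batch ensemble update, the ramp-up shape, and the hyper-parameter values are simplifications of the paper's epoch-level procedure, not the reference implementation.

import torch
import torch.nn.functional as F

def temporal_ensembling_loss(logits, labels, Z, sample_idx, epoch,
                             alpha=0.6, w_max=30.0, ramp_epochs=80):
    """logits: (B, C) current outputs; labels: (B,) with -1 marking unlabeled samples;
    Z: (N, C) running ensemble predictions for the whole dataset; sample_idx: (B,) dataset indices."""
    probs = F.softmax(logits, dim=1)
    labeled = labels >= 0
    # Supervised term: cross-entropy on the labeled subset only.
    sup = F.cross_entropy(logits[labeled], labels[labeled]) if labeled.any() else logits.new_zeros(())
    # Bias-corrected ensemble targets (no gradient flows through them).
    z_hat = (Z[sample_idx] / (1.0 - alpha ** (epoch + 1))).detach()
    # Consistency weight, ramped up over training (illustrative ramp; the paper uses a Gaussian ramp-up).
    w = w_max * min(1.0, epoch / ramp_epochs) ** 2
    cons = F.mse_loss(probs, z_hat)
    # Accumulate the ensemble predictions (the paper does this once per epoch, not per batch).
    with torch.no_grad():
        Z[sample_idx] = alpha * Z[sample_idx] + (1 - alpha) * probs
    return sup + w * cons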
Citations

Semi-Supervised Learning Enabled by Multiscale Deep Neural Network Inversion
TLDR
This paper develops a general loss function enabling DNNs of any topology to be trained in a semi-supervised manner without extra hyper-parameters, and demonstrates that it reaches state-of-the-art performance on the SVHN and CIFAR10 data sets.
SELF: Learning to Filter Noisy Labels with Self-Ensembling
TLDR
This work presents a simple and effective method, self-ensemble label filtering (SELF), that progressively filters out wrong labels during training; it substantially outperforms all previous works on noise-aware learning across different datasets and can be applied to a broad set of network architectures.
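A hedged sketch of the filtering idea summarized above, assuming the self-ensemble is a per-sample running average of predictions: a training label is kept only while the ensemble's argmax agrees with it. The momentum value and agreement criterion are illustrative; the paper's full procedure is more involved.

import torch

def update_and_filter(Z, sample_idx, probs, labels, momentum=0.9):
    """Z: (N, C) running-average predictions; returns a mask of labels still trusted for training."""
    with torch.no_grad():
        Z[sample_idx] = momentum * Z[sample_idx] + (1.0 - momentum) * probs
    keep = Z[sample_idx].argmax(dim=1) == labels   # keep a label only while the ensemble agrees with it
    return keep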
Unsupervised Data Augmentation for Consistency Training
TLDR
A new perspective on how to effectively noise unlabeled examples is presented and it is argued that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
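A minimal sketch of the consistency objective described above: the prediction on a strongly augmented view is pulled toward the fixed prediction on the clean view via KL divergence. The function names, the `augment` callable, and the confidence threshold are illustrative rather than the reference implementation.

import torch
import torch.nn.functional as F

def uda_consistency_loss(model, x_unlabeled, augment, conf_threshold=0.8):
    with torch.no_grad():
        p_clean = F.softmax(model(x_unlabeled), dim=1)         # fixed target from the clean view
        mask = p_clean.max(dim=1).values >= conf_threshold     # keep only confident targets
    log_p_aug = F.log_softmax(model(augment(x_unlabeled)), dim=1)
    kl = F.kl_div(log_p_aug, p_clean, reduction="none").sum(dim=1)
    return (kl * mask.float()).mean()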
LaplaceNet: A Hybrid Energy-Neural Model for Deep Semi-Supervised Classification
TLDR
A new framework, LaplaceNet, is proposed for deep semi-supervised classification that has a greatly reduced model complexity and it is demonstrated, through rigorous experimentation, that a multi-sampling augmentation approach improves generalisation and reduces the sensitivity of the network to augmentation.
Tri-net for Semi-Supervised Deep Learning
TLDR
This paper proposes tri-net, a deep neural network which is able to use massive unlabeled data to help learning with limited labeled data, and considers model initialization, diversity augmentation and pseudo-label editing simultaneously.
Semi-supervised Text Classification with Temporal Ensembling
  • Rong Xiang, Shiqun Yin
  • 2021 International Conference on Computer Communication and Artificial Intelligence (CCAI)
  • 2021
Supervised learning algorithms require a lot of labeled data to train the model, while obtaining labeled data for large datasets is costly and time-consuming. Semi-supervised learning algorithms can [...]
Robust Temporal Ensembling
  • 2020
Successful training of deep neural networks with noisy labels is an essential capability as most real-world datasets contain some amount of mislabeled data. Left unmitigated, label noise can sharply [...]
Lautum Regularization for Semi-Supervised Transfer Learning
TLDR
The theory suggests that one may improve the transferability of a deep neural network by imposing a Lautum information based regularization that relates the network weights to the target data.
Robust Temporal Ensembling for Learning with Noisy Labels
TLDR
Robust temporal ensembling (RTE) is presented, which combines robust loss with semi-supervised regularization methods to achieve noise-robust learning and achieves state-of-the-art performance across the CIFAR-10, CIFAR-100, ImageNet, WebVision, and Food-101N datasets.
EnAET: Self-Trained Ensemble AutoEncoding Transformations for Semi-Supervised Learning
TLDR
This study trains an Ensemble of Auto-Encoding Transformations (EnAET) to learn from both labeled and unlabeled data based on the embedded representations by decoding both spatial and non-spatial transformations under a rich family of transformations.

References

Showing 1-10 of 53 references
Regularization With Stochastic Transformations and Perturbations for Deep Semi-Supervised Learning
TLDR
An unsupervised loss function is proposed that takes advantage of the stochastic nature of these transformations and perturbations and minimizes the difference between the predictions of multiple passes of a training sample through the network.
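A minimal sketch of that idea: pass each sample through the stochastic network (dropout, random transformations) several times and penalize pairwise disagreement between the resulting predictions. The number of passes and the squared-error form are illustrative choices.

import torch
import torch.nn.functional as F

def stability_loss(model, x, n_passes=3):
    preds = [F.softmax(model(x), dim=1) for _ in range(n_passes)]   # stochastic forward passes
    loss = x.new_zeros(())
    for i in range(n_passes):
        for j in range(i + 1, n_passes):
            loss = loss + F.mse_loss(preds[i], preds[j])
    return loss / (n_passes * (n_passes - 1) / 2)   # average over pairs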
Mutual Exclusivity Loss for Semi-Supervised Deep Learning
TLDR
An unsupervised regularization term is proposed that explicitly forces the classifier's prediction for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds corresponding to different classes of data.
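A hedged sketch of a mutual-exclusivity style regularizer, written from the description above rather than the paper's exact formula: the term is minimized when exactly one class probability is 1 and the rest are 0, pushing unlabeled predictions away from the decision boundary.

import torch
import torch.nn.functional as F

def mutual_exclusivity_loss(logits, eps=1e-8):
    p = F.softmax(logits, dim=1)                                 # (B, C) class probabilities
    comp = torch.clamp(1.0 - p, min=eps)                         # the (1 - p_j) factors
    log_comp_sum = torch.log(comp).sum(dim=1, keepdim=True)
    prod_others = torch.exp(log_comp_sum - torch.log(comp))     # prod over j != k, computed in log space
    return -(p * prod_others).sum(dim=1).mean()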
Snapshot Ensembles: Train 1, get M for free
TLDR
This paper proposes a method to achieve the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost, by training a single neural network that converges to several local minima along its optimization path and saving the model parameters at each.
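A minimal sketch of that recipe: a cyclic cosine learning-rate schedule drives the network into a sequence of local minima, and a snapshot of the weights is saved at the end of each cycle. The schedule constants and the inner training loop are illustrative.

import copy
import math

def train_snapshots(model, optimizer, loader, loss_fn, epochs=300, cycles=6, lr_max=0.1):
    snapshots, epochs_per_cycle = [], epochs // cycles
    for epoch in range(epochs):
        t = (epoch % epochs_per_cycle) / epochs_per_cycle
        lr = 0.5 * lr_max * (1.0 + math.cos(math.pi * t))       # cosine decay within each cycle
        for group in optimizer.param_groups:
            group["lr"] = lr
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
        if (epoch + 1) % epochs_per_cycle == 0:                 # cycle end: model sits near a minimum
            snapshots.append(copy.deepcopy(model.state_dict()))
    return snapshots   # at test time, average the predictions of the saved snapshots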
Training Deep Neural Networks on Noisy Labels with Bootstrapping
TLDR
A generic way to handle noisy and incomplete labeling by augmenting the prediction objective with a notion of consistency is proposed, which considers a prediction consistent if the same prediction is made given similar percepts, where the notion of similarity is between deep network features computed from the input data.
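A hedged sketch of the "soft bootstrapping" variant of this objective: the training target blends the given (possibly noisy) label with the model's own current prediction, so consistent self-predictions can override mislabeled examples. The blending weight beta is illustrative.

import torch
import torch.nn.functional as F

def soft_bootstrap_loss(logits, noisy_labels, num_classes, beta=0.95):
    p = F.softmax(logits, dim=1)
    q = F.one_hot(noisy_labels, num_classes).float()
    target = beta * q + (1.0 - beta) * p.detach()               # blend given label with self-prediction
    return -(target * F.log_softmax(logits, dim=1)).sum(dim=1).mean()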
Learning with Pseudo-Ensembles
TLDR
A novel regularizer based on making the behavior of a pseudo-ensemble robust with respect to the noise process generating it is presented, which naturally extends to the semi-supervised setting, where it produces state-of-the-art results.
Training Convolutional Networks with Noisy Labels
TLDR
An extra noise layer is introduced into the network that adapts the network outputs to match the noisy label distribution; it can be estimated as part of the training process and involves only simple modifications to current training infrastructures for deep networks.
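A hedged sketch of such a noise layer: the model's clean-class probabilities are multiplied by a learned, row-stochastic confusion matrix so the training loss is computed against the noisy labels, and the layer is discarded at test time. The parameterization and near-identity initialization are illustrative, not the paper's exact construction.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseAdaptationLayer(nn.Module):
    """Maps the model's clean-class probabilities to noisy-label probabilities."""
    def __init__(self, num_classes):
        super().__init__()
        # Unconstrained parameters; a row-wise softmax keeps Q a stochastic confusion matrix.
        self.theta = nn.Parameter(torch.eye(num_classes) * 5.0)   # near-identity start (illustrative)
    def forward(self, clean_probs):
        Q = F.softmax(self.theta, dim=1)    # Q[i, j] ~ P(noisy label j | true class i)
        return clean_probs @ Q              # distribution over observed (noisy) labels

# Training: minimize NLL of NoiseAdaptationLayer()(softmax(logits)) against the noisy labels.
# Testing: use softmax(logits) directly; the noise layer is dropped.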
Semi-supervised Learning with Deep Generative Models
TLDR
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach
TLDR
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise, and it is shown how one can estimate the label-noise transition probabilities, adapting a recent technique for noise estimation to the multi-class setting and providing an end-to-end framework.
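A minimal sketch of the "forward" loss correction, assuming the noise-transition matrix T (with T[i, j] = P(noisy label j | true class i)) is known or already estimated: the model's clean-class posteriors are pushed through T before computing cross-entropy against the observed noisy labels.

import torch

def forward_corrected_loss(logits, noisy_labels, T, eps=1e-8):
    p_clean = torch.softmax(logits, dim=1)   # model's posteriors over true classes
    p_noisy = p_clean @ T                    # posteriors over observed (noisy) labels
    picked = p_noisy.gather(1, noisy_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(picked + eps).mean()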
Making Neural Networks Robust to Label Noise: A Loss Correction Approach
TLDR
It is proved that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise, and the proposed procedures for loss correction simply amount to at most a matrix inversion and multiplication.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.