Corpus ID: 18507866

Pseudo-Label : The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks

@inproceedings{Lee2013PseudoLabelT,
  title={Pseudo-Label : The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks},
  author={Dong-Hyun Lee},
  year={2013}
}
We propose a simple and efficient method of semi-supervised learning for deep neural networks. The network is trained in a supervised fashion with labeled and unlabeled data simultaneously. For unlabeled data, Pseudo-Labels, obtained by simply picking the class with the maximum predicted probability, are used as if they were true labels. This is in effect equivalent to Entropy Regularization: it favors a low-density separation between classes, a commonly assumed prior for semi-supervised learning.
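To make the recipe concrete, the training step can be sketched in a few lines of PyTorch: the network's own argmax prediction on each unlabeled batch is treated as a fixed target in a second cross-entropy term, whose coefficient alpha(t) is ramped up as training proceeds so that pseudo-labels only start to matter once the network is reasonably trained. The schedule constants and function names below are illustrative assumptions, not the paper's exact hyperparameters.

```python
# Minimal sketch of a Pseudo-Label training step (constants are
# illustrative, not the paper's exact settings).
import torch
import torch.nn.functional as F

def alpha(t, t1=100, t2=600, alpha_f=3.0):
    """Deferred ramp-up of the unlabeled-loss weight over epochs t."""
    if t < t1:
        return 0.0
    if t < t2:
        return alpha_f * (t - t1) / (t2 - t1)
    return alpha_f

def pseudo_label_step(model, optimizer, x_lab, y_lab, x_unlab, t):
    model.eval()
    with torch.no_grad():                      # pseudo-labels are fixed targets
        pseudo = model(x_unlab).argmax(dim=1)  # class of maximum probability
    model.train()
    loss = (F.cross_entropy(model(x_lab), y_lab)
            + alpha(t) * F.cross_entropy(model(x_unlab), pseudo))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Minimizing cross-entropy against the argmax prediction drives the predicted class distribution on unlabeled data toward one-hot, which is why the method behaves like entropy regularization and pushes decision boundaries into low-density regions.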

Citations

Pseudo-Labeling Using Gaussian Process for Semi-Supervised Deep Learning

This work proposes a simple and novel method of utilizing unlabeled data to improve the performance of deep learning models, generalizes the proposed pseudo-labeling method to any classifier with good performance, and gives advice on selecting a pseudo-labeling method.

Semi-supervised Deep Learning Using Improved Unsupervised Discriminant Projection

This work modifies the unsupervised discriminant projection algorithm from dimensionality reduction and applies it as a regularization term, yielding a new semi-supervised deep learning algorithm that utilizes both the local and nonlocal distribution of abundant unlabeled samples to improve classification performance.

Pseudo-label Selection for Deep Semi-supervised Learning

A confidence-based pseudo-labeling method that chooses high-quality pseudo-labels with a novel uncertainty model is proposed, and results show that it improves the performance of deep semi-supervised learning.
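The core selection idea can be illustrated with a simple confidence threshold on the softmax output; the threshold value and names below are hypothetical stand-ins for the paper's more elaborate uncertainty model.

```python
# Hedged sketch of confidence-based pseudo-label selection: keep only the
# unlabeled samples whose maximum predicted probability clears a threshold.
# The 0.95 threshold is an illustrative assumption.
import torch
import torch.nn.functional as F

def select_pseudo_labels(model, x_unlab, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)        # confidence and predicted class
    keep = conf >= threshold                   # boolean mask of "trusted" samples
    return x_unlab[keep], pseudo[keep]
```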

Learning by Association — A Versatile Semi-Supervised Training Method for Neural Networks

This work proposes a new framework for semi-supervised training of deep neural networks inspired by learning in humans, demonstrates the capabilities of learning by association on several datasets, and shows that it can improve performance on classification tasks tremendously by making use of additionally available unlabeled data.

Semi-supervised Deep Learning for Fully Convolutional Networks

This work lifts the concept of auxiliary manifold embedding for semi-supervised learning to FCNs with the help of Random Feature Embedding and leverages the proposed framework for the purpose of domain adaptation.

Two Semi-supervised Approaches to Malware Detection with Neural Networks

This paper compares two semi-supervised algorithms for deep neural networks on a large real-world malware dataset, and evaluates the performance of a rather straightforward method called pseudo-labeling, which uses high-confidence predictions on unlabeled samples as if they were the actual labels.

Semi-supervised Learning with Contrastive Predicative Coding

This paper explores the recently developed contrastive predictive coding technique to improve the discriminative power of deep learning models when a large portion of labels are absent, and proposes two models, CPC-SSL and a class-conditional variant.
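At the heart of contrastive predictive coding is an InfoNCE-style objective: a predicted code must identify its true counterpart among negatives drawn from the batch. A minimal sketch, with all names assumed:

```python
# Illustrative InfoNCE loss as used in contrastive predictive coding:
# each predicted code should score highest against its own true code
# (the diagonal of the similarity matrix), with the rest of the batch
# acting as negatives.
import torch
import torch.nn.functional as F

def info_nce(predicted, actual):
    # predicted, actual: (batch, dim) representation pairs
    logits = predicted @ actual.t()                           # (batch, batch)
    targets = torch.arange(predicted.size(0), device=predicted.device)
    return F.cross_entropy(logits, targets)                   # positives on diagonal
```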

Semi-supervised Learning Using Siamese Networks

A new training method for semi-supervised learning is presented, based on learning a similarity function with a Siamese network to obtain a suitable embedding, and an empirical study of this iterative self-training algorithm is performed.
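A common way to train such a Siamese embedding is a pairwise contrastive loss that pulls same-class pairs together and pushes different-class pairs at least a margin apart; the margin and names below are illustrative, not necessarily the paper's formulation.

```python
# Illustrative contrastive loss for a Siamese embedding network.
import torch

def contrastive_loss(z1, z2, same_class, margin=1.0):
    d = torch.norm(z1 - z2, dim=1)                   # pairwise distances
    pos = same_class.float() * d.pow(2)              # pull similar pairs together
    neg = (1 - same_class.float()) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()
```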

Deep Bayesian Active Semi-Supervised Learning

A new method is presented that combines active and semi-supervised deep learning to achieve high generalization performance from a deep convolutional neural network with as few known labels as possible, yielding an agile labeling process that attains high accuracy while requiring only a small number of known labels.
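One plausible way to combine the two ingredients, sketched here purely as an assumption about the general approach rather than the paper's exact rule: query human labels for the least confident unlabeled samples and pseudo-label the most confident ones.

```python
# Hedged sketch of mixing active learning with pseudo-labeling: the least
# confident samples are sent for annotation, the most confident ones are
# pseudo-labeled. n_query and the threshold are illustrative.
import torch
import torch.nn.functional as F

def split_unlabeled(model, x_unlab, n_query=10, threshold=0.95):
    with torch.no_grad():
        probs = F.softmax(model(x_unlab), dim=1)
        conf, pseudo = probs.max(dim=1)
    query_idx = conf.argsort()[:n_query]       # least confident -> ask an annotator
    keep = conf >= threshold                   # most confident -> trust the model
    return query_idx, x_unlab[keep], pseudo[keep]
```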
...

References

Showing 1-10 of 18 references

Semi-supervised learning of compact document representations with deep networks

An algorithm to learn text document representations based on semi-supervised autoencoders that are stacked to form a deep network, which can be trained efficiently on partially labeled corpora and produces very compact representations of documents while retaining as much class information and joint word statistics as possible.

Extracting and composing robust features with denoising autoencoders

This work introduces and motivates a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern.
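The denoising principle is compact enough to state in code: corrupt the input, e.g. by zeroing a random fraction of its entries, and train the autoencoder to reconstruct the clean version. Layer sizes and the corruption rate below are illustrative.

```python
# Minimal denoising autoencoder sketch: mask a random 30% of the input
# (an illustrative corruption process) and reconstruct the clean input.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim=784, hidden=256, corruption=0.3):
        super().__init__()
        self.corruption = corruption
        self.encode = nn.Sequential(nn.Linear(dim, hidden), nn.Sigmoid())
        self.decode = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        mask = (torch.rand_like(x) > self.corruption).float()
        return self.decode(self.encode(x * mask))

# Training minimizes reconstruction error against the uncorrupted input:
# loss = nn.functional.mse_loss(model(x), x)
```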

Semi-Supervised Classification by Low Density Separation

Three semi-supervised algorithms are proposed: deriving graph-based distances that emphasize low-density regions between clusters, followed by training a standard SVM, and optimizing the Transductive SVM objective function by gradient descent.
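The gradient-descent variant can be sketched as a differentiable objective: the usual hinge loss on labeled points plus a symmetric hinge on unlabeled points that pushes the boundary away from them, which is exactly the low-density-separation pressure. Names and weights below are assumptions.

```python
# Hedged sketch of a linear transductive-SVM objective for gradient descent:
# hinge loss on labeled points (y in {-1, +1}) plus a symmetric hinge
# max(0, 1 - |f(x)|) that keeps the boundary out of dense unlabeled regions.
import torch

def tsvm_objective(w, b, x_lab, y_lab, x_unlab, c_lab=1.0, c_unlab=0.1):
    f_lab = x_lab @ w + b
    f_unlab = x_unlab @ w + b
    hinge_lab = torch.clamp(1 - y_lab * f_lab, min=0).sum()
    hinge_unlab = torch.clamp(1 - f_unlab.abs(), min=0).sum()
    return 0.5 * w.dot(w) + c_lab * hinge_lab + c_unlab * hinge_unlab
```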

Deep learning via semi-supervised embedding

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer or on each layer of the architecture.

Semi-Supervised Learning

  • Xiaojin Zhu
  • Computer Science
    Encyclopedia of Machine Learning
  • 2010
This entry focuses on learning a predictor that predicts future test data better than the predictor learned from the labeled training data alone.

Classification using discriminative restricted Boltzmann machines

This paper presents an evaluation of different learning algorithms for RBMs that aim at introducing a discriminative component to RBM training and improving their performance as classifiers, and demonstrates how discriminative RBMs can also be successfully employed in a semi-supervised setting.

Representation Learning: A Review and New Perspectives

Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.

Improving neural networks by preventing co-adaptation of feature detectors

When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case.
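The mechanism is what is now called dropout, and a minimal "inverted dropout" sketch makes it concrete: each hidden unit is kept with probability p during training and activations are rescaled by 1/p so the test-time network needs no adjustment. (Frameworks provide this, e.g. torch.nn.Dropout; the function below is only for illustration.)

```python
# Minimal inverted-dropout sketch: keep each activation with probability
# p at training time and rescale by 1/p; at test time pass through.
import torch

def dropout(h, p=0.5, training=True):
    if not training:
        return h
    mask = (torch.rand_like(h) < p).float()    # keep-mask, ~p fraction kept
    return h * mask / p
```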

Why Does Unsupervised Pre-training Help Deep Learning?

The results suggest that unsupervised pre-training guides the learning towards basins of attraction of minima that support better generalization from the training data set; the evidence from these results supports a regularization explanation for the effect of pre-training.

The Manifold Tangent Classifier

A representation learning algorithm that can be stacked to yield a deep architecture is presented, and it is shown to build a topological atlas of charts, each chart characterized by the principal singular vectors of the Jacobian of a representation mapping.