Corpus ID: 2220097

Semi-Supervised Learning with GANs: Revisiting Manifold Regularization

@article{Lecouat2018SemiSupervisedLW,
  title={Semi-Supervised Learning with GANs: Revisiting Manifold Regularization},
  author={Bruno Lecouat and Chuan-Sheng Foo and Houssam Zenati and Vijay Ramaseshan Chandrasekhar},
  journal={ArXiv},
  year={2018},
  volume={abs/1805.08957}
}
GANs are powerful generative models that are able to model the manifold of natural images. We leverage this property to perform manifold regularization by approximating the Laplacian norm using a Monte Carlo approximation that is easily computed with the GAN. When incorporated into the feature-matching GAN of Improved GAN, we achieve state-of-the-art results for GAN-based semi-supervised learning on the CIFAR-10 dataset, with a method that is significantly easier to implement than competing…
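
The core idea lends itself to a compact implementation: sample a latent code, take a small random step in latent space, decode both codes with the generator, and penalize any change in the classifier's output between the two nearby points on the generator's learned image manifold. The PyTorch sketch below illustrates such a Monte Carlo approximation; the function and argument names, the step size eps, and the use of a squared L2 distance between softmax outputs are illustrative assumptions, not necessarily the paper's exact formulation.

import torch
import torch.nn.functional as F

def manifold_regularizer(generator, classifier, batch_size, latent_dim, eps=1e-2, device="cpu"):
    # Sample latent codes and a small random perturbation direction in latent space.
    z = torch.randn(batch_size, latent_dim, device=device)
    delta = torch.randn_like(z)
    delta = eps * delta / (delta.norm(dim=1, keepdim=True) + 1e-12)

    # Decode two nearby points on the generator's approximation of the image manifold.
    x = generator(z)
    x_perturbed = generator(z + delta)

    # Monte Carlo estimate of a manifold smoothness penalty: the classifier's
    # predictions should vary little along directions that stay on the manifold.
    p = F.softmax(classifier(x), dim=1)
    p_perturbed = F.softmax(classifier(x_perturbed), dim=1)
    return ((p - p_perturbed) ** 2).sum(dim=1).mean()

In training, a term of this form would be added with a small weight to the supervised cross-entropy and feature-matching objectives of the Improved GAN discriminator/classifier.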

Citations

MR-GAN: Manifold Regularized Generative Adversarial Networks
TLDR: It is theoretically proved that adding this regularization term to any class of GANs, including DCGAN and Wasserstein GAN, leads to improved performance in terms of generalization, existence of equilibrium, and stability.
Semi-supervised learning based on generative adversarial network: a comparison between good GAN and bad GAN approach
TLDR: This paper performs a comprehensive comparison of GAN-based semi-supervised learning methods on different benchmark datasets to demonstrate their differing properties in image generation and their sensitivity to the amount of labeled data provided.
Discriminative Regularization with Conditional Generative Adversarial Nets for Semi-Supervised Learning
TLDR: A novel discriminative regularization method for semi-supervised learning with conditional generative adversarial nets (CGANs) is proposed; it encourages the classifier to be invariant to local perturbations on the sub-manifold of each cluster and to produce distinct classification outputs for data points in different clusters.
Tangent-Normal Adversarial Regularization
TLDR: This work proposes a novel regularization called tangent-normal adversarial regularization, composed of two parts that jointly enforce smoothness along two different directions crucial for semi-supervised learning.
Tangent-Normal Adversarial Regularization for Semi-Supervised Learning
TLDR: Tangent-normal adversarial regularization (TNAR) is proposed as an extension of VAT that takes the data manifold into consideration, and it outperforms other state-of-the-art methods for semi-supervised learning.
Semi-supervised learning using adversarial training with good and bad samples
TLDR: This work presents unified-GAN (UGAN), a novel framework that enables a classifier to simultaneously learn from both good and bad samples through adversarial training; it achieves competitive performance among other GAN-based models and is robust to variations in the amount of labeled data used for training.
Semi-supervised self-growing generative adversarial networks for image recognition
TLDR: This paper proposes a semi-supervised self-growing generative adversarial network (SGGAN), and addresses two main problems in label inference: how to measure the confidence of the unlabeled data and how to generalize the classifier, via the generative framework and a novel convolution-block-transformation technique.
CCS-GAN: a semi-supervised generative adversarial network for image classification
TLDR: A novel CCS-GAN model for semi-supervised image classification, which aims to improve its classification ability by utilizing the cluster structure of unlabeled images and 'bad' generated images, and adopts an enhanced feature matching approach to encourage its generator to produce adversarial images from the low-density regions of the real distribution.
Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy
TLDR: This work proposes a general regularizer called Patch-level Neighborhood Interpolation that fully exploits the relationship between samples and can be applied to enhance two popular regularization strategies, namely Virtual Adversarial Training (VAT) and MixUp, yielding their neighborhood versions.
Adversarial Partial Multi-Label Learning
TLDR: A novel adversarial learning model, PML-GAN, is proposed under a generalized encoder-decoder framework for partial multi-label learning; it enhances the correspondence of input features with the output labels through a bi-directional mapping.

References

Showing 1-10 of 21 references
Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference
TLDR: This work proposes enhancements over existing methods for learning the inverse mapping (i.e., the encoder) that greatly improve the semantic similarity of the reconstructed sample to the input sample, and provides insights into how fake examples influence the semi-supervised learning procedure.
Good Semi-supervised Learning That Requires a Bad GAN
TLDR: It is shown theoretically that, given the discriminator objective, good semi-supervised learning indeed requires a bad generator; a novel formulation based on this analysis substantially improves over feature-matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.
Manifold Regularization: A Geometric Framework for Learning from Labeled and Unlabeled Examples
TLDR: A semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner is proposed, and properties of reproducing kernel Hilbert spaces are used to prove new Representer theorems that provide a theoretical basis for the algorithms.
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic; it presents ImageNet samples of unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Unsupervised and Semi-supervised Learning with Categorical Generative Adversarial Networks
In this paper we present a method for learning a discriminative classifier from unlabeled or partially labeled data. Our approach is based on an objective function that trades off mutual information…
The Manifold Tangent Classifier
TLDR: A representation learning algorithm can be stacked to yield a deep architecture, and it is shown how it builds a topological atlas of charts, each chart being characterized by the principal singular vectors of the Jacobian of a representation mapping.
Temporal Ensembling for Semi-Supervised Learning
TLDR: Self-ensembling is introduced; the ensemble prediction can be expected to be a better predictor for the unknown labels than the output of the network at the most recent training epoch, and can thus be used as a target for training.
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR: A new regularization method based on virtual adversarial loss, a measure of local smoothness of the conditional label distribution given the input, which achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10 (a minimal sketch of this loss is given after the reference list).
Generative Visual Manipulation on the Natural Image Manifold
TLDR: This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.
Semi-supervised Learning with Ladder Networks
TLDR: This work extends the Ladder network proposed by Valpola by combining the model with supervision, and shows that the resulting model reaches state-of-the-art performance in semi-supervised MNIST and CIFAR-10 classification, in addition to permutation-invariant MNIST classification with all labels.
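
The virtual adversarial training (VAT) reference above underlies several of the methods listed here, so a rough PyTorch sketch of its loss may help: it fixes the model's current prediction at x, estimates the most prediction-changing small perturbation by power iteration, and penalizes the divergence between predictions at x and at the perturbed input. The names model, xi, eps, and n_power, and their default values, are illustrative assumptions rather than the paper's settings.

import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Normalize each sample's perturbation to unit L2 norm.
    flat = d.flatten(1)
    norm = flat.norm(dim=1).clamp_min(1e-12)
    return (flat / norm.unsqueeze(1)).view_as(d)

def virtual_adversarial_loss(model, x, xi=1e-6, eps=2.5, n_power=1):
    # Prediction at x, held fixed as the "virtual label" distribution.
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)

    # Power iteration: estimate the direction that most changes the prediction.
    d = _l2_normalize(torch.randn_like(x))
    for _ in range(n_power):
        d.requires_grad_(True)
        log_p_hat = F.log_softmax(model(x + xi * d), dim=1)
        dist = F.kl_div(log_p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(dist, d)[0]
        d = _l2_normalize(grad.detach())

    # Penalize the divergence between predictions at x and at the adversarially
    # perturbed input x + eps * d.
    log_p_hat = F.log_softmax(model(x + eps * d), dim=1)
    return F.kl_div(log_p_hat, p, reduction="batchmean")

Because the loss uses the model's own predictions as targets, it can be computed on unlabeled data, which is what makes it useful for semi-supervised learning.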