Corpus ID: 155100140

Semi-supervised learning based on generative adversarial network: a comparison between good GAN and bad GAN approach

@article{Li2019SemisupervisedLB,
  title={Semi-supervised learning based on generative adversarial network: a comparison between good GAN and bad GAN approach},
  author={Wenyuan Li and Zichen Wang and Jiayun Li and Jennifer Polson and W. Speier and Corey W. Arnold},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.06484}
}
Recently, semi-supervised learning methods based on generative adversarial networks (GANs) have received much attention. [...] By comprehensively comparing these two methods, we hope to shed light on the future of GAN-based semi-supervised learning.
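
Both the good GAN and the bad GAN line of work build on the (K+1)-class discriminator objective introduced in "Improved Techniques for Training GANs" (see the references below). A minimal PyTorch sketch of that shared loss, in which the last logit plays the role of the fake class (the function name and tensor layout are illustrative assumptions):

    import torch
    import torch.nn.functional as F

    def ssl_discriminator_loss(logits_lab, labels, logits_unl, logits_fake):
        # logits_*: (B, K+1) discriminator outputs; the last logit is the "fake" class.
        # labels:   (B,) ground-truth class indices in [0, K) for the labelled batch.

        # Supervised term: ordinary cross-entropy over the K real classes.
        loss_sup = F.cross_entropy(logits_lab[:, :-1], labels)

        # Unlabelled real images should be assigned to *some* real class ...
        log_p_unl = F.log_softmax(logits_unl, dim=1)
        loss_unl_real = -torch.logsumexp(log_p_unl[:, :-1], dim=1).mean()

        # ... and generated images should be assigned to the fake class.
        loss_unl_fake = -F.log_softmax(logits_fake, dim=1)[:, -1].mean()

        return loss_sup + loss_unl_real + loss_unl_fake
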
Citations

Semi-supervised learning using adversarial training with good and bad samples
TLDR
This work presents unified-GAN (UGAN), a novel framework that enables a classifier to learn simultaneously from both good and bad samples through adversarial training; UGAN achieves performance competitive with other GAN-based models and is robust to variations in the amount of labeled data used for training.
Revisiting ImprovedGAN with Metric Learning for Semi-Supervised Learning
TLDR
A variant of ImprovedGAN, called Intensified ImprovedGAN, is proposed in which cluster separation is improved by two techniques: (a) scaling up the unsupervised discriminator loss and (b) enlarging the generated batch size, which produces better class-wise cluster separation and, hence, better generalization.
Semi-supervised Adversarial Active Learning on Attributed Graphs
TLDR
A SEmi-supervised Adversarial active Learning (SEAL) framework on attributed graphs is proposed, which fully leverages the representation power of deep neural networks and devises a novel AL query strategy in an adversarial way.
Good Semi-supervised VAE Requires Tighter Evidence Lower Bound
TLDR
This work proposes One-stage Semi-suPervised Optimal Transport VAE (OSPOT-VAE), a one-stage deep generative model that theoretically unifies the generation and classification losses in one ELBO framework and achieves a tighter ELBO by applying an optimal transport scheme to the distribution of latent variables.
CGT: Consistency Guided Training in Semi-Supervised Learning
TLDR
CGT, a framework for semi-supervised learning (SSL), unifies multiple image-based augmentation techniques and introduces a generalization of the Mixup operator that regularizes a larger region of the input space.
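
The Mixup operator that CGT generalizes trains on convex combinations of input pairs and their targets; a minimal PyTorch sketch of standard Mixup (not CGT's generalized variant), with the Beta parameter alpha as the usual hyperparameter:

    import torch

    def mixup(x, y_onehot, alpha=1.0):
        # x: (B, ...) inputs; y_onehot: (B, K) one-hot or soft targets.
        lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient in [0, 1]
        perm = torch.randperm(x.size(0))                        # random partner for each sample
        x_mix = lam * x + (1.0 - lam) * x[perm]
        y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
        return x_mix, y_mix
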
Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey
TLDR
Emphasis is placed on semi-supervised deep learning models designed to deal with a distribution mismatch between the labelled and unlabelled datasets, in order to overcome the high data demand of traditional deep learning pipelines under real-world usage settings.
Semi-Supervised GANs with Complementary Generator Pair for Retinopathy Screening
TLDR
Experimental results on three integrated public iChallenge datasets show that the proposed GBGANs can fully utilize the available fundus images to identify retinopathy with little labeling cost.
SEAL: Semisupervised Adversarial Active Learning on Attributed Graphs
TLDR
A SEmisupervised Adversarial active Learning (SEAL) framework on attributed graphs is proposed, which fully leverages the representation power of deep neural networks and devises a novel AL query strategy for node classification in an adversarial way.
Voxel-wise Adversarial Semi-supervised Learning for Medical Image Segmentation
TLDR
This paper introduces a novel adversarial learning-based semi-supervised segmentation method that effectively embeds both local and global features from multiple hidden layers and learns context relations between multiple classes.
...

References

Showing 1-10 of 25 references
Good Semi-supervised Learning That Requires a Bad GAN
TLDR
Theoretically, it is shown that, given the discriminator objective, good semi-supervised learning indeed requires a bad generator; a novel formulation based on this analysis is derived that substantially improves over feature matching GANs, obtaining state-of-the-art results on multiple benchmark datasets.
Semi-supervised Learning with GANs: Manifold Invariance with Improved Inference
TLDR
This work proposes enhancements over existing methods for learning the inverse mapping (i.e., the encoder) that greatly improve the semantic similarity of the reconstructed sample to the input sample, and provides insights into how fake examples influence the semi-supervised learning procedure.
Triangle Generative Adversarial Networks
A Triangle Generative Adversarial Network (Δ-GAN) is developed for semi-supervised cross-domain joint distribution matching, where the training data consists of samples from each domain and supervision of domain correspondence is provided by only a few paired samples.
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the proposed methods enable the model to learn recognizable features of ImageNet classes.
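
One of the techniques introduced in this paper, feature matching, trains the generator to match the discriminator's average intermediate-layer statistics on real data rather than directly maximizing the discriminator output. A hedged sketch, assuming the discriminator exposes an intermediate feature extractor:

    import torch

    def feature_matching_loss(disc_features, x_real, x_fake):
        # disc_features: callable mapping a batch to an intermediate feature map of the discriminator.
        f_real = disc_features(x_real).mean(dim=0).detach()  # target statistics; no gradient into D
        f_fake = disc_features(x_fake).mean(dim=0)           # generator is trained through this path
        return torch.mean((f_real - f_fake) ** 2)
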
Triple Generative Adversarial Nets
TLDR
As a unified model, Triple-GAN simultaneously achieves state-of-the-art classification results among deep generative models, disentangles the classes and styles of the input, and transfers smoothly in the data space via class-conditional interpolation in the latent space.
Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning
TLDR
This paper proposes a new regularization method based on a virtual adversarial loss, a new measure of local smoothness of the conditional label distribution given the input, which achieves state-of-the-art performance on semi-supervised learning tasks on SVHN and CIFAR-10.
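
The virtual adversarial loss penalizes how much the prediction changes under the input perturbation that changes it most, estimated with a power iteration. A compact PyTorch sketch of one power-iteration step; the hyperparameter values and helper names are illustrative assumptions:

    import torch
    import torch.nn.functional as F

    def _l2_normalize(d):
        # Normalize each sample's perturbation to unit L2 norm.
        flat = d.flatten(start_dim=1)
        norm = flat.norm(dim=1).view(-1, *([1] * (d.dim() - 1)))
        return d / (norm + 1e-12)

    def vat_loss(model, x, xi=1e-6, eps=8.0):
        with torch.no_grad():
            p = F.softmax(model(x), dim=1)          # current prediction p(y|x), held fixed

        # One power iteration to estimate the most sensitive perturbation direction.
        d = _l2_normalize(torch.randn_like(x)).requires_grad_(True)
        kl = F.kl_div(F.log_softmax(model(x + xi * d), dim=1), p, reduction="batchmean")
        grad = torch.autograd.grad(kl, d)[0]

        # Virtual adversarial perturbation and the resulting smoothness penalty.
        r_adv = eps * _l2_normalize(grad).detach()
        return F.kl_div(F.log_softmax(model(x + r_adv), dim=1), p, reduction="batchmean")
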
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
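
The architectural constraints in question include replacing pooling with strided (transposed) convolutions, using batch normalization, removing fully connected hidden layers, and using ReLU activations with a Tanh output. A minimal generator illustrating them; the layer widths and 32x32 output size are assumptions for illustration:

    import torch.nn as nn

    class DCGANGenerator(nn.Module):
        # Maps a (B, z_dim, 1, 1) noise tensor to a (B, channels, 32, 32) image.
        def __init__(self, z_dim=100, channels=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.ConvTranspose2d(z_dim, 256, 4, 1, 0, bias=False),   # 1x1   -> 4x4
                nn.BatchNorm2d(256), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),     # 4x4   -> 8x8
                nn.BatchNorm2d(128), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),      # 8x8   -> 16x16
                nn.BatchNorm2d(64), nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, channels, 4, 2, 1, bias=False), # 16x16 -> 32x32
                nn.Tanh(),
            )

        def forward(self, z):
            return self.net(z)
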
Semi-Supervised Learning with GANs: Revisiting Manifold Regularization
TLDR
This work achieves state-of-the-art results for GAN-based semi-supervised learning on the CIFAR-10 dataset, with a method that is significantly easier to implement than competing methods.
Conditional Generative Adversarial Nets
TLDR
This paper introduces the conditional version of generative adversarial nets, which can be constructed by simply feeding the conditioning data, y, to both the generator and the discriminator, and shows that the model can generate MNIST digits conditioned on class labels.
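
The conditioning mechanism described above amounts to concatenating the label y with the generator's noise input (and, symmetrically, with the discriminator's input). A minimal sketch of the generator side; the MLP sizes and MNIST-style 784-dimensional output are illustrative assumptions:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalGenerator(nn.Module):
        def __init__(self, z_dim=100, num_classes=10, out_dim=784):
            super().__init__()
            self.num_classes = num_classes
            self.net = nn.Sequential(
                nn.Linear(z_dim + num_classes, 256), nn.ReLU(inplace=True),
                nn.Linear(256, out_dim), nn.Tanh(),
            )

        def forward(self, z, y):
            # y: (B,) integer class labels, fed to the network as a one-hot vector.
            y_onehot = F.one_hot(y, self.num_classes).float()
            return self.net(torch.cat([z, y_onehot], dim=1))

The discriminator would be conditioned the same way, by concatenating the one-hot label with its (flattened) input.
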
Mutual exclusivity loss for semi-supervised deep learning
TLDR
An unsupervised regularization term is proposed that explicitly forces the classifier's predictions for multiple classes to be mutually exclusive and effectively guides the decision boundary to lie in the low-density space between the manifolds corresponding to different classes of data.
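
One way to write a mutual-exclusivity-style penalty on unlabelled predictions is sketched below; it is minimized when exactly one class probability is 1 and the rest are 0. This is an illustrative formulation of the idea, not necessarily the paper's exact loss:

    import torch

    def mutual_exclusivity_penalty(probs):
        # probs: (B, K) per-class predictions in [0, 1] for unlabelled samples.
        B, K = probs.shape
        score = probs.new_zeros(B)
        for k in range(K):
            others = torch.cat([probs[:, :k], probs[:, k + 1:]], dim=1)
            score = score + probs[:, k] * torch.prod(1.0 - others, dim=1)
        # score is largest for one-hot-like predictions, so we minimize its negative.
        return -score.mean()
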
...