Corpus ID: 12803511

Conditional Generative Adversarial Nets

@article{mirza2014conditional,
  title={Conditional Generative Adversarial Nets},
  author={Mehdi Mirza and Simon Osindero},
  journal={arXiv preprint arXiv:1411.1784},
  year={2014}
}
Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an …
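The conditioning mechanism the abstract describes — concatenating the condition y onto the inputs of both networks — can be sketched as follows (a minimal illustration, not code from the paper; the dimensions and function names are assumptions):

```python
import random

NUM_CLASSES = 10  # e.g. MNIST digit classes


def one_hot(label, num_classes=NUM_CLASSES):
    """Encode a class label y as a one-hot vector."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v


def generator_input(z, label):
    """Condition the generator: concatenate the noise prior z with the code for y."""
    return z + one_hot(label)


def discriminator_input(x, label):
    """Condition the discriminator: concatenate the sample x with the same code for y."""
    return x + one_hot(label)


z = [random.gauss(0.0, 1.0) for _ in range(100)]  # 100-dim noise vector
x = [0.0] * 784                                   # flattened 28x28 image
print(len(generator_input(z, 3)))       # 110
print(len(discriminator_input(x, 3)))   # 794
```

In a real model these concatenated vectors would feed the first layers of G and D; the key point is only that both networks see y alongside their usual input.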
Regularized Generative Adversarial Network
A framework for generating samples from a probability distribution that differs from the probability distribution of the training set is proposed and it is shown that it can be used to learn some pre-specified notions in topology (basic topology properties).
Conditional generative adversarial nets for convolutional face generation
We apply an extension of generative adversarial networks (GANs) [8] to a conditional setting. In the GAN framework, a “generator” network is tasked with fooling a “discriminator” network into …
Conditional Generative Recurrent Adversarial Networks
A conditional recurrent GAN is proposed that outperforms the other two models and can generate state-of-the-art images.
Ways of Conditioning Generative Adversarial Networks
This work proposes novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10 and introduces two models: an information retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features derived from the spatial cross product of an image and a condition vector.
Adversarial Out-domain Examples for Generative Models
It is shown how a malicious user can force a pre-trained generator to reproduce arbitrary data instances by feeding it suitable adversarial inputs and how these adversarial latent vectors can be shaped so as to be statistically indistinguishable from the set of genuine inputs.
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to …
Semi-Supervised Learning with Generative Adversarial Networks
This work extends Generative Adversarial Networks to the semi-supervised context by forcing the discriminator network to output class labels and shows that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.
Can adversarial training learn image captioning?
This is the first attempt that uses no pre-training or reinforcement methods to create an adversarial architecture related to the conditional GAN (cGAN) that generates sentences according to a given image (also called image captioning).
Image Generation from Captions Using Dual-Loss Generative Adversarial Networks
Deep Convolutional Generative Adversarial Networks (DCGANs) have become popular in recent months for their ability to effectively capture image distributions and generate realistic images. Recent …
Generative adversarial networks (GANs) transform latent vectors into visually plausible images. It is generally thought that the original GAN formulation gives no out-of-the-box method to reverse the …

References
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. …
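The two-player game this reference describes pits D's objective directly against G's. A minimal per-sample sketch of the original minimax losses (illustrative only; the function name and the use of per-sample rather than batched expectations are assumptions):

```python
import math


def minimax_losses(d_real, d_fake):
    """Per-sample losses for the original GAN minimax game.

    d_real: D's estimated probability that a real sample is real.
    d_fake: D's estimated probability that a generated sample is real.
    D maximizes log D(x) + log(1 - D(G(z))), i.e. minimizes d_loss below;
    G minimizes log(1 - D(G(z))).
    """
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    g_loss = math.log(1.0 - d_fake)
    return d_loss, g_loss


# At the game's equilibrium D outputs 1/2 everywhere:
d_loss, g_loss = minimax_losses(0.5, 0.5)
print(round(d_loss, 4), round(g_loss, 4))  # 1.3863 -0.6931
```

Note that d_loss at equilibrium equals 2 log 2, the known optimal value of the discriminator's objective.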
Deep Generative Stochastic Networks Trainable by Backprop
Theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders are provided, and along the way an interesting justification for dependency networks and generalized pseudolikelihood is obtained.
Maxout Networks
A simple new model called maxout is defined, designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model-averaging technique.
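A maxout unit takes the maximum over several affine pieces of its input, so it can represent piecewise-linear convex functions. A toy sketch (illustrative, not the paper's implementation):

```python
def maxout(x, pieces):
    """Maxout unit: the max over k affine pieces w_i . x + b_i.

    x: input vector; pieces: list of (weight_vector, bias) pairs.
    """
    return max(sum(wj * xj for wj, xj in zip(w, x)) + b for w, b in pieces)


# Two pieces, (w=[1], b=0) and (w=[-1], b=0), recover |x| exactly:
abs_pieces = [([1.0], 0.0), ([-1.0], 0.0)]
print(maxout([-3.0], abs_pieces))  # 3.0
print(maxout([2.0], abs_pieces))   # 2.0
```

With more pieces the unit can approximate arbitrary convex activations, which is what lets a maxout network learn its own activation shape.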
DeViSE: A Deep Visual-Semantic Embedding Model
This paper presents a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text and shows that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training.
Multi-Prediction Deep Boltzmann Machines
The multi-prediction deep Boltzmann machine does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.
Multimodal Neural Language Models
This work introduces two multimodal neural language models: models of natural language that can be conditioned on other modalities, and image-text modelling that can generate sentence descriptions for images without the use of templates, structured prediction, and/or syntactic trees.
Multimodal learning with deep Boltzmann machines
A Deep Boltzmann Machine is proposed for learning a generative model of multimodal data and it is shown that the model can be used to create fused representations by combining features across modalities, which are useful for classification and information retrieval.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Improving neural networks by preventing co-adaptation of feature detectors
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case.
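The random-omission idea can be sketched in a few lines. This uses the common "inverted" variant that rescales survivors at training time; the paper itself instead halves the weights at test time, so the scaling choice here is an assumption for the sake of a self-contained example:

```python
import random


def dropout(activations, p=0.5, rng=random):
    """Zero each unit independently with probability p at training time.

    Survivors are scaled by 1/(1-p) ("inverted dropout") so that no
    rescaling is needed when the full network is used at test time.
    """
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]


random.seed(0)
out = dropout([1.0] * 8, p=0.5)
print(out)  # each unit is either dropped (0.0) or kept and scaled to 2.0
```

With p = 0 the layer is the identity, and in expectation the output of each unit matches its input, which is what makes the train/test behaviour consistent.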