# Conditional Generative Adversarial Nets

@article{Mirza2014ConditionalGA, title={Conditional Generative Adversarial Nets}, author={Mehdi Mirza and Simon Osindero}, journal={ArXiv}, year={2014}, volume={abs/1411.1784} }

Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.
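The conditioning described in the abstract amounts to concatenating the condition y (e.g. a one-hot class label) with the generator's noise input z and with the discriminator's data input x, so that both networks see y. A minimal NumPy sketch of that forward pass follows; the layer sizes, weight initialization, and function names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

NOISE_DIM, LABEL_DIM, DATA_DIM, HIDDEN = 100, 10, 784, 128

def one_hot(label, num_classes=LABEL_DIM):
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

# Illustrative generator: maps the concatenation [z; y] to a fake sample.
W_g1 = rng.normal(0.0, 0.02, (NOISE_DIM + LABEL_DIM, HIDDEN))
W_g2 = rng.normal(0.0, 0.02, (HIDDEN, DATA_DIM))

def generator(z, y):
    h = np.maximum(0.0, np.concatenate([z, y]) @ W_g1)  # ReLU hidden layer
    return np.tanh(h @ W_g2)                            # sample scaled to [-1, 1]

# Illustrative discriminator: maps [x; y] to an estimate of P(x is real | y).
W_d1 = rng.normal(0.0, 0.02, (DATA_DIM + LABEL_DIM, HIDDEN))
W_d2 = rng.normal(0.0, 0.02, (HIDDEN, 1))

def discriminator(x, y):
    h = np.maximum(0.0, np.concatenate([x, y]) @ W_d1)  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W_d2)))            # sigmoid probability

z = rng.normal(size=NOISE_DIM)
y = one_hot(3)                  # condition on digit class "3"
fake = generator(z, y)          # 784-dim fake sample, e.g. a flattened 28x28 image
score = discriminator(fake, y)  # probability the sample is real, given y
```

Training would then alternate the usual GAN updates, with both players receiving y alongside their inputs; this sketch only shows the conditioning wiring, not an optimization loop.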

#### Supplemental Content

GitHub repo (via Papers with Code): variants of GANs implemented in TensorFlow 2, including GAN, DCGAN, LSGAN, WGAN, WGAN-GP, DRAGAN, etc.


#### 5,371 Citations

Regularized Generative Adversarial Network

- Computer Science
- ArXiv
- 2021

A framework for generating samples from a probability distribution that differs from the probability distribution of the training set is proposed, and it is shown that the framework can be used to learn pre-specified basic topological properties.

Conditional generative adversarial nets for convolutional face generation

- 2015

We apply an extension of generative adversarial networks (GANs) [8] to a conditional setting. In the GAN framework, a “generator” network is tasked with fooling a “discriminator” network into…

Conditional Generative Recurrent Adversarial Networks

- Computer Science
- Smart Intelligent Computing and Applications
- 2018

A conditional recurrent GAN is proposed that outperforms two baseline models and can be used to generate state-of-the-art images.

Ways of Conditioning Generative Adversarial Networks

- Computer Science, Mathematics
- ArXiv
- 2016

This work proposes novel methods of conditioning generative adversarial networks (GANs) that achieve state-of-the-art results on MNIST and CIFAR-10. It introduces two models: an information-retrieving model that extracts conditional information from the samples, and a spatial bilinear pooling model that forms bilinear features from the spatial cross product of an image and a condition vector.

Adversarial Out-domain Examples for Generative Models

- Computer Science
- 2019 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW)
- 2019

It is shown how a malicious user can force a pre-trained generator to reproduce arbitrary data instances by feeding it suitable adversarial inputs, and how these adversarial latent vectors can be shaped so as to be statistically indistinguishable from the set of genuine inputs.

Context-Conditional Generative Adversarial Networks

- 2016

We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss. Images with random patches removed are presented to a generator whose task is to…

Semi-Supervised Learning with Generative Adversarial Networks

- Mathematics, Computer Science
- ArXiv
- 2016

This work extends Generative Adversarial Networks to the semi-supervised context by forcing the discriminator network to output class labels, and shows that this method can be used to create a more data-efficient classifier and that it allows for generating higher quality samples than a regular GAN.

Can adversarial training learn image captioning?

- Computer Science
- ViGIL@NeurIPS
- 2019

This is the first attempt to build, without pre-training or reinforcement methods, an adversarial architecture related to the conditional GAN (cGAN) that generates sentences describing a given image (image captioning).

Image Generation from Captions Using Dual-Loss Generative Adversarial Networks

- 2016

Deep Convolutional Generative Adversarial Networks (DCGANs) have become popular in recent months for their ability to effectively capture image distributions and generate realistic images. Recent…

Generative Adversarial Networks

- 2017

Generative adversarial networks (GANs) transform latent vectors into visually plausible images. It is generally thought that the original GAN formulation gives no out-of-the-box method to reverse the…

#### References

Showing 1–10 of 21 references

Generative Adversarial Nets

- Computer Science
- NIPS
- 2014

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.

Deep Generative Stochastic Networks Trainable by Backprop

- Mathematics, Computer Science
- ICML
- 2014

Theorems that generalize recent work on the probabilistic interpretation of denoising autoencoders are provided, and an interesting justification for dependency networks and generalized pseudolikelihood is obtained along the way.

Maxout Networks

- Computer Science, Mathematics
- ICML
- 2013

A simple new model called maxout is defined, designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model-averaging technique.

DeViSE: A Deep Visual-Semantic Embedding Model

- Computer Science
- NIPS
- 2013

This paper presents a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data and semantic information gleaned from unannotated text, and shows that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training.

Multi-Prediction Deep Boltzmann Machines

- Computer Science
- NIPS
- 2013

The multi-prediction deep Boltzmann machine does not require greedy layerwise pretraining, and outperforms the standard DBM at classification, classification with missing inputs, and mean field prediction tasks.

Multimodal Neural Language Models

- Computer Science
- ICML
- 2014

This work introduces two multimodal neural language models, i.e. models of natural language that can be conditioned on other modalities, for image-text modelling; these can generate sentence descriptions for images without the use of templates, structured prediction, or syntactic trees.

Multimodal learning with deep Boltzmann machines

- Computer Science
- J. Mach. Learn. Res.
- 2012

A Deep Boltzmann Machine is proposed for learning a generative model of multimodal data, and it is shown that the model can be used to create fused representations by combining features across modalities, which are useful for classification and information retrieval.

Going deeper with convolutions

- Computer Science
- 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2015

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).

ImageNet classification with deep convolutional neural networks

- Computer Science
- Commun. ACM
- 2012

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes, and employed a recently developed regularization method called "dropout" that proved to be very effective.

Improving neural networks by preventing co-adaptation of feature detectors

- Computer Science
- ArXiv
- 2012

When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case.