Corpus ID: 234469899

Directional GAN: A Novel Conditioning Strategy for Generative Networks

Shradha Agrawal, Shankar Venkitachalam, Dhanya Raghu, Deepak Pai
Image content is a predominant factor in marketing campaigns, websites and banners. Today, marketers and designers spend considerable time and money generating such professional-quality content. We take a step towards simplifying this process using Generative Adversarial Networks (GANs). We propose a simple and novel conditioning strategy that allows images conditioned on given semantic attributes to be generated by a generator trained for an unconditional image generation task. Our…
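The conditioning idea in the abstract, steering a pretrained unconditional generator by shifting its latent code along a semantic direction, can be sketched as follows. The 512-dimensional latent, the random direction vector, and the `condition_latent` helper are illustrative assumptions, not the paper's implementation; in practice the direction would be derived from the latent space itself, e.g. the weight vector of a linear classifier fit on latent codes labelled with the attribute.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical semantic direction (assumption: in practice this would be
# learned from labelled latents, not drawn at random). Normalized so that
# alpha has a consistent meaning as "distance moved along the direction".
direction = rng.normal(size=512)
direction /= np.linalg.norm(direction)

def condition_latent(z, direction, alpha):
    """Shift an unconditional latent z along a semantic direction.

    alpha > 0 pushes the sample toward the attribute; the generator itself
    is untouched, so any pretrained unconditional GAN could consume z.
    """
    return z + alpha * direction

z = rng.normal(size=512)
z_cond = condition_latent(z, direction, alpha=3.0)

# The shift moves the latent by exactly |alpha| along the unit direction.
print(round(float(np.linalg.norm(z_cond - z)), 3))  # prints 3.0
```

The appeal of this scheme is that conditioning becomes a post-hoc latent-space operation: no retraining or architectural change to the generator is needed.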



A Style-Based Generator Architecture for Generative Adversarial Networks
An alternative generator architecture for generative adversarial networks is proposed, borrowing from the style transfer literature, that improves the state of the art in traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Precise Recovery of Latent Vectors from Generative Adversarial Networks
Stochastic clipping, a simple gradient-based technique, is introduced that recovers latent vector pre-images precisely 100% of the time and appears to recover unique encodings for unseen images.
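The recovery procedure above can be sketched with a toy linear stand-in for the generator: gradient descent on the reconstruction error, where any latent coordinate that leaves the valid range is resampled uniformly at random rather than pinned to the boundary (the "stochastic" part of stochastic clipping). The linear map `W`, the dimensions, and the learning rate are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a generator: a fixed linear map from latent to "image".
# A real GAN generator is nonlinear; the linear case keeps the sketch exact.
W = rng.normal(size=(64, 16))

def G(z):
    return W @ z

z_true = rng.uniform(-0.5, 0.5, size=16)  # latents assumed to lie in [-1, 1]
x = G(z_true)                             # observed image to invert

# Gradient descent on ||G(z) - x||^2 with stochastic clipping: coordinates
# that escape [-1, 1] are reassigned fresh uniform samples instead of being
# clipped to the boundary, which avoids getting stuck on the box faces.
z = rng.uniform(-1, 1, size=16)
lr = 1e-3
for _ in range(10_000):
    z -= lr * 2 * W.T @ (G(z) - x)        # gradient of the squared error
    escaped = np.abs(z) > 1.0
    z[escaped] = rng.uniform(-1, 1, size=escaped.sum())

print("recovered:", bool(np.allclose(z, z_true, atol=1e-3)))
```

Because the toy system is overdetermined and consistent, driving the reconstruction error to zero pins down the unique latent pre-image, mirroring the exact-recovery claim.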
Interpreting the Latent Space of GANs for Semantic Face Editing
This work proposes a novel framework, called InterFaceGAN, for semantic face editing by interpreting the latent semantics learned by GANs, and finds that the latent code of well-trained generative models actually learns a disentangled representation after linear transformations.
Adversarial Feature Learning
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Conditional Generative Adversarial Nets
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
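A minimal sketch of that conditioning mechanism, assuming a 100-dimensional noise vector, 10 classes, and untrained random matrices as placeholders for the two networks; only the concatenation of the label y into both networks' inputs reflects the cited method.

```python
import numpy as np

rng = np.random.default_rng(2)

def one_hot(label, num_classes=10):
    """Encode a class label as a one-hot vector."""
    y = np.zeros(num_classes)
    y[label] = 1.0
    return y

# Both networks see the label: the generator maps [z; y] to a sample and
# the discriminator scores [x; y]. The random matrix below is an untrained
# placeholder standing in for the generator network.
z = rng.normal(size=100)   # noise vector
y = one_hot(7)             # class condition, e.g. the digit "7"

gen_input = np.concatenate([z, y])                         # shape (110,)
fake_x = np.tanh(rng.normal(size=(784, 110)) @ gen_input)  # "28x28 image"
disc_input = np.concatenate([fake_x, y])                   # shape (794,)

print(gen_input.shape, disc_input.shape)  # prints (110,) (794,)
```

Contrast this with the directional approach of the main paper: here the condition must be wired into both networks at training time, whereas a latent-direction shift conditions an already-trained unconditional generator.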
Image-to-Image Translation with Conditional Adversarial Networks
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems, and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
A new training methodology for generative adversarial networks is described, starting from a low resolution and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
A Variational U-Net for Conditional Appearance and Shape Generation
A conditional U-Net is presented for shape-guided image generation, conditioned on the output of a variational autoencoder for appearance, trained end-to-end on images without requiring samples of the same object with varying pose or appearance.
Pose Guided Fashion Image Synthesis Using Deep Generative Model
This paper presents a novel deep generative model that transfers an image of a person from a given pose to a new pose while keeping the fashion items consistent, and demonstrates the results through rigorous experiments on two data sets.