Corpus ID: 236171203

Semantic Text-to-Face GAN -ST^2FG

Manan Oza, Sukalpa Chanda, David S. Doermann
Faces generated using generative adversarial networks (GANs) have reached unprecedented realism. These faces, also known as "Deep Fakes", appear as realistic photographs with very little pixel-level distortion. While some work has enabled the training of models that generate specific properties of the subject, generating a facial image from a natural language description has not been fully explored. For security and criminal identification, the ability to provide a GAN…
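The text-to-face setting described above is typically framed as a conditional GAN: a sentence embedding of the description is concatenated with a noise vector and fed to the generator. A minimal sketch of that input construction, with `embed_text` as a hypothetical stand-in for a learned sentence encoder (the names and dimensions are assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_text(description, dim=128):
    """Hypothetical text encoder: maps a description to a fixed-size
    embedding. A real system would use a learned sentence encoder;
    this deterministic hash-seeded vector is only a stand-in."""
    seed = abs(hash(description)) % (2**32)
    return np.random.default_rng(seed).standard_normal(dim)

def generator_input(description, noise_dim=100):
    """Standard conditional-GAN conditioning: concatenate the text
    embedding with a random noise vector to form the generator input."""
    e = embed_text(description)
    z = rng.standard_normal(noise_dim)
    return np.concatenate([e, z])

x = generator_input("a young woman with blond hair and glasses")
```

The generator then upsamples this conditioned latent vector to an image, while the discriminator scores image/text pairs for both realism and description match.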



Text2FaceGAN: Face Generation from Fine Grained Textual Descriptions
This paper generates captions for images in the CelebA dataset by creating an algorithm to automatically convert a list of attributes to a set of captions, and models the highly multi-modal problem of text-to-face generation as learning the conditional distribution of faces in the same latent space.
Conditional Image Generation and Manipulation for User-Specified Content
This work proposes a single pipeline for text-to-image generation and manipulation, introducing textStyleGAN, a text-conditioned model that can manipulate facial images across a wide range of attributes.
Generative Adversarial Text to Image Synthesis
A novel deep architecture and GAN formulation are developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.
StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
This paper proposes Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions and introduces a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold.
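StackGAN's Conditioning Augmentation replaces the fixed text embedding with a sample from a Gaussian whose mean and variance are computed from that embedding, which smooths the conditioning manifold. A minimal sketch using the reparameterization trick — the random linear maps below are stand-ins for learned layers, and the dimensions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def conditioning_augmentation(text_embedding, cond_dim=128):
    """Sketch of Conditioning Augmentation: instead of conditioning on
    the text embedding e directly, sample c ~ N(mu(e), diag(sigma(e)^2))."""
    d = text_embedding.shape[0]
    W_mu = rng.standard_normal((cond_dim, d)) * 0.01     # stand-in for a learned layer
    W_sigma = rng.standard_normal((cond_dim, d)) * 0.01  # stand-in for a learned layer
    mu = W_mu @ text_embedding
    log_sigma = W_sigma @ text_embedding
    eps = rng.standard_normal(cond_dim)
    # Reparameterization: differentiable sample from the conditioning Gaussian.
    return mu + np.exp(log_sigma) * eps

e = rng.standard_normal(1024)  # e.g. a sentence embedding of the description
c = conditioning_augmentation(e)
```

During training, a KL-divergence term against the standard normal regularizes this conditioning distribution.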
TediGAN: Text-Guided Diverse Face Image Generation and Manipulation
This work proposes TediGAN, a novel framework for multi-modal image generation and manipulation with textual descriptions using a control mechanism based on style-mixing, and introduces Multi-Modal CelebA-HQ, a large-scale dataset consisting of real face images with corresponding semantic segmentation maps, sketches, and textual descriptions.
A Style-Based Generator Architecture for Generative Adversarial Networks
An alternative generator architecture for generative adversarial networks is proposed, borrowing from the style transfer literature, that improves the state of the art in traditional distribution quality metrics, leads to demonstrably better interpolation properties, and better disentangles the latent factors of variation.
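The style-based generator injects the latent code through adaptive instance normalization (AdaIN): each feature map is normalized, then rescaled and shifted with per-channel style parameters derived from the latent code. A minimal NumPy sketch of the AdaIN operation (the array shapes are illustrative assumptions):

```python
import numpy as np

def adain(features, style_scale, style_bias, eps=1e-5):
    """Adaptive instance normalization: normalize each feature map over
    its spatial dimensions, then apply per-channel style scale and bias."""
    # features: (channels, height, width)
    mean = features.mean(axis=(1, 2), keepdims=True)
    std = features.std(axis=(1, 2), keepdims=True)
    normalized = (features - mean) / (std + eps)
    return style_scale[:, None, None] * normalized + style_bias[:, None, None]

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))
styled = adain(feats, style_scale=np.full(8, 2.0), style_bias=np.zeros(8))
```

After AdaIN, each channel's statistics are set by the style parameters alone, which is what lets the latent code control appearance at every resolution of the generator.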
AttGAN: Facial Attribute Editing by Only Changing What You Want
The proposed method is extended for attribute style manipulation in an unsupervised manner and outperforms the state-of-the-art on realistic attribute editing with other facial details well preserved.
Text-Adaptive Generative Adversarial Networks: Manipulating Images with Natural Language
The text-adaptive generative adversarial network (TAGAN) is proposed to generate semantically manipulated images while preserving text-irrelevant contents of the original image.
Lightweight Generative Adversarial Networks for Text-Guided Image Manipulation
A new word-level discriminator is proposed, which provides the generator with fine-grained training feedback at the word level, to facilitate training a lightweight generator that has a small number of parameters but can still correctly focus on specific visual attributes of an image and edit them without affecting other contents that are not described in the text.
Image-to-Image Translation with Text Guidance
The goal of this paper is to embed controllable factors, i.e., natural language descriptions, into image-to-image translation with generative adversarial networks, which allows text descriptions to…