Corpus ID: 195750692

Adversarial Pixel-Level Generation of Semantic Images

Emanuele Ghelfi, Paolo Galeone, Michele De Simoni, Federico Di Mattia
Generative Adversarial Networks (GANs) have obtained extraordinary success in the generation of realistic images, a domain where a lower pixel-level accuracy is acceptable. [...] The experimental evaluation shows that our architecture outperforms standard ones from both a quantitative and a qualitative point of view in many semantic image generation tasks.
Decomposing Image Generation into Layout Prediction and Conditional Synthesis
This article investigates splitting the optimisation of generative adversarial networks into two parts, by first generating a semantic segmentation mask from noise and then translating that segmentation mask into an image.
Mixing Real and Synthetic Data to Enhance Neural Network Training - A Review of Current Approaches
This work examines different techniques available in the literature to improve training results without acquiring additional annotated real-world data by applying annotation-preserving transformations to existing data or by synthetically creating more data.
GAN Theft Auto: Autonomous Texturing of Procedurally Generated Interactive Cities
This work explores the possibility of producing photo-realistic and stylized videos from semantically segmented image sequences drawn from a procedurally generated interactive 3D environment and uses the GTA V image dataset to showcase its feasibility on large interactive scenes for 3D animation and game texturing.


Image-to-Image Translation with Conditional Adversarial Networks
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
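The pix2pix-style generator objective combines the conditional adversarial term with an L1 reconstruction term weighted by a coefficient λ. A minimal sketch of that combined loss in plain Python (illustrative names; real implementations operate on tensors, and the paper uses λ = 100):

```python
import math

def l1_loss(generated, target):
    """Mean absolute error between two equal-length pixel lists."""
    return sum(abs(g - t) for g, t in zip(generated, target)) / len(generated)

def generator_loss(disc_score_on_fake, generated, target, lam=100.0):
    """Non-saturating adversarial loss plus a weighted L1 term.

    disc_score_on_fake: discriminator's estimated probability (in (0, 1))
    that the generated image is real.
    lam: weight of the L1 reconstruction term (illustrative default).
    """
    adversarial = -math.log(disc_score_on_fake)
    return adversarial + lam * l1_loss(generated, target)

# A perfect reconstruction that the discriminator rates p=0.9 "real"
loss = generator_loss(0.9, [0.2, 0.5], [0.2, 0.5])
```

The L1 term pushes the generator toward the ground-truth output, while the adversarial term keeps results sharp rather than blurry averages.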
Semantic Segmentation using Adversarial Networks
An adversarial training approach is proposed to train semantic segmentation models that can detect and correct higher-order inconsistencies between ground-truth segmentation maps and the ones produced by the segmentation net.
Improved Techniques for Training GANs
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
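The layer fade-in at the heart of this progressive training schedule can be sketched as a linear blend between the upsampled output of the previous resolution and the new layer's output, with the blend weight α ramped from 0 to 1. A simplified illustration in plain Python (names are illustrative):

```python
def fade_in(old_output, new_output, alpha):
    """Blend the (already upsampled) old-resolution output with the
    new layer's output. alpha ramps linearly from 0 to 1 during
    training so the new layer is introduced smoothly.
    """
    return [(1.0 - alpha) * o + alpha * n
            for o, n in zip(old_output, new_output)]

# alpha = 0 -> purely the old path; alpha = 1 -> purely the new layer
mid = fade_in([0.0, 0.0], [1.0, 1.0], 0.5)  # -> [0.5, 0.5]
```

The same blending is applied symmetrically in the discriminator, so neither network is shocked by a suddenly inserted layer.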
Semantically Decomposing the Latent Spaces of Generative Adversarial Networks
A new algorithm for training generative adversarial networks that jointly learns latent codes for both identities and observations that can generate diverse images of the same subject and traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose.
Adversarial Feature Learning
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
Conditional Generative Adversarial Nets
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
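Feeding the condition y to the generator can be as simple as concatenating a one-hot label onto the latent noise vector before it enters the network. A minimal sketch of that input construction (illustrative names, plain Python):

```python
import random

def one_hot(label, num_classes):
    """Encode an integer class label as a one-hot vector."""
    v = [0.0] * num_classes
    v[label] = 1.0
    return v

def conditional_generator_input(latent_dim, label, num_classes, rng=random):
    """Build a conditional-GAN generator input: Gaussian noise z
    concatenated with the one-hot encoding of the class label y."""
    z = [rng.gauss(0.0, 1.0) for _ in range(latent_dim)]
    return z + one_hot(label, num_classes)

# e.g. a 100-dim latent conditioned on digit class 3 of 10 -> 110 dims
x = conditional_generator_input(100, 3, 10)
```

The discriminator is conditioned the same way, receiving y alongside the (real or generated) sample.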
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Towards Principled Methods for Training Generative Adversarial Networks
The goal of this paper is to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks, and performs targeted experiments to substantiate the theoretical analysis and verify assumptions, illustrate claims, and quantify the phenomena.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
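The FID between two Gaussians is ‖μ_r − μ_g‖² + Tr(Σ_r + Σ_g − 2(Σ_r Σ_g)^{1/2}). For the special case of diagonal covariances this reduces to a per-dimension sum that can be computed in plain Python (a sketch only; the full metric requires a matrix square root over Inception-feature covariances):

```python
import math

def fid_diagonal(mu_r, mu_g, var_r, var_g):
    """Fréchet distance between two Gaussians with diagonal covariances:
    sum over dimensions of (mu_r - mu_g)^2 + var_r + var_g
    - 2*sqrt(var_r * var_g)."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu_r, mu_g))
    cov_term = sum(vr + vg - 2.0 * math.sqrt(vr * vg)
                   for vr, vg in zip(var_r, var_g))
    return mean_term + cov_term

# Identical distributions are at distance 0
d0 = fid_diagonal([0.0, 1.0], [0.0, 1.0], [1.0, 2.0], [1.0, 2.0])  # -> 0.0
```

In practice μ and Σ are estimated from Inception-network activations of real and generated image batches, and the TTUR itself simply assigns the discriminator a larger learning rate than the generator.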