High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

@inproceedings{Wang2018HighResolutionIS,
  title={High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs},
  author={Ting-Chun Wang and Ming-Yu Liu and Jun-Yan Zhu and Andrew Tao and Jan Kautz and Bryan Catanzaro},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2018},
  pages={8798-8807}
}
We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). […] Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category.
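
Below is a minimal sketch, in PyTorch, of how the conditional input described above might be assembled: the semantic label map is one-hot encoded and, when instance segmentation is available, concatenated with an instance boundary map so the generator can distinguish adjacent objects of the same class. Function names and tensor shapes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def instance_boundaries(instance_map):
    """instance_map: (B, 1, H, W) integer instance ids -> (B, 1, H, W) boundary map."""
    e = torch.zeros_like(instance_map, dtype=torch.bool)
    diff_x = instance_map[:, :, :, 1:] != instance_map[:, :, :, :-1]
    diff_y = instance_map[:, :, 1:, :] != instance_map[:, :, :-1, :]
    e[:, :, :, 1:] |= diff_x    # mark both sides of every horizontal id change
    e[:, :, :, :-1] |= diff_x
    e[:, :, 1:, :] |= diff_y    # mark both sides of every vertical id change
    e[:, :, :-1, :] |= diff_y
    return e.float()

def build_generator_input(label_map, instance_map, num_classes):
    """label_map: (B, H, W) int64 class ids -> (B, num_classes + 1, H, W) input tensor."""
    one_hot = F.one_hot(label_map, num_classes).permute(0, 3, 1, 2).float()
    return torch.cat([one_hot, instance_boundaries(instance_map)], dim=1)
```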

Mask Embedding in conditional GAN for Guided Synthesis of High Resolution Images

TLDR
This work proposes a mask embedding mechanism that allows for a more efficient initial feature projection in the generator and can generate realistic, high-resolution facial images up to 512×512 under mask guidance.
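
As a rough illustration of the mask-embedding idea summarised above, the sketch below encodes the mask to a compact vector, concatenates it with the latent code, and projects the result to the generator's initial low-resolution feature map. All layer sizes and module names are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MaskEmbeddingInput(nn.Module):
    def __init__(self, z_dim=128, mask_embed_dim=256, base_ch=512):
        super().__init__()
        self.mask_encoder = nn.Sequential(              # downsample the mask to a vector
            nn.Conv2d(1, 32, 4, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, mask_embed_dim), nn.ReLU(),
        )
        self.project = nn.Linear(z_dim + mask_embed_dim, base_ch * 4 * 4)

    def forward(self, z, mask):
        """z: (B, z_dim); mask: (B, 1, H, W). Returns (B, base_ch, 4, 4) initial features."""
        e = self.mask_encoder(mask)
        h = self.project(torch.cat([z, e], dim=1))
        return h.view(z.size(0), -1, 4, 4)
```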

SEMANTIC IMAGE SYNTHESIS

TLDR
A novel, simplified GAN model that needs only adversarial supervision to achieve high-quality results, synthesizing diverse, high-quality images using an adversarial loss alone, without any external supervision.

Semantic Image Synthesis with Trilateral Generative Adversarial Networks

TLDR
A novel trilateral Generative Adversarial Network (trilateral GAN) is proposed, which synthesizes 512×1024 images with high fidelity using fewer parameters than other recent methods, and improves the semantic consistency and feature matching losses by using features before activation.
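
The feature matching term referenced above can be sketched as an L1 distance between intermediate discriminator features (taken before the activation) for real and generated images. The `disc_features` interface returning a list of per-layer feature maps is an assumed convention, not the paper's code.

```python
import torch.nn.functional as F

def feature_matching_loss(disc_features, real_img, fake_img):
    """disc_features(img) -> list of intermediate (pre-activation) feature maps."""
    real_feats = [f.detach() for f in disc_features(real_img)]  # targets carry no gradient
    fake_feats = disc_features(fake_img)
    losses = [F.l1_loss(f, r) for f, r in zip(fake_feats, real_feats)]
    return sum(losses) / len(losses)
```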

Semantic Image Synthesis Manipulation for Stability Problem using Generative Adversarial Networks: A Survey

TLDR
This survey discusses the Generative Adversarial Network (GAN) model, chosen for its ability to synthesize good samples directly, and reviews the different methods proposed to improve GAN results, with the aim of producing better and more diverse samples.

Semanticgan: Generative Adversarial Networks For Semantic Image To Photo-Realistic Image Translation

TLDR
A SemanticGAN is proposed to synthesize high-resolution images with fine details and realistic textures from the semantic label map, together with a Semantic Information Preserved Loss (SIPL) that maintains semantic information during generation via a segmentation model.

Navigating the GAN Parameter Space for Semantic Image Editing

TLDR
This paper significantly expands the range of visual effects achievable with the state-of-the-art models, like StyleGAN2, and discovers interpretable directions in the space of the generator parameters, which are an excellent source of non-trivial semantic manipulations.

You Only Need Adversarial Supervision for Semantic Image Synthesis

TLDR
This work proposes a novel, simplified GAN model that needs only adversarial supervision to achieve high-quality results, and redesigns the discriminator as a semantic segmentation network, directly using the given semantic label maps as the ground truth for training.
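
A simplified sketch of the segmentation-style discriminator objective described above: the discriminator predicts a per-pixel class among the N semantic classes plus one extra "fake" class, so the label maps themselves supervise real images. Tensor shapes and the omission of any class-balancing weights are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def d_loss_segmentation(disc, real_img, fake_img, label_map, num_classes):
    """label_map: (B, H, W) int64 ids; disc returns (B, num_classes + 1, H, W) logits."""
    fake_class = num_classes                              # index of the extra "fake" class
    loss_real = F.cross_entropy(disc(real_img), label_map)
    fake_target = torch.full_like(label_map, fake_class)  # every pixel of a fake is "fake"
    loss_fake = F.cross_entropy(disc(fake_img.detach()), fake_target)
    return loss_real + loss_fake

def g_loss_segmentation(disc, fake_img, label_map):
    # The generator wants fake pixels to be classified as their semantic class.
    return F.cross_entropy(disc(fake_img), label_map)
```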

Collaging Class-specific GANs for Semantic Image Synthesis

TLDR
Experiments show that this new approach to high-resolution semantic image synthesis can generate high-quality, high-resolution images while offering flexible object-level control through class-specific generators.

MAGECally invert images for realistic editing

TLDR
This work proposes a novel instance-optimization based inversion method, which specifically aims to maximize the semantic information of the latent vector while producing an accurate reconstruction, and introduces the iMAGe-latEnt Consistency loss (“MAGEC”), which allows supervision in the latent space and encourages editability of the resulting latent vector.
...

References


Semantic Image Synthesis via Adversarial Learning

TLDR
An end-to-end neural architecture that leverages adversarial learning to automatically learn implicit loss functions, which are optimized to fulfill two requirements: being realistic and matching the target text description.

StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks

TLDR
This paper proposes Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions, and introduces a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold.
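
A hedged sketch of the Conditioning Augmentation idea mentioned above: rather than conditioning on the raw text embedding, a conditioning vector is sampled from a Gaussian whose mean and log-variance are predicted from the embedding, with a KL penalty toward the standard normal. Layer sizes here are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    def __init__(self, embed_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, cond_dim * 2)   # predicts mean and log-variance

    def forward(self, text_embedding):
        mu, logvar = self.fc(text_embedding).chunk(2, dim=1)
        c = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterisation trick
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return c, kl   # kl is added to the generator loss as a regulariser
```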

Scribbler: Controlling Deep Image Synthesis with Sketch and Color

TLDR
A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes is proposed to generate realistic cars, bedrooms, or faces, and a sketch-based image synthesis system is demonstrated that allows users to scribble over the sketch to indicate the preferred color for objects.

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

TLDR
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented: to the authors' knowledge, the first framework capable of inferring photo-realistic natural images at 4x upscaling factors, together with a perceptual loss function that combines an adversarial loss and a content loss.
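
The perceptual loss described above can be sketched as a VGG-feature content term plus an adversarial term. The specific VGG layer, the 1e-3 weighting, and the torchvision weights API used below are assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen VGG feature extractor (assumes torchvision >= 0.13 weights API).
_vgg = vgg19(weights="DEFAULT").features[:36].eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

def sr_generator_loss(sr, hr, disc, adv_weight=1e-3):
    """sr, hr: (B, 3, H, W) images in the VGG input range; disc returns real/fake logits."""
    content = F.mse_loss(_vgg(sr), _vgg(hr))               # content (feature-space) loss
    logits = disc(sr)
    adversarial = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    return content + adv_weight * adversarial
```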

Photographic Image Synthesis with Cascaded Refinement Networks

  • Qifeng Chen, V. Koltun
  • Computer Science
    2017 IEEE International Conference on Computer Vision (ICCV)
  • 2017
TLDR
It is shown that photographic images can be synthesized from semantic layouts by a single feedforward network with appropriate structure, trained end-to-end with a direct regression objective.

Image-to-Image Translation with Conditional Adversarial Networks

TLDR
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
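
A minimal sketch of the conditional objective summarised above, in the pix2pix style: the discriminator sees the input concatenated with either the real or the generated output, and the generator adds an L1 reconstruction term. Function names and the L1 weighting are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def g_step(gen, disc, x, y, l1_weight=100.0):
    """x: input (e.g. label map), y: target photo, both (B, C, H, W)."""
    fake = gen(x)
    logits = disc(torch.cat([x, fake], dim=1))          # condition D on the input
    adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    rec = F.l1_loss(fake, y)                            # L1 reconstruction term
    return adv + l1_weight * rec

def d_step(gen, disc, x, y):
    real_logits = disc(torch.cat([x, y], dim=1))
    fake_logits = disc(torch.cat([x, gen(x).detach()], dim=1))
    loss_real = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
    loss_fake = F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    return 0.5 * (loss_real + loss_fake)
```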

Generative Visual Manipulation on the Natural Image Manifold

TLDR
This paper proposes to learn the natural image manifold directly from data using a generative adversarial network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.

Progressive Growing of GANs for Improved Quality, Stability, and Variation

TLDR
A new training methodology for generative adversarial networks is described: starting from a low resolution and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
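
The progressive-growing recipe above hinges on fading new layers in smoothly. The sketch below blends the upsampled output of the previous resolution with the new block's output as alpha ramps from 0 to 1; module names are illustrative, not the paper's architecture.

```python
import torch.nn.functional as F

def faded_forward(prev_block_out, new_block, to_rgb_old, to_rgb_new, alpha):
    """prev_block_out: features at the old resolution; alpha in [0, 1]."""
    # Path 1: old features upsampled and converted to RGB (already trained, stable).
    skip = F.interpolate(to_rgb_old(prev_block_out), scale_factor=2, mode="nearest")
    # Path 2: the newly added block operating at the higher resolution.
    new = to_rgb_new(new_block(F.interpolate(prev_block_out, scale_factor=2, mode="nearest")))
    # Linear fade-in: alpha=0 uses only the old path, alpha=1 only the new block.
    return (1.0 - alpha) * skip + alpha * new
```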

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

TLDR
This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

TLDR
A generative parametric model capable of producing high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
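
Coarse-to-fine sampling in a Laplacian-pyramid GAN, as summarised above, can be sketched as follows: a coarse generator produces the lowest resolution, and each subsequent generator predicts a residual conditioned on the upsampled result from the level below. The generator interfaces are assumptions, not the paper's code.

```python
import torch.nn.functional as F

def lapgan_sample(coarse_gen, residual_gens, z_list):
    """coarse_gen(z) -> lowest-resolution image; residual_gens[i](z, upsampled) -> residual."""
    img = coarse_gen(z_list[0])
    for gen, z in zip(residual_gens, z_list[1:]):
        up = F.interpolate(img, scale_factor=2, mode="bilinear", align_corners=False)
        img = up + gen(z, up)          # add the predicted high-frequency detail
    return img
```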
...