Image-to-Image Translation with Conditional Adversarial Networks

  • Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
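As a concrete sketch of what "not hand-engineering the loss" means here: the pix2pix generator is trained against a conditional-GAN term plus an L1 reconstruction term. The toy numpy stand-in below uses arrays in place of images and a scalar discriminator score; the λ = 100 weighting follows the paper, but the function names are illustrative, not the authors' code.

```python
import numpy as np

def pix2pix_g_loss(d_fake, fake, target, lam=100.0):
    """Toy generator objective: conditional-GAN term + lam * L1 term."""
    cgan_term = -np.mean(np.log(d_fake))       # fool the conditional discriminator
    l1_term = np.mean(np.abs(fake - target))   # stay close to the ground-truth image
    return cgan_term + lam * l1_term

# A perfect output with a fully fooled discriminator gives zero loss.
d_fake = np.array([1.0])            # discriminator says "real" with certainty
fake = target = np.zeros((4, 4))    # generated image equals the target
print(pix2pix_g_loss(d_fake, fake, target))
```

The L1 term keeps outputs near the ground truth at low frequencies, while the learned adversarial term handles the high-frequency realism that hand-designed losses tend to blur.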

Unsupervised Image-to-Image Translation with Generative Adversarial Networks

This work develops a two-step (unsupervised) learning method to translate images between different domains using unlabeled images, without specifying any correspondence between them, thereby avoiding the cost of acquiring labeled data.

Perceptual Adversarial Networks With a Feature Pyramid for Image Translation

This paper decomposes the image into a set of images via a feature pyramid, elaborates separate loss components for images in specific frequency bands, and finds that the overall perceptual adversarial loss captures not only semantic features but also appearance.

Content and Colour Distillation for Learning Image Translations with the Spatial Profile Loss

This paper proposes a novel method of computing the loss directly between the source and target images that enables proper distillation of shape/content and colour/style, and shows that this is useful in typical image-to-image translations.

Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
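The cycle consistency idea can be sketched in a few lines of numpy. Here, toy 1-D "images" and invertible linear maps stand in for the CNN generators G: X→Y and F: Y→X (hypothetical stand-ins, not the paper's models); the λ = 10 weighting matches the paper.

```python
import numpy as np

def l1(a, b):
    # mean absolute error, the norm used for the cycle term
    return np.mean(np.abs(a - b))

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """Cycle loss: lam * (||F(G(x)) - x||_1 + ||G(F(y)) - y||_1)."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))

# Toy "generators": an invertible linear map and its exact inverse.
G = lambda x: 2.0 * x + 1.0      # domain X -> Y
F = lambda y: (y - 1.0) / 2.0    # domain Y -> X

x = np.array([0.5, -1.0, 2.0])
y = G(x)
print(cycle_consistency_loss(G, F, x, y))  # 0.0: F perfectly inverts G
```

The loss is zero only when the two translations invert each other, which is what rules out degenerate mappings in the absence of paired examples.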

Edge-guided Adversarial Network Based on Contrastive Learning for Image-to-Image Translation

The proposed method extracts edge features from both the output and target domains, minimizes their difference using a framework based on patchwise contrastive learning, and outperforms existing approaches on unpaired image-to-image translation across datasets.

Image-to-Image Translation Using Generative Adversarial Network

  • Kusam Lata, M. Dave, K. Nishanth
  • Computer Science
    2019 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA)
  • 2019
Conditional GANs are used to translate images based upon given conditions, and the model's performance is analyzed through hyper-parameter tuning.

Unpaired Image-to-Image Translation using Adversarial Consistency Loss

This paper proposes a novel adversarial-consistency loss for image-to-image translation that does not require the translated image to map back to a specific source image; instead, it encourages translated images to retain important features of the source images, overcoming the drawbacks of cycle-consistency loss.

In2I: Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks

This paper introduces a Generative Adversarial Network (GAN) based framework along with a multi-modal generator structure and a new loss term, latent consistency loss, and shows that leveraging multiple inputs generally improves the visual quality of the translated images.

Equivariant Adversarial Network for Image-to-image Translation

A trainable transformer is used, which explicitly allows the spatial manipulation of data during training; this differentiable module can be augmented into the convolutional layers of the generative model, allowing the generated distributions to be altered freely for image-to-image translation.

An Input-Perceptual Reconstruction Adversarial Network for Paired Image-to-Image Conversion

This work proposes a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as an all-purpose framework for imperfect paired image-to-image conversion problems and demonstrates, through the experimental results, that this method significantly outperforms the current state-of-the-art techniques.

Generative Image Modeling Using Style and Structure Adversarial Networks

This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.

Generative Visual Manipulation on the Natural Image Manifold

This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.

Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts

A novel deep conditional generative adversarial network architecture that takes its strength from the semantic layout and scene attributes integrated as conditioning variables and is able to generate realistic outdoor scene images under different conditions, e.g. day-night, sunny-foggy, with clear object boundaries.

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented: to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images at 4x upscaling factors, trained with a perceptual loss function that consists of an adversarial loss and a content loss.
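To make the loss composition concrete, here is a minimal numpy sketch of the perceptual-loss structure. Raw pixels stand in for the VGG feature maps the paper uses for the content term, and the `weight` parameter is a hypothetical stand-in for the paper's adversarial weighting; this is a sketch of the structure, not the authors' implementation.

```python
import numpy as np

def content_loss(sr, hr):
    # MSE between feature representations (pixels here, as a stand-in
    # for the VGG feature maps used in the paper)
    return np.mean((sr - hr) ** 2)

def adversarial_loss(d_sr):
    # -log D(SR): rewards super-resolved images the discriminator rates as real
    return -np.mean(np.log(d_sr))

def perceptual_loss(sr, hr, d_sr, weight=1e-3):
    return content_loss(sr, hr) + weight * adversarial_loss(d_sr)

hr = np.ones((2, 2))
print(perceptual_loss(hr, hr, np.array([1.0])))  # perfect SR, confident D -> 0.0
```

The content term anchors the output to the reference, while the adversarial term pushes solutions toward the natural image manifold rather than the over-smooth MSE optimum.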

Learning What and Where to Draw

This work proposes a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location, and shows high-quality 128 x 128 image synthesis on the Caltech-UCSD Birds dataset.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model capable of producing high quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
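The coarse-to-fine idea rests on the Laplacian pyramid itself, which can be sketched in numpy. Naive 2x average downsampling and nearest-neighbour upsampling stand in for the blur/subsample filters (an assumption for brevity); in LAPGAN, a conditional GAN generates each bandpass residual rather than computing it from a real image as done here.

```python
import numpy as np

def downsample(img):
    # naive 2x downsample by block averaging (stand-in for blur + subsample)
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # nearest-neighbour 2x upsample
    return np.kron(img, np.ones((2, 2)))

def laplacian_pyramid(img, levels):
    pyr, cur = [], img
    for _ in range(levels):
        small = downsample(cur)
        pyr.append(cur - upsample(small))  # bandpass residual at this scale
        cur = small
    pyr.append(cur)                        # coarsest (low-pass) level
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for residual in reversed(pyr[:-1]):
        cur = upsample(cur) + residual     # coarse-to-fine: add residuals back
    return cur

img = np.arange(16.0).reshape(4, 4)
pyr = laplacian_pyramid(img, 2)
print(np.allclose(reconstruct(pyr), img))  # True: the pyramid is invertible
```

Because reconstruction just adds residuals back level by level, a generator only has to model one bandpass at a time, which is what makes the cascade of small convolutional networks tractable.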

Generative Adversarial Text to Image Synthesis

A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.

Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.

Conditional generative adversarial nets for convolutional face generation

An extension of generative adversarial networks (GANs) to a conditional setting is applied, the likelihood of real-world faces under the generative model is evaluated, and methods to deterministically control face attributes are examined.

Improved Techniques for Training GANs

This work focuses on two applications of GANs, semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that its methods enable the model to learn recognizable features of ImageNet classes.