Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

Jun-Yan Zhu, Taesung Park, Phillip Isola, Alexei A. Efros. 2017 IEEE International Conference on Computer Vision (ICCV).
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style…
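The cycle consistency loss described above can be sketched numerically. This is a minimal illustration, not the paper's implementation: G and F are hypothetical invertible linear maps standing in for the deep generators, so the cycle property F(G(x)) ≈ x can be checked exactly.

```python
import numpy as np

def G(x):
    # Hypothetical forward translator X -> Y (stand-in for a deep generator).
    return 2.0 * x + 1.0

def F(y):
    # Hypothetical inverse translator Y -> X, here the exact inverse of G.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x_batch, y_batch):
    """L_cyc = E[||F(G(x)) - x||_1] + E[||G(F(y)) - y||_1]."""
    forward_cycle = np.abs(F(G(x_batch)) - x_batch).mean()
    backward_cycle = np.abs(G(F(y_batch)) - y_batch).mean()
    return forward_cycle + backward_cycle

x = np.random.rand(4, 8)  # a toy batch of "images" from domain X
y = np.random.rand(4, 8)  # a toy batch of "images" from domain Y
loss = cycle_consistency_loss(x, y)
print(loss)  # ~0 here, since F exactly inverts G
```

In training, this term is added to the adversarial losses of both generators, penalizing mappings whose round trip does not return the original image.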

Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks

The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two different domains.

Learning a Self-inverse Network for Unpaired Bidirectional Image-to-image Translation

This work proposes a self-inverse network learning approach for unpaired image-to-image translation, built on top of CycleGAN: it learns a self-inverse function simply by augmenting the training samples, switching inputs and outputs during training.
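The input/output-switching augmentation described above can be sketched as follows; the sample names are purely illustrative.

```python
def augment_with_swaps(samples):
    """Double the training set by adding each (input, output) pair reversed,
    pushing a single network toward learning a self-inverse mapping."""
    return samples + [(out, inp) for (inp, out) in samples]

pairs = [("horse_01", "zebra_01"), ("horse_02", "zebra_02")]
augmented = augment_with_swaps(pairs)
print(augmented)
# [('horse_01', 'zebra_01'), ('horse_02', 'zebra_02'),
#  ('zebra_01', 'horse_01'), ('zebra_02', 'horse_02')]
```

With both directions in the training set, one generator can serve as its own inverse, removing the need for a second network F.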

Unpaired Image-to-Image Translation using Adversarial Consistency Loss

This paper proposes a novel adversarial-consistency loss for image-to-image translation that does not require a translated image to be mapped back to a specific source image; instead, it encourages translated images to retain important features of the source images, overcoming the drawbacks of the cycle-consistency loss.

One-to-one Mapping for Unpaired Image-to-image Translation

This work proposes a self-inverse network learning approach for unpaired image-to-image translation that reaches the state-of-the-art result on the Cityscapes benchmark for unpaired label-to-photo translation.

Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation

A general framework for unsupervised image-to-image translation across multiple domains, which can translate images from domain X to any other domain without requiring direct training between the two domains involved in the translation.

Few-Shot Unsupervised Image-to-Image Translation

This model achieves few-shot generation capability by coupling an adversarial training scheme with a novel network design; its effectiveness is verified through extensive experimental validation and comparisons to several baseline methods on benchmark datasets.

Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation

A Dual Generator Generative Adversarial Network (G²GAN) is proposed: a robust and scalable approach that performs unpaired image-to-image translation for multiple domains using only dual generators within a single model.

In2I: Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks

This paper introduces a Generative Adversarial Network (GAN) based framework with a multi-modal generator structure and a new loss term, the latent consistency loss, and shows that leveraging multiple inputs generally improves the visual quality of the translated images.

An Optimized Architecture for Unpaired Image-to-Image Translation

Mohan Nikam. International Conference on Advanced Computing Networking and Informatics, 2018.
A new neural network architecture is introduced which learns only the translation from domain A to B, eliminating the need for a reverse mapping and significantly reducing training time.

Cross-Domain Interpolation for Unpaired Image-to-Image Translation

This paper proposes a guided learning model based on manifold bi-directional translation loops between the source and target domains; the Wasserstein distance between their probability distributions guides the learning process and reduces the error induced by the loops.

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than a conditional GAN trained on fully labeled data.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Unsupervised Image-to-Image Translation Networks

This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.

Learning from Simulated and Unsupervised Images through Adversarial Training

This work develops a method for Simulated+Unsupervised (S+U) learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.

Generative Image Modeling Using Style and Structure Adversarial Networks

This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.

Coupled Generative Adversarial Networks

This work proposes the coupled generative adversarial network (CoGAN), which learns a joint distribution without any tuple of corresponding images, applies it to several joint-distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.

Unsupervised Cross-Domain Image Generation

The Domain Transfer Network (DTN) is presented, which employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
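The compound loss above can be sketched as a weighted sum of its three components. Everything here is illustrative: f, G, and the weights alpha and beta are hypothetical stand-ins, not the paper's actual networks or hyperparameters.

```python
import numpy as np

def l2(a, b):
    # Mean squared distance between two arrays.
    return float(np.mean((a - b) ** 2))

# Toy stand-ins: f is a fixed feature extractor, G the generator.
f = lambda x: x[:, :4]   # hypothetical feature map (keep first 4 dims)
G = lambda x: x * 0.9    # hypothetical generator

def dtn_generator_loss(x_src, x_tgt, gan_loss, alpha=15.0, beta=15.0):
    """Compound loss: GAN term + f-constancy (features of a source sample
    survive translation) + a regularizer pushing G toward the identity on
    target-domain samples. alpha and beta are illustrative weights."""
    f_constancy = l2(f(x_src), f(G(x_src)))
    identity_reg = l2(x_tgt, G(x_tgt))
    return gan_loss + alpha * f_constancy + beta * identity_reg
```

The identity regularizer plays a role loosely analogous to cycle consistency: it anchors the generator so that adversarial training alone cannot drift arbitrarily far from the input content.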

Learning Dense Correspondence via 3D-Guided Cycle Consistency

It is demonstrated that the end-to-end trained ConvNet supervised by cycle-consistency outperforms state-of-the-art pairwise matching methods in correspondence-related tasks.

Generative Visual Manipulation on the Natural Image Manifold

This paper proposes to learn the natural image manifold directly from data using a generative adversarial network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.