Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks

@article{Zhu2017UnpairedIT,
  title={Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks},
  author={Jun-Yan Zhu and Taesung Park and Phillip Isola and Alexei A. Efros},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={2242-2251}
}
Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. [...] Key Method: Because this mapping is highly under-constrained, we couple it with an inverse mapping F : Y → X and introduce a cycle consistency loss to push F(G(X)) ≈ X (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style…
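The cycle-consistency loss described in the abstract can be sketched in a few lines of plain Python. This is an illustrative simplification, not the paper's implementation: real CycleGAN operates on image tensors inside a GAN training loop, and the `l1` helper, toy mappings, and λ = 10 default below are assumptions for demonstration.

```python
def l1(a, b):
    """Mean absolute difference between two flat 'images' (lists of floats)."""
    return sum(abs(p - q) for p, q in zip(a, b)) / len(a)

def cycle_consistency_loss(G, F, x, y, lam=10.0):
    """Sketch of L_cyc: penalize F(G(x)) deviating from x and G(F(y))
    deviating from y, scaled by a weight lambda (assumed value here)."""
    return lam * (l1(F(G(x)), x) + l1(G(F(y)), y))
```

With a perfectly invertible toy pair of mappings (G adds a constant, F subtracts it), the loss is exactly zero; any mismatch between F∘G and the identity raises it.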
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two…
Learning a Self-inverse Network for Unpaired Bidirectional Image-to-image Translation
TLDR: This work proposes a self-inverse network learning approach for unpaired image-to-image translation, building on top of CycleGAN; it learns a self-inverse function by simply augmenting the training samples, switching inputs and outputs during training.
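The augmentation described above (switching inputs and outputs so a single network serves as its own inverse) can be sketched as follows; the function name and the list-of-pairs data representation are hypothetical:

```python
def self_inverse_augment(pairs):
    """Double the training set by adding each (input, output) pair in
    reversed order, pushing a single generator G toward G(G(x)) ≈ x."""
    return pairs + [(y, x) for (x, y) in pairs]
```

Training one generator on both orderings replaces CycleGAN's two separate mappings G and F with a single self-inverse function.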
One-to-one Mapping for Unpaired Image-to-image Translation
TLDR: This work proposes a self-inverse network learning approach for unpaired image-to-image translation that reaches the state-of-the-art result on the Cityscapes benchmark dataset for unpaired label-to-photo image translation.
Crossing-Domain Generative Adversarial Networks for Unsupervised Multi-Domain Image-to-Image Translation
TLDR: A general framework for unsupervised image-to-image translation across multiple domains, which can translate images from domain X to any other domain without requiring direct training between the two domains involved in the translation.
Few-Shot Unsupervised Image-to-Image Translation
  • Ming-Yu Liu, Xun Huang, +4 authors J. Kautz
  • Computer Science, Mathematics
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
TLDR: The model achieves its few-shot generation capability by coupling an adversarial training scheme with a novel network design; its effectiveness is verified through extensive experimental validation and comparisons to several baseline methods on benchmark datasets.
Dual Generator Generative Adversarial Networks for Multi-Domain Image-to-Image Translation
TLDR: A Dual Generator Generative Adversarial Network (G$^2$GAN) is proposed: a robust and scalable approach that performs unpaired image-to-image translation for multiple domains using only dual generators within a single model.
In2I: Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks
TLDR: This paper introduces a Generative Adversarial Network (GAN)-based framework with a multi-modal generator structure and a new loss term, the latent consistency loss, and shows that leveraging multiple inputs generally improves the visual quality of the translated images.
An Optimized Architecture for Unpaired Image-to-Image Translation
  • Mohan Nikam
  • Computer Science
  • International Conference on Advanced Computing Networking and Informatics
  • 2018
TLDR: A new neural network architecture is introduced that learns only the translation from domain A to B, eliminating the need for a reverse mapping and significantly shortening training time.
Cross-Domain Interpolation for Unpaired Image-to-Image Translation
TLDR: This paper proposes a guided learning model built on manifold bi-directional translation loops between the source and target domains; the Wasserstein distance between their probability distributions guides the learning process and reduces the error induced by the loops.
Show, Attend, and Translate: Unsupervised Image Translation With Self-Regularization and Attention
TLDR: This work constrains the problem with the assumption that the translated image must be perceptually similar to the original image while appearing to be drawn from the new domain, and proposes a simple yet effective image translation model consisting of a single generator trained with a self-regularization term and an adversarial term.

References

SHOWING 1-10 OF 83 REFERENCES
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
TLDR: A novel dual-GAN mechanism is developed that enables image translators to be trained from two sets of unlabeled images from two domains, and can achieve comparable or even slightly better results than a conditional GAN trained on fully labeled data.
Image-to-Image Translation with Conditional Adversarial Networks
TLDR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems; the approach is demonstrated to be effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Unsupervised Image-to-Image Translation Networks
TLDR: This work makes a shared-latent-space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.
Learning from Simulated and Unsupervised Images through Adversarial Training
TLDR: This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
TLDR: This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state of the art on a number of unsupervised domain adaptation scenarios by large margins.
Generative Image Modeling Using Style and Structure Adversarial Networks
TLDR: This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Coupled Generative Adversarial Networks
TLDR: This work proposes the coupled generative adversarial network (CoGAN), which can learn a joint distribution without any tuple of corresponding images, applies it to several joint distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.
Unsupervised Cross-Domain Image Generation
TLDR: The Domain Transfer Network (DTN) is presented, which employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
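The DTN generator objective described above combines three terms, which can be sketched as follows. This is a heavily simplified illustration: a binary rather than multiclass adversarial term is used, inputs are flat lists of floats instead of images, and the function names, squared-error distances, and α, β weights are all assumptions.

```python
import math

def l2(a, b):
    """Sum of squared differences between two flat vectors."""
    return sum((p - q) ** 2 for p, q in zip(a, b))

def dtn_generator_loss(G, f, D, src, tgt, alpha=1.0, beta=1.0):
    """Sketch of a DTN-style compound generator loss (hypothetical names):
    adversarial term + f-constancy on a source sample + identity
    regularizer on a target sample."""
    # Adversarial term: push the discriminator D to score G(src) as real.
    l_gan = -math.log(max(D(G(src)), 1e-12))
    # f-constancy: the representation f should be preserved under G.
    l_const = l2(f(src), f(G(src)))
    # Regularizer: G should map target-domain samples to themselves.
    l_tid = l2(tgt, G(tgt))
    return l_gan + alpha * l_const + beta * l_tid
```

With an identity generator, an identity feature extractor, and a fully fooled discriminator (D = 1), every term vanishes, which makes the role of each component easy to check in isolation.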
Learning Dense Correspondence via 3D-Guided Cycle Consistency
TLDR: It is demonstrated that an end-to-end trained ConvNet supervised by cycle consistency outperforms state-of-the-art pairwise matching methods on correspondence-related tasks.
Generative Visual Manipulation on the Natural Image Manifold
TLDR: This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.