Corpus ID: 3586592

Augmented CycleGAN: Learning Many-to-Many Mappings from Unpaired Data

Amjad Almahairi, Sai Rajeswar, Alessandro Sordoni, Philip Bachman, Aaron C. Courville
Learning inter-domain mappings from unpaired data can improve performance in structured prediction tasks, such as image segmentation, by reducing the need for paired data. CycleGAN was recently proposed for this problem, but critically assumes the underlying inter-domain mapping is approximately deterministic and one-to-one. This assumption renders the model ineffective for tasks requiring flexible, many-to-many mappings. We propose a new model, called Augmented CycleGAN, which learns many-to-many mappings between domains.
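The one-to-one assumption the abstract criticizes comes from CycleGAN's cycle-consistency loss, which penalizes any mapping pair that fails to reconstruct its input. A minimal sketch of that idea, using toy scalar mappings in place of learned generators (all names here are illustrative, not from the paper):

```python
# Toy sketch of the cycle-consistency loss underlying CycleGAN.
# F: X -> Y and G: Y -> X stand in for the learned generators; the
# loss is small only when G(F(x)) reconstructs x, i.e. only when the
# mapping is approximately deterministic and one-to-one.

def cycle_consistency_loss(F, G, xs):
    """Mean L1 reconstruction error |G(F(x)) - x| over a batch."""
    return sum(abs(G(F(x)) - x) for x in xs) / len(xs)

# Invertible toy mappings: F doubles, G halves.
F = lambda x: 2.0 * x
G = lambda y: y / 2.0

loss = cycle_consistency_loss(F, G, [1.0, -3.0, 0.5])
print(loss)  # 0.0: the cycle reconstructs every input exactly
```

When the true relationship between domains is many-to-many, no single deterministic pair (F, G) can drive this loss to zero for all inputs, which is the failure mode Augmented CycleGAN targets.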
DAGAN: A Domain-Aware Method for Image-to-Image Translations
A generative framework, DAGAN (Domain-Aware Generative Adversarial Network), is proposed that enables domains to learn diverse mapping relationships and integrates the translated domains into a complete image with smoothed labels to maintain realism.
Image-to-image Mapping with Many Domains by Sparse Attribute Transfer
It is demonstrated that image-to-image domain translation with many different domains can be learned more effectively with the architecturally constrained, simple transformation than with previous unconstrained architectures that rely on a cycle-consistency loss.
One-to-one Mapping for Unpaired Image-to-image Translation
This work proposes a self-inverse network learning approach for unpaired image-to-image translation that reaches the state-of-the-art result on the Cityscapes benchmark for label-to-photo unpaired directional image translation.
Asymmetric GAN for Unpaired Image-to-Image Translation
This work proposes Asymmetric GAN (AsymGAN) to adapt asymmetric domains by introducing an auxiliary variable (aux) that learns the extra information needed for transferring from the information-poor domain to the information-rich domain, improving on state-of-the-art approaches.
Learning a Self-inverse Network for Unpaired Bidirectional Image-to-image Translation
This work proposes a self-inverse network learning approach for unpaired image-to-image translation by building on top of CycleGAN, and learns a self-inverse function simply by augmenting the training samples, switching inputs and outputs during training.
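The sample-augmentation step this summary describes can be sketched in a few lines. This is a toy illustration of the idea, not the paper's pipeline; the function name is ours:

```python
def augment_with_swaps(pairs):
    """For a self-inverse network f (one satisfying f(f(x)) = x),
    each training pair (a, b) also yields the swapped pair (b, a),
    since a single network must handle both translation directions."""
    augmented = []
    for a, b in pairs:
        augmented.append((a, b))
        augmented.append((b, a))
    return augmented

print(augment_with_swaps([("label", "photo")]))
# [('label', 'photo'), ('photo', 'label')]
```

Training on the swapped copies is what pushes the single network toward being its own inverse, replacing CycleGAN's two separate generators.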
Diverse Image-to-Image Translation via Disentangled Representations
This work presents an approach based on disentangled representations for producing diverse outputs without paired training images, and proposes to embed images onto two spaces: a domain-invariant content space capturing shared information across domains and a domain-specific attribute space.
Semi-Supervised Learning of Disentangled Representations for Cross-Modal Translation
This paper proposes a semi-supervised learning approach to cross-modal translation tasks that fully exploits extra data from the target domain that achieves state-of-the-art translation results.
Contrastive Learning for Unpaired Image-to-Image Translation
The framework enables one-sided translation in the unpaired image-to-image translation setting, while improving quality and reducing training time, and can be extended to the training setting where each "domain" is only a single image.
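The contrastive objective behind this one-sided framework is an InfoNCE-style loss over image patches: a patch in the output should be more similar to the corresponding input patch than to other patches. A toy version on precomputed similarity scores (a sketch of the general InfoNCE form, not the paper's exact patch-sampling scheme):

```python
import math

def info_nce(pos_sim, neg_sims, tau=0.07):
    """Toy InfoNCE loss on precomputed similarities: low when the
    positive (corresponding-patch) similarity dominates the negatives."""
    num = math.exp(pos_sim / tau)
    den = num + sum(math.exp(s / tau) for s in neg_sims)
    return -math.log(num / den)

# A well-matched patch pair scores a lower loss than a mismatched one.
print(info_nce(0.9, [-0.5, -0.5]) < info_nce(-0.5, [0.9, 0.9]))  # True
```

Because this loss only compares the output against the input, no backward generator or cycle is needed, which is what makes the translation one-sided.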
An Asymmetric Cycle-Consistency Loss for Dealing with Many-to-One Mappings in Image Translation: A Study on Thigh MR Scans
The proposed method modifies the cycle-consistency loss to improve performance on many-to-one mappings without radically changing the architecture or increasing model complexity.
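One plausible reading of such a modification (our sketch, not the paper's exact formulation): keep the full reconstruction penalty in the direction where cycling back is feasible, and down-weight it in the many-to-one direction, where several inputs legitimately share one output and exact reconstruction is impossible:

```python
def asymmetric_cycle_loss(x, x_rec, y, y_rec, w_relaxed=0.1):
    """Illustrative asymmetric cycle loss. If X -> Y is many-to-one,
    the cycle x -> y -> x_rec cannot recover x exactly, so that term
    is down-weighted by w_relaxed; the y-side term keeps full weight."""
    return w_relaxed * abs(x_rec - x) + abs(y_rec - y)

# Imperfect x-reconstruction is tolerated; imperfect y-reconstruction is not.
print(asymmetric_cycle_loss(x=1.0, x_rec=2.0, y=0.0, y_rec=0.0))
```

The asymmetry keeps the stabilizing effect of cycle consistency while no longer forcing the generator to invent information the many-to-one direction has destroyed.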
Multi-Domain Translation by Learning Uncoupled Autoencoders
This work shows that the problem of computing a probabilistic coupling between marginals is equivalent to learning multiple uncoupled autoencoders that embed to a given shared latent distribution, and proposes a new framework and algorithm for multi-domain translation based on learning the shared latent distribution and training autoencoders under distributional constraints.


DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.
XGAN: Unsupervised Image-to-Image Translation for many-to-many Mappings
XGAN ("Cross-GAN"), a dual adversarial autoencoder, is introduced, which captures a shared representation of the common domain semantic content in an unsupervised way, while jointly learning the domain-to-domain image translations in both directions.
Learning to Discover Cross-Domain Relations with Generative Adversarial Networks
This work proposes a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN) and successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.
Adversarial Inverse Graphics Networks: Learning 2D-to-3D Lifting and Image-to-Image Translation from Unpaired Supervision
Adversarial Inverse Graphics networks (AIGNs) are proposed, weakly supervised neural network models that combine feedback from rendering their predictions, with distribution matching between their predictions and a collection of ground-truth factors, and outperform models supervised by only paired annotations.
CyCADA: Cycle-Consistent Adversarial Domain Adaptation
A novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model that adapts representations at both the pixel-level and feature-level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs is proposed.
Unsupervised Image-to-Image Translation Networks
This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.
Toward Multimodal Image-to-Image Translation
This work aims to model a distribution of possible outputs in a conditional generative modeling setting that helps prevent a many-to-one mapping from the latent code to the output during training, also known as the problem of mode collapse.
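The core mechanism this summary describes is conditioning the generator on a latent code so that one input can yield many outputs. A toy illustration (the function is ours, not the paper's model):

```python
def conditional_generate(x, z):
    """Toy conditional generator: the latent code z selects among the
    many plausible outputs for a single input x, so distinct codes
    yield distinct outputs instead of collapsing to one mode."""
    return x + z

# Three latent codes give three distinct outputs for the same input.
outputs = {conditional_generate(5.0, z) for z in (0.0, 1.0, 2.0)}
print(len(outputs))  # 3
```

Mode collapse corresponds to the generator ignoring z, i.e. all codes mapping to the same output; the training objective is designed to keep the code-to-output mapping injective.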
Coupled Generative Adversarial Networks
This work proposes coupled generative adversarial network (CoGAN), which can learn a joint distribution without any tuple of corresponding images, and applies it to several joint distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.
Fine-Grained Visual Comparisons with Local Learning
  • A. Yu, K. Grauman
  • 2014 IEEE Conference on Computer Vision and Pattern Recognition, 2014
This work proposes a local learning approach for fine-grained visual comparisons that outperforms state-of-the-art methods for relative attribute prediction and shows how to identify analogous pairs using learned metrics.
PixelVAE: A Latent Variable Model for Natural Images
Natural image modeling is a landmark challenge of unsupervised learning. Variational Autoencoders (VAEs) learn a useful latent representation and model global structure well but have difficulty capturing small details.