Corpus ID: 211677731

Joint Wasserstein Distribution Matching

Jiezhang Cao, Langyuan Mo, Qing Du, Yong Guo, Peilin Zhao, Junzhou Huang, Mingkui Tan
The joint distribution matching (JDM) problem, which aims to learn bidirectional mappings that match the joint distributions of two domains, arises in many machine learning and computer vision applications. The problem, however, is very difficult due to two critical challenges: (i) it is often difficult to exploit sufficient information from the joint distribution to conduct the matching; (ii) the problem is hard to formulate and optimize. In this paper, relying on optimal transport theory, we propose… 


Coupled Generative Adversarial Networks
This work proposes the coupled generative adversarial network (CoGAN), which can learn a joint distribution without any tuple of corresponding images, applies it to several joint distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.
Learning Generative Models with Sinkhorn Divergences
This paper presents the first tractable computational method to train large-scale generative models using an optimal transport loss, and tackles the resulting issues by relying on two key ideas: entropic smoothing, which turns the original OT loss into one that can be computed using Sinkhorn fixed-point iterations, and algorithmic (automatic) differentiation of these iterations.
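The Sinkhorn fixed-point iterations mentioned in the summary above can be sketched for discrete distributions as follows; this is a minimal illustrative implementation, not the paper's code, and the function name, `eps`, and iteration count are assumptions:

```python
import numpy as np

def sinkhorn(a, b, C, eps=0.1, n_iters=200):
    """Entropic-regularized OT between discrete marginals a and b
    with cost matrix C, via Sinkhorn fixed-point iterations.
    Returns the (approximate) transport plan P."""
    K = np.exp(-C / eps)               # Gibbs kernel from the cost
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)              # scale columns toward marginal b
        u = a / (K @ v)                # scale rows toward marginal a
    return u[:, None] * K * v[None, :] # diag(u) K diag(v)
```

The regularized transport cost is then `np.sum(P * C)`; in the generative-model setting this loss is differentiated through the iterations themselves rather than through an exact OT solver.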
Harmonic Unpaired Image-to-image Translation
This paper develops HarmonicGAN to learn bi-directional translations between the source and the target domains, and turns CycleGAN from a failure to a success, halving the mean-squared error, and generating images that radiologists prefer over competing methods in 95% of cases.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
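The cycle consistency idea, F(G(X)) ≈ X and G(F(Y)) ≈ Y, can be written as a small loss function. A minimal NumPy sketch, assuming an L1 penalty (the form CycleGAN uses) and hypothetical mappings `G: X→Y` and `F: Y→X`:

```python
import numpy as np

def cycle_consistency_loss(G, F, x, y):
    """L1 cycle loss: penalize F(G(x)) deviating from x
    and G(F(y)) deviating from y."""
    forward_cycle = np.mean(np.abs(F(G(x)) - x))   # X -> Y -> X
    backward_cycle = np.mean(np.abs(G(F(y)) - y))  # Y -> X -> Y
    return forward_cycle + backward_cycle
```

In training this term is added to the adversarial losses of both translators, weighted by a hyperparameter.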
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than a conditional GAN trained on fully labeled data.
JointGAN: Multi-Domain Joint Distribution Learning with Generative Adversarial Nets
A new generative adversarial network is developed for joint distribution matching, which aims to learn a joint distribution of multiple random variables (domains) by learning to sample from conditional distributions between the domains, while simultaneously learning to sample from the marginals of each individual domain.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Fréchet Inception Distance" (FID), which captures the similarity of generated images to real ones better than the Inception Score.
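FID is the Fréchet distance between two Gaussians fitted to Inception features of real and generated images. A minimal sketch of the Gaussian-to-Gaussian distance itself (the feature extraction is omitted), assuming SciPy is available for the matrix square root; the function name is illustrative:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2})."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):       # discard tiny imaginary parts
        covmean = covmean.real
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```

For identical distributions the distance is zero; larger values indicate generated features drifting from the real ones.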
Adversarial Learning with Local Coordinate Coding
This paper derives a generalization bound for LCC-based GANs and proves that a small-dimensional input is sufficient to achieve good generalization.
Auto-Encoding Variational Bayes
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
Wasserstein Auto-Encoders
The Wasserstein Auto-Encoder (WAE) is proposed---a new algorithm for building a generative model of the data distribution that shares many of the properties of VAEs (stable training, encoder-decoder architecture, nice latent manifold structure) while generating samples of better quality, as measured by the FID score.