Reparameterized Sampling for Generative Adversarial Networks

@inproceedings{Wang2021ReparameterizedSF,
  title={Reparameterized Sampling for Generative Adversarial Networks},
  author={Yifei Wang and Yisen Wang and Jiansheng Yang and Zhouchen Lin},
  booktitle={ECML/PKDD},
  year={2021}
}
Recently, sampling methods have been successfully applied to enhance the sample quality of Generative Adversarial Networks (GANs). However, in practice they typically have poor sample efficiency because their proposals are drawn independently from the generator. In this work, we propose REP-GAN, a novel sampling method that allows general dependent proposals by REParameterizing the Markov chains into the latent space of the generator. Theoretically, we show that our reparameterized proposal…
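To make the idea concrete, the following is a minimal sketch (under stated assumptions, not the paper's exact acceptance rule) of a latent-space Metropolis-Hastings chain with a dependent random-walk proposal. The names G, D, dim, and step are illustrative; D is assumed to approximate p_data(x) / (p_data(x) + p_g(x)), so its odds D/(1-D) estimate the density ratio p_data/p_g.

import numpy as np

def latent_mh_sampler(G, D, n_steps=100, dim=128, step=0.1, rng=None):
    # Target in latent space: prior(z) * [p_data(G(z)) / p_g(G(z))],
    # sampled with a random-walk proposal that depends on the current state.
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(dim)                 # start from the prior
    log_prior = lambda z: -0.5 * np.sum(z ** 2)  # standard normal prior

    def log_ratio(x):
        # density-ratio trick: log D - log(1 - D) estimates log(p_data / p_g)
        d = float(np.clip(D(x), 1e-6, 1 - 1e-6))
        return np.log(d) - np.log(1.0 - d)

    for _ in range(n_steps):
        z_prop = z + step * rng.standard_normal(dim)  # dependent proposal
        # the symmetric proposal cancels in the MH ratio,
        # leaving only the prior and density-ratio terms
        log_alpha = (log_prior(z_prop) - log_prior(z)
                     + log_ratio(G(z_prop)) - log_ratio(G(z)))
        if np.log(rng.uniform()) < log_alpha:
            z = z_prop  # accept
    return G(z)

Because z_prop stays near the current state, accepted moves are local; an independence sampler would instead redraw z from the prior at every step, which is the inefficiency the abstract points to.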

References

Showing 1-10 of 29 references
Metropolis-Hastings Generative Adversarial Networks
TLDR: The Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs, is introduced; it uses the discriminator from GAN training to build a wrapper around the generator for improved sampling.
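For contrast with the dependent proposal above, here is a hedged sketch of a single MH-GAN-style step with an independent proposal; mh_gan_step, z_prior, and odds are illustrative names, and D is again assumed to be (approximately) calibrated.

import numpy as np

def odds(D, x):
    # D(x) / (1 - D(x)) estimates p_data(x) / p_g(x)
    d = float(np.clip(D(x), 1e-6, 1 - 1e-6))
    return d / (1.0 - d)

def mh_gan_step(x, G, D, z_prior, rng):
    # Independence sampler: propose x' = G(z') with z' drawn fresh from the
    # prior. The proposal density p_g cancels against the target's p_g term,
    # leaving a ratio of discriminator odds in the acceptance probability.
    x_prop = G(z_prior(rng))
    if rng.uniform() < min(1.0, odds(D, x_prop) / odds(D, x)):
        return x_prop  # accept
    return x           # reject: chain stays put, hurting sample efficiency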
Unrolled Generative Adversarial Networks
TLDR: This work introduces a method to stabilize Generative Adversarial Networks by defining the generator objective with respect to an unrolled optimization of the discriminator, and shows how this technique solves the common problem of mode collapse, stabilizes the training of GANs with complex recurrent generators, and increases the generator's diversity and coverage of the data distribution.
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR: An alternative generator architecture for generative adversarial networks is proposed, borrowing from the style transfer literature, that improves the state of the art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Spectral Normalization for Generative Adversarial Networks
TLDR: This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
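Spectral normalization divides each weight matrix by an estimate of its largest singular value, typically obtained with one step of power iteration per training update. A minimal sketch with illustrative names, not the paper's code:

import numpy as np

def spectral_normalize(W, u=None, n_iters=1, rng=None):
    # Estimate the top singular value of W by power iteration, then return
    # W divided by it; u is carried across training steps, so a single
    # iteration per step suffices in practice.
    rng = rng or np.random.default_rng()
    if u is None:
        u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v) + 1e-12
        u = W @ v
        u /= np.linalg.norm(u) + 1e-12
    sigma = u @ W @ v  # approximate largest singular value
    return W / sigma, u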
A-NICE-MC: Adversarial Training for MCMC
TLDR: A-NICE-MC provides the first framework for automatically designing efficient domain-specific Markov chain Monte Carlo proposals, and is able to significantly outperform competing methods such as Hamiltonian Monte Carlo.
Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator
TLDR: The Discriminator Contrastive Divergence is introduced, motivated by properties of the WGAN discriminator and the relationship between WGANs and energy-based models, and is shown to yield significantly improved generation on both synthetic data and several real-world image generation benchmarks.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a…
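The adversarial game this paper introduces is the familiar two-player minimax objective over the discriminator D and generator G (standard notation, reproduced here for reference):

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]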
Discriminator optimal transport
  A. Tanaka. NeurIPS, 2019.
TLDR: Based on some experiments and a bit of optimal transport (OT) theory, this work proposes a discriminator optimal transport (DOT) scheme to improve generated images, and shows that it improves the Inception Score and FID of unconditional GANs trained on CIFAR-10 and STL-10, as well as of a public pre-trained conditional GAN trained on ImageNet.
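As a rough illustration (a hedged sketch under the assumption that DOT amounts to discriminator-guided refinement; dot_refine, grad_D, lr, and keep are invented names, not the paper's algorithm), one can move a generated sample to raise its discriminator score while staying close to where it started:

import numpy as np

def dot_refine(x0, grad_D, n_steps=20, lr=0.05, keep=1.0):
    # Descend the objective  keep * ||x - x0|| - D(x)  by gradient steps:
    # raise the discriminator score while penalizing transport distance.
    x = x0.copy()
    for _ in range(n_steps):
        diff = x - x0
        dist_grad = diff / (np.linalg.norm(diff) + 1e-12)  # grad of ||x - x0||
        x = x + lr * (grad_D(x) - keep * dist_grad)
    return x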
NIPS 2016 Tutorial: Generative Adversarial Networks
TLDR: This report summarizes the tutorial presented by the author at NIPS 2016 on generative adversarial networks (GANs), and describes state-of-the-art image models that combine GANs with other methods.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR: It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
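The truncation trick itself is simple: sample the latent vector from a truncated normal, resampling any entry whose magnitude exceeds a threshold; smaller thresholds trade variety for fidelity. A minimal sketch (illustrative, not BigGAN's code):

import numpy as np

def truncated_normal(shape, threshold=0.5, rng=None):
    # Rejection-resample entries of z ~ N(0, 1) until |z_i| <= threshold.
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(shape)
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(int(mask.sum()))
        mask = np.abs(z) > threshold
    return z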