Corpus ID: 53018855

Discriminator Rejection Sampling

@article{Azadi2018DiscriminatorRS,
  title={Discriminator Rejection Sampling},
  author={Samaneh Azadi and Catherine Olsson and Trevor Darrell and Ian J. Goodfellow and Augustus Odena},
  journal={ArXiv},
  year={2018},
  volume={abs/1810.06758}
}
We propose a rejection sampling scheme using the discriminator of a GAN to approximately correct errors in the GAN generator distribution. […] We then examine where those strict assumptions break down and design a practical algorithm, called Discriminator Rejection Sampling (DRS), that can be used on real datasets. Finally, we demonstrate the efficacy of DRS on a mixture of Gaussians and on the SAGAN model, the state of the art in image generation at the time this work was developed. On…
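As a rough illustration of the sampling step, the sketch below assumes a trained generator, a function disc_logit returning the discriminator's pre-sigmoid output for a single sample, and a burn-in estimate d_max of its maximum logit; the stabilized acceptance rule mirrors the form described in the paper, but all names and details here are illustrative rather than a reference implementation.

```python
import numpy as np

def drs_sample(generator, disc_logit, latent_dim, d_max, gamma=0.0, eps=1e-8):
    """Draw one sample with Discriminator Rejection Sampling (sketch).

    generator  : callable mapping a latent vector z to a sample x (assumed)
    disc_logit : callable returning a scalar discriminator logit for x (assumed)
    d_max      : estimate of the maximum logit, e.g. from a burn-in set of samples
    gamma      : shift that trades acceptance rate against correction strength
    """
    while True:
        z = np.random.randn(latent_dim)
        x = generator(z)
        d = disc_logit(x)
        d_max = max(d_max, d)  # keep a running maximum of the observed logits
        # Stabilized log acceptance ratio, squashed through a sigmoid.
        f = d - d_max - np.log1p(-np.exp(d - d_max - eps)) - gamma
        if np.random.rand() < 1.0 / (1.0 + np.exp(-f)):
            return x
```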

Latent reweighting, an almost free improvement for GANs

Within the line of work that improves sampling quality from pre-trained generators at the expense of increased computational cost, this paper introduces an additional network that predicts latent importance weights, together with two associated sampling methods that avoid the poorest samples.
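Only as an illustration of the reweighting idea (not the paper's exact procedures), a latent point can be kept with probability proportional to a learned importance weight; weight_net and the bound w_max below are assumed inputs.

```python
import numpy as np

def sample_with_latent_weights(generator, weight_net, latent_dim, w_max):
    """Rejection-style sampling in latent space with a learned weight network.
    weight_net(z) is an assumed predictor of latent importance weights and
    w_max an upper bound on those weights; low-weight (poor) latent points
    tend to be discarded before decoding."""
    while True:
        z = np.random.randn(latent_dim)
        if np.random.rand() < weight_net(z) / w_max:
            return generator(z)
```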

Freeze Discriminator: A Simple Baseline for Fine-tuning GANs

It is shown that simple fine-tuning of GANs with the lower layers of the discriminator frozen performs surprisingly well, and that this simple baseline, FreezeD, significantly outperforms previous techniques in both unconditional and conditional GANs.
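A minimal PyTorch-style sketch of the freezing step, assuming the discriminator's child modules are ordered from lower to upper layers; the module layout and optimizer setup are assumptions about the model at hand.

```python
import torch

def freeze_lower_layers(discriminator, num_frozen):
    """Disable gradients for the first `num_frozen` child modules of a
    pretrained discriminator so only its upper layers are fine-tuned."""
    for i, child in enumerate(discriminator.children()):
        if i < num_frozen:
            for p in child.parameters():
                p.requires_grad = False

# The fine-tuning optimizer then only receives the still-trainable parameters:
# opt_d = torch.optim.Adam((p for p in discriminator.parameters() if p.requires_grad), lr=2e-4)
```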

Refining Deep Generative Models via Discriminator Gradient Flow

Empirical results demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
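A rough sketch of a refinement loop for the KL gradient-flow case, treating the discriminator logit as a log density-ratio estimate; DGflow covers a family of f-divergences and can also refine in latent space, so the step size, noise term, and function names here are assumptions.

```python
import torch

def gradient_flow_refine(samples, disc_logit, steps=25, step_size=0.01):
    """Refine generated samples by noisy gradient ascent on the discriminator
    logit, a discretized gradient flow toward higher estimated density ratio."""
    x = samples.detach().clone()
    for _ in range(steps):
        x.requires_grad_(True)
        logit = disc_logit(x).sum()
        grad, = torch.autograd.grad(logit, x)
        x = (x + step_size * grad
             + torch.randn_like(x) * (2 * step_size) ** 0.5).detach()
    return x
```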

Dual Rejection Sampling for Wasserstein Auto-Encoders

A novel dual rejection sampling method is proposed to improve the samples generated by Wasserstein Auto-Encoders (WAEs) at sampling time: it corrects the generative prior with a discriminator-based rejection sampling scheme in latent space and then rectifies the generated distribution with another discriminator-based rejection sampling technique in data space.

Collaborative Sampling in Generative Adversarial Networks

This work proposes a collaborative sampling scheme between the generator and the discriminator that improves generated samples both quantitatively and qualitatively, offering a new degree of freedom in GAN sampling.

Noise Space Optimization for GANs

Noise space optimization is a novel sampling procedure that alternates between the traditional step of optimizing the generator while holding the noise-space samples constant and a second step that moves points in the noise space along the gradient direction to improve the loss while holding the generator parameters constant.
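The latent-update half of such a procedure could look like the sketch below, which moves a batch of noise vectors downhill on the non-saturating generator loss while the networks stay fixed; the loss choice and step sizes are assumptions, not the paper's exact recipe.

```python
import torch

def refine_noise(generator, discriminator, z, steps=10, lr=0.05):
    """Move latent points z along the gradient that lowers the non-saturating
    generator loss; only z is updated, network parameters are never stepped."""
    z = z.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        logits = discriminator(generator(z))
        loss = torch.nn.functional.softplus(-logits).mean()  # -log sigmoid(D(G(z)))
        loss.backward()
        opt.step()
    return z.detach()
```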

Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples

A simple modification to the Generative Adversarial Network (GAN) training algorithm that materially improves results with no increase in computational cost is introduced: when updating the generator parameters, the gradient contributions from the elements of the batch that the critic scores as 'least realistic' are zeroed out.
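A compact sketch of the generator update under this rule: only the k batch elements the critic scores highest contribute to the loss, so gradients from the least realistic samples vanish. The non-saturating loss form used here is an assumption.

```python
import torch

def topk_generator_loss(discriminator, fake_images, k):
    """Generator loss where only the k most realistic samples in the batch
    (highest critic scores) contribute gradients; the rest are discarded."""
    scores = discriminator(fake_images).squeeze()
    topk_scores, _ = torch.topk(scores, k)  # keep the top-k critic scores
    return torch.nn.functional.softplus(-topk_scores).mean()
```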

Your GAN is Secretly an Energy-based Model and You Should use Discriminator Driven Latent Sampling

Discriminator Driven Latent Sampling is shown to be highly efficient compared to previous methods that work in the high-dimensional pixel space; it can be applied to improve previously trained GANs of many types and achieves a new state of the art in the unconditional image synthesis setting without introducing extra parameters or additional training.
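In the energy-based view, latent samples are drawn from the standard-normal prior tilted by the discriminator logit; a plain unadjusted Langevin loop over that energy might look like the sketch below, with all step-size choices illustrative.

```python
import torch

def ddls_langevin(generator, disc_logit, z, steps=50, step_size=0.01):
    """Langevin sampling in latent space for the energy
    E(z) = ||z||^2 / 2 - d(G(z)), i.e. the Gaussian prior tilted by the
    discriminator logit d applied to the generated sample."""
    z = z.detach().clone()
    for _ in range(steps):
        z.requires_grad_(True)
        energy = 0.5 * (z ** 2).sum(dim=1) - disc_logit(generator(z)).squeeze()
        grad, = torch.autograd.grad(energy.sum(), z)
        z = (z - 0.5 * step_size * grad
             + torch.randn_like(z) * step_size ** 0.5).detach()
    return z
```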

The Implicit Metropolis-Hastings Algorithm

This work introduces the implicit Metropolis-Hastings algorithm, and presents a theoretical result stating that the discriminator loss upper bounds the total variation distance between the target distribution and the stationary distribution.

Refining Deep Generative Models via Wasserstein Gradient Flows

Empirical results demonstrate that DGflow leads to significant improvement in the quality of generated samples for a variety of generative models, outperforming the state-of-the-art Discriminator Optimal Transport (DOT) and Discriminator Driven Latent Sampling (DDLS) methods.
...

References


Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

cGANs with Projection Discriminator

With this modification, the quality of class-conditional image generation on the ILSVRC2012 (ImageNet) 1000-class dataset is significantly improved; the method was also extended to super-resolution, where it succeeded in producing highly discriminative super-resolution images.

Self-Attention Generative Adversarial Networks

The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.

Do GANs actually learn the distribution? An empirical study

Empirical evidence is presented that well-known GAN approaches do learn distributions of fairly low support, and thus presumably are not learning the target distribution when the discriminator has finite size.

Improved Training of Wasserstein GANs

This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
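A short sketch of such a penalty term: the critic's gradient norm at random interpolates of real and fake batches is pushed toward 1. The coefficient of 10 is the commonly used value, and the exact batch handling is an assumption.

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize deviations of the critic's input-gradient norm from 1 at
    points interpolated between real and fake samples (WGAN-GP style)."""
    alpha = torch.rand(real.size(0), *([1] * (real.dim() - 1)), device=real.device)
    interp = (alpha * real.detach() + (1 - alpha) * fake.detach()).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(scores.sum(), interp, create_graph=True)
    grad_norm = grads.flatten(start_dim=1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```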

GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium

This work proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions and introduces the "Frechet Inception Distance" (FID) which captures the similarity of generated images to real ones better than the Inception Score.
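The distance itself reduces to a Fréchet distance between two Gaussians fitted to Inception features of real and generated images; below is a sketch of that final computation, with feature extraction by an Inception network assumed to happen upstream.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """FID core: Frechet distance between two Gaussians given their means and
    covariances (estimated from Inception features of real and fake images)."""
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```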

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
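One simple way to realize such a truncation is to resample any latent coordinate whose magnitude exceeds a threshold, as sketched below; the threshold value and the resampling strategy are illustrative choices.

```python
import torch

def truncated_normal_latents(batch_size, latent_dim, threshold=0.5):
    """Truncation-trick style latents: redraw coordinates with magnitude above
    `threshold`, trading sample variety for fidelity."""
    z = torch.randn(batch_size, latent_dim)
    mask = z.abs() > threshold
    while mask.any():
        z[mask] = torch.randn(int(mask.sum()))
        mask = z.abs() > threshold
    return z
```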

VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning

VEEGAN is introduced, which features a reconstructor network, reversing the action of the generator by mapping from data to noise, and resists mode collapsing to a far greater extent than other recent GAN variants, and produces more realistic samples.

Improved Semi-supervised Learning with GANs using Manifold Invariances

This work proposes to estimate the tangent space to the data manifold using GANs and use it to inject invariances into the classifier, and proposes improvements over existing methods for learning the inverse mapping (i.e., the encoder) that greatly improve the semantic similarity of the reconstructed sample to the input sample.

Least Squares Generative Adversarial Networks

This paper proposes Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
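The losses themselves are just squared errors against target labels for real and fake scores, as in the sketch below; the 0/1 label coding is one common choice rather than the only one.

```python
import torch

def lsgan_discriminator_loss(d_real, d_fake, a=0.0, b=1.0):
    """Least-squares discriminator loss with target b for real scores and a for
    fake scores; minimizing this objective relates to the Pearson chi-squared
    divergence."""
    return 0.5 * ((d_real - b) ** 2).mean() + 0.5 * ((d_fake - a) ** 2).mean()

def lsgan_generator_loss(d_fake, c=1.0):
    """Generator pushes fake scores toward c (here the 'real' label)."""
    return 0.5 * ((d_fake - c) ** 2).mean()
```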