Collaborative Sampling in Generative Adversarial Networks

@inproceedings{Liu2020CollaborativeSI,
  title={Collaborative Sampling in Generative Adversarial Networks},
  author={Yuejiang Liu and Parth Kothari and Alexandre Alahi},
  booktitle={AAAI},
  year={2020}
}
The standard practice in Generative Adversarial Networks (GANs) discards the discriminator during sampling. However, this sampling method loses valuable information learned by the discriminator regarding the data distribution. In this work, we propose a collaborative sampling scheme between the generator and the discriminator for improved data generation. Guided by the discriminator, our approach refines the generated samples through gradient-based updates at a particular layer of the generator…
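The refinement idea in the abstract can be illustrated with a toy sketch: gradient-ascent updates on a generated sample to increase the discriminator score. The linear-logistic discriminator below is an illustrative assumption, not the paper's network, and the paper applies the updates at an internal generator layer rather than directly on the sample as done here.

```python
import numpy as np

# Toy discriminator D(x) = sigmoid(w.x + b); w, b are assumed values.
w = np.array([2.0, -1.0])
b = 0.5

def d_score(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def refine(x, steps=50, lr=0.1):
    """Gradient-based refinement: push x toward higher discriminator score."""
    x = x.copy()
    for _ in range(steps):
        s = d_score(x)
        grad = s * (1.0 - s) * w   # chain rule through the sigmoid
        x += lr * grad             # ascend the discriminator's score surface
    return x

x0 = np.array([0.0, 0.0])
x1 = refine(x0)                    # refined sample scores higher under D
```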
Dual Rejection Sampling for Wasserstein Auto-Encoders
TLDR: A novel dual rejection sampling method is proposed to improve the sample quality of Wasserstein Auto-Encoders (WAEs) at sampling time: it corrects the generative prior with a discriminator-based rejection sampling scheme in latent space, and then rectifies the generated distribution with another discriminator-based rejection sampling step in data space.
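Discriminator-based rejection sampling, the building block this paper applies twice (in latent and data space), can be sketched as follows. The proposal and discriminator below are assumed toy stand-ins chosen so the density ratio has a known maximum; a real implementation would estimate that maximum from a batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def proposal():
    # Stand-in generator/prior: a Gaussian that is too wide.
    return rng.normal(0.0, 2.0)

def disc(x):
    # Stand-in discriminator preferring samples near the "real" mode at 0.
    return 1.0 / (1.0 + np.exp(x**2 - 1.0))

def rejection_sample(n=100, r_max=np.e):
    """Accept x with probability r(x)/r_max, r(x) = D(x)/(1-D(x))."""
    out = []
    while len(out) < n:
        x = proposal()
        r = disc(x) / (1.0 - disc(x))   # density-ratio trick; here exp(1-x^2)
        if rng.uniform() < r / r_max:
            out.append(x)
    return np.array(out)

samples = rejection_sample()            # concentrated near the real mode
```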
Efficient Subsampling for Generating High-Quality Images from Conditional Generative Adversarial Networks
TLDR: An efficient method called conditional density ratio estimation in feature space with a conditional Softplus loss (cDRE-F-cSP) is proposed, which can efficiently subsample both class-conditional GANs and CcGANs, and is compared with the state-of-the-art unconditional subsampling method.
Next Steps for Image Synthesis using Semantic Segmentation
Image synthesis in a desired semantic layout can be used in many self-driving tasks, giving us the possibility to enhance existing challenging datasets with realistic-looking images which we do not…
Crowd-Robot Interaction: Crowd-Aware Robot Navigation With Attention-Based Deep Reinforcement Learning
TLDR: This work proposes to rethink pairwise interactions with a self-attention mechanism, jointly modeling Human-Robot as well as Human-Human interactions in a deep reinforcement learning framework, and captures the Human-Human interactions occurring in dense crowds that indirectly affect the robot's anticipation capability.
Social NCE: Contrastive Learning of Socially-aware Motion Representations
TLDR: This work introduces a social contrastive loss that encourages the encoded motion representation to preserve sufficient information for distinguishing a positive future event from a set of negative ones, in order to incorporate negative data augmentation into motion representation learning.

References

Showing 1-10 of 73 references
Metropolis-Hastings Generative Adversarial Networks
TLDR: The Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs, is introduced; it uses the discriminator from GAN training to build a wrapper around the generator for improved sampling.
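The MH-GAN wrapper can be sketched as an independence Metropolis-Hastings sampler whose acceptance ratio is built from the discriminator via the density-ratio trick r(x) = D(x)/(1-D(x)). The Gaussian "generator" and closed-form "discriminator" below are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator():
    # Stand-in generator: a Gaussian that is too wide around the real mode.
    return rng.normal(0.0, 2.0)

def discriminator(x):
    # Stand-in discriminator favouring samples near the "real" mode at 0.
    return 1.0 / (1.0 + np.exp(x**2 - 1.0))

def mh_sample(n_steps=200):
    """Independence MH chain over generator proposals, guided by D."""
    x = generator()
    for _ in range(n_steps):
        x_new = generator()
        r_old = discriminator(x) / (1.0 - discriminator(x))
        r_new = discriminator(x_new) / (1.0 - discriminator(x_new))
        if rng.uniform() < min(1.0, r_new / r_old):
            x = x_new                 # accept the proposal
    return x

samples = np.array([mh_sample() for _ in range(100)])
```

The wrapped sampler concentrates mass where the discriminator sees real-looking data, tightening the too-wide proposal distribution.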
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good…
Improved Training of Wasserstein GANs
TLDR: This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than the standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
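The gradient penalty described above is evaluated at random interpolates between real and generated samples. The sketch below uses an assumed linear toy critic (whose input gradient is known in closed form) purely so the penalty term is self-contained; a real implementation differentiates a neural critic with autodiff.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.6, 0.8, 1.2])                # toy linear critic f(x) = w . x

def critic_grad(x):
    return w                                 # gradient of a linear critic

def gradient_penalty(real, fake, lam=10.0):
    """WGAN-GP term: lam * E[(||grad_x f(x_hat)|| - 1)^2] at interpolates."""
    eps = rng.uniform(size=(real.shape[0], 1))
    x_hat = eps * real + (1.0 - eps) * fake  # random interpolates
    norms = np.array([np.linalg.norm(critic_grad(x)) for x in x_hat])
    return lam * np.mean((norms - 1.0) ** 2)

real = rng.normal(size=(4, 3))
fake = rng.normal(size=(4, 3))
gp = gradient_penalty(real, fake)            # penalizes deviation from norm 1
```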
Self-Attention Generative Adversarial Networks
TLDR: The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
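The mechanism SAGAN inserts into GAN feature maps is self-attention: every spatial position attends to every other. A bare-bones sketch, assuming a single head with no learned query/key/value projections (SAGAN learns 1x1-convolution projections):

```python
import numpy as np

def self_attention(x):
    """x: (n_positions, channels). Scaled dot-product self-attention."""
    scores = x @ x.T / np.sqrt(x.shape[1])          # all-pairs similarities
    scores -= scores.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)         # rows sum to 1
    return attn @ x                                 # mix features globally

out = self_attention(np.ones((4, 3)))               # constant input is a fixed point
```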
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR: An alternative generator architecture for generative adversarial networks is proposed, borrowing from the style transfer literature, that improves the state of the art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
Self-Supervised Generative Adversarial Networks
TLDR: This work exploits two popular unsupervised learning techniques, adversarial training and self-supervision, to close the gap between conditional and unconditional GANs, allowing the networks to collaborate on the task of representation learning while being adversarial with respect to the classic GAN game.
Spectral Normalization for Generative Adversarial Networks
TLDR: This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
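Spectral normalization divides each weight matrix by its largest singular value, estimated cheaply with power iteration. A standalone sketch (in practice this runs per layer inside the discriminator, reusing the power-iteration vector across training steps):

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, n_iters=50):
    """Return W / sigma_max(W), with sigma_max estimated by power iteration."""
    u = rng.normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v                 # estimated largest singular value
    return W / sigma                  # normalized weight has spectral norm ~1

W = rng.normal(size=(8, 5))
W_sn = spectral_normalize(W)          # Lipschitz-constrained layer weights
```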
Adversarial Feedback Loop
TLDR: A novel method is proposed that makes explicit use of the discriminator at test time, in a feedback manner, in order to improve the generator's results; it can contribute to both conditional and unconditional GANs.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR: A new training methodology for generative adversarial networks is described: starting from a low resolution and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning
TLDR: VEEGAN is introduced, which features a reconstructor network that reverses the action of the generator by mapping from data to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.