Corpus ID: 232035511

Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks

@article{Lee2021SelfDiagnosingGD,
  title={Self-Diagnosing GAN: Diagnosing Underrepresented Samples in Generative Adversarial Networks},
  author={Jinhee Lee and Haeri Kim and Youngkyu Hong and Hye Won Chung},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.12033}
}
Despite remarkable performance in producing realistic samples, Generative Adversarial Networks (GANs) often produce low-quality samples near low-density regions of the data manifold. Recently, many techniques have been developed to improve the quality of generated samples, either by rejecting low-quality samples after training or by pre-processing the empirical data distribution before training, but at the cost of reduced diversity. To guarantee both quality and diversity, we propose a…
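The abstract is truncated above, so the following is only a rough illustration of the diagnosis idea it describes, not the paper's actual algorithm: a hypothetical sketch that tracks each real sample's log-density-ratio estimate (the discriminator logit) across training and oversamples the ones the model underfits. The class name, the Welford-style tracking, and the scoring rule are all assumptions.

```python
# Hypothetical sketch of discriminator-based sample diagnosis.
# Idea: track each real sample's log-density-ratio estimate,
# LDR(x) = logit(D(x)) ~= log p_data(x) - log p_g(x), across epochs;
# samples with high mean/variance of LDR are treated as underrepresented
# and oversampled. Assumes update() is called once per sample per epoch.
import torch

class SampleDiagnoser:
    def __init__(self, num_samples):
        self.count = 0
        self.mean = torch.zeros(num_samples)   # running mean of LDR per sample
        self.m2 = torch.zeros(num_samples)     # running sum of squared deviations

    @torch.no_grad()
    def update(self, indices, d_logits):
        """Welford-style update with this epoch's discriminator logits."""
        self.count += 1
        ldr = d_logits.cpu()
        delta = ldr - self.mean[indices]
        self.mean[indices] += delta / self.count
        self.m2[indices] += delta * (ldr - self.mean[indices])

    def weights(self, alpha=1.0):
        """High mean/variance of LDR => underrepresented => larger weight."""
        var = self.m2 / max(self.count - 1, 1)
        score = self.mean + alpha * var.sqrt()
        return torch.softmax(score, dim=0)
```

The resulting weights could then be fed to torch.utils.data.WeightedRandomSampler so that diagnosed samples are drawn more often in later epochs.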
Citations

Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence
TLDR: DP-Sinkhorn minimizes the Sinkhorn divergence, a computationally efficient approximation to the exact optimal transport distance, between the model and data in a differentially private manner and uses a novel technique for controlling the bias-variance trade-off of gradient estimates.
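A minimal sketch of an entropy-regularized optimal-transport (Sinkhorn) cost between two sample batches and the debiased divergence built from it; DP-Sinkhorn additionally privatizes the gradients (e.g. via DP-SGD), which is omitted here, and the parameter names are assumptions.

```python
# Log-domain Sinkhorn iterations for entropic OT between point clouds.
import math
import torch

def sinkhorn_cost(x, y, eps=0.1, n_iters=100):
    """x: (n, d) generated batch, y: (m, d) data batch."""
    cost = torch.cdist(x, y, p=2) ** 2                    # pairwise squared distances
    log_a = torch.full((x.size(0),), -math.log(x.size(0)))  # uniform weights
    log_b = torch.full((y.size(0),), -math.log(y.size(0)))
    f = torch.zeros_like(log_a)                           # dual potentials
    g = torch.zeros_like(log_b)
    for _ in range(n_iters):                              # alternating updates
        f = -eps * torch.logsumexp((g[None, :] - cost) / eps + log_b[None, :], dim=1)
        g = -eps * torch.logsumexp((f[:, None] - cost) / eps + log_a[:, None], dim=0)
    plan = torch.exp((f[:, None] + g[None, :] - cost) / eps
                     + log_a[:, None] + log_b[None, :])   # transport plan
    return (plan * cost).sum()

def sinkhorn_divergence(x, y, eps=0.1):
    """Debiased form: S(x, y) - (S(x, x) + S(y, y)) / 2."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * (sinkhorn_cost(x, x, eps) + sinkhorn_cost(y, y, eps)))
```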
EditGAN: High-Precision Semantic Image Editing
Generative adversarial networks (GANs) have recently found applications in image editing. However, most GAN-based image editing methods often require large-scale datasets with semantic segmentation annotations…

References

Showing 1-10 of 56 references
PacGAN: The Power of Two Samples in Generative Adversarial Networks
TLDR: It is shown that packing naturally penalizes generators with mode collapse, thereby favoring generator distributions with less mode collapse during the training process, and numerical experiments suggest that packing provides significant improvements in practice as well.
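A minimal sketch of the packing idea described above, under the common formulation in which the discriminator judges m samples jointly by concatenating them along the channel dimension; the helper name is an assumption.

```python
# PacGAN-style "packing": a mode-collapsed generator produces packed fake
# batches that look alike, which the joint discriminator easily detects.
import torch

def pack(batch, m=2):
    """(B, C, H, W) -> (B // m, m * C, H, W); B must be divisible by m."""
    b, c, h, w = batch.shape
    return batch.reshape(b // m, m * c, h, w)

# The discriminator is built with in_channels = m * C and trained on
# pack(real_batch) vs. pack(fake_batch) exactly as in a standard GAN.
```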
Subsampling Generative Adversarial Networks: Density Ratio Estimation in Feature Space With Softplus Loss
TLDR: A novel Softplus (SP) loss for DRE is proposed, a sample-based DRE method in a feature space learned by a specially designed and pre-trained ResNet-34, termed DRE-F-SP, is developed, and the rate of convergence of a density ratio model trained under the SP loss is derived.
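A sketch of the subsampling step this pipeline enables, assuming a density-ratio model `ratio_fn` (e.g. trained in a pre-trained ResNet-34 feature space, as the paper does) is already available; the Softplus loss itself is not reproduced here, and acceptance follows standard rejection sampling with an assumed ratio cap.

```python
# Rejection-sample generator outputs using an estimated density ratio
# r(x) ~= p_data(x) / p_g(x): accept x with probability min(1, r(x) / M).
import torch

@torch.no_grad()
def subsample(generator, ratio_fn, n_keep, batch=256, ratio_max=20.0, z_dim=128):
    kept = []
    while sum(x.size(0) for x in kept) < n_keep:
        z = torch.randn(batch, z_dim)
        fake = generator(z)
        r = ratio_fn(fake)                                  # estimated density ratio
        accept = torch.rand(batch) < (r / ratio_max).clamp(max=1.0)
        kept.append(fake[accept])
    return torch.cat(kept)[:n_keep]
```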
Mining GOLD Samples for Conditional GANs
TLDR: This work proposes three applications of the GOLD: example re-weighting, rejection sampling, and active learning, which improve the training, inference, and data selection of cGANs, respectively, and demonstrates that the proposed methods outperform corresponding baselines for all three applications on different image datasets.
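A sketch of the example re-weighting application only, using the raw conditional-discriminator logit as a stand-in score; the actual GOLD estimator and the exact weighting scheme differ in detail and are not reproduced here, so treat both the score and the direction as assumptions.

```python
# Per-example re-weighted generator loss for a cGAN, driven by a
# discriminator-derived score (stand-in for the GOLD estimator).
import torch
import torch.nn.functional as F

def reweighted_generator_loss(d_logits_fake):
    """Upweight generated examples the discriminator scores as unrealistic."""
    with torch.no_grad():
        # normalized weights with mean 1 (assumed scheme, not the paper's)
        w = torch.softmax(-d_logits_fake, dim=0) * d_logits_fake.numel()
    loss = F.binary_cross_entropy_with_logits(
        d_logits_fake, torch.ones_like(d_logits_fake), reduction="none")
    return (w * loss).mean()
```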
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
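This paper also introduced the Inception Score, IS = exp(E_x[KL(p(y|x) || p(y))]), computed from classifier softmax outputs over generated images; a minimal sketch follows, assuming `probs` holds the (N, num_classes) softmax predictions.

```python
# Inception Score from precomputed classifier probabilities.
import torch

def inception_score(probs, eps=1e-12):
    marginal = probs.mean(dim=0, keepdim=True)   # p(y), averaged over samples
    kl = (probs * (torch.log(probs + eps) - torch.log(marginal + eps))).sum(dim=1)
    return torch.exp(kl.mean()).item()           # exp of mean KL(p(y|x) || p(y))
```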
Mimicry: Towards the Reproducibility of GAN Research
TLDR: Mimicry is introduced, a lightweight PyTorch library that provides implementations of popular state-of-the-art GANs and evaluation metrics to closely reproduce reported scores in the literature.
Metropolis-Hastings Generative Adversarial Networks
TLDR: The Metropolis-Hastings generative adversarial network (MH-GAN), which combines aspects of Markov chain Monte Carlo and GANs, is introduced, which uses the discriminator from GAN training to build a wrapper around the generator for improved sampling.
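A minimal sketch of the acceptance rule behind this wrapper: with a (calibrated) discriminator D, the density ratio is r(x) = D(x) / (1 - D(x)), and a proposed generator sample x' replaces the current one with probability min(1, r(x') / r(x)); the chain setup below (initialization from a generator draw, step count) is a simplifying assumption.

```python
# Metropolis-Hastings sampling wrapped around a trained generator.
import torch

@torch.no_grad()
def mh_sample(generator, discriminator, z_dim=128, n_steps=100):
    x = generator(torch.randn(1, z_dim))

    def ratio(v):
        d = discriminator(v).sigmoid().clamp(1e-6, 1 - 1e-6)
        return d / (1 - d)                       # r(x) = D(x) / (1 - D(x))

    for _ in range(n_steps):
        x_prop = generator(torch.randn(1, z_dim))
        alpha = (ratio(x_prop) / ratio(x)).clamp(max=1.0)
        if torch.rand(()) < alpha:               # accept with probability alpha
            x = x_prop
    return x
```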
Self-supervised GAN: Analysis and Improvement with Multi-class Minimax Game
TLDR: An in-depth analysis is performed to understand how SS tasks interact with the learning of the generator, and new SS tasks based on a multi-class minimax game are proposed to address the catastrophic forgetting issue in the GAN discriminator.
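A sketch of the kind of self-supervised (SS) task involved, using the common rotation-prediction formulation: images are rotated by {0, 90, 180, 270} degrees and an auxiliary head classifies the rotation. The paper's multi-class minimax variant extends this; the head and loss shapes here are assumptions.

```python
# Auxiliary rotation-prediction loss for a GAN discriminator backbone.
import torch
import torch.nn.functional as F

def rotation_ss_loss(images, backbone, rot_head):
    rotated, labels = [], []
    for k in range(4):                                   # k * 90 degree rotations
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k,
                                 dtype=torch.long, device=images.device))
    x = torch.cat(rotated)
    y = torch.cat(labels)
    logits = rot_head(backbone(x))                       # (4B, 4) rotation logits
    return F.cross_entropy(logits, y)
```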
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR: It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
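A minimal sketch of the truncation trick: latent components are resampled until they fall inside [-t, t], i.e. drawn from a truncated normal, so a smaller t trades diversity for fidelity; the resampling loop is one of several equivalent ways to realize this.

```python
# Truncated latent sampling for inference-time quality/diversity control.
import torch

def truncated_noise(batch, z_dim, truncation=0.5):
    z = torch.randn(batch, z_dim)
    while True:
        mask = z.abs() > truncation          # components outside [-t, t]
        if not mask.any():
            return z
        z[mask] = torch.randn(int(mask.sum()))  # redraw only those components
```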
Spectral Normalization for Generative Adversarial Networks
TLDR: This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to the previous training stabilization techniques.
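A sketch of the mechanics: one step of power iteration estimates the largest singular value sigma of a weight matrix, and the weight is divided by it, constraining the layer's Lipschitz constant. PyTorch ships this as torch.nn.utils.spectral_norm; the manual version below is illustrative and its function signature is an assumption.

```python
# Spectral normalization of a weight matrix via one power-iteration step.
import torch

def spectral_normalize(w, u, eps=1e-12):
    """w: (out, in) weight, u: (out,) persistent power-iteration vector."""
    with torch.no_grad():
        v = torch.mv(w.t(), u)
        v = v / (v.norm() + eps)
        u = torch.mv(w, v)
        u = u / (u.norm() + eps)
    sigma = torch.dot(u, torch.mv(w, v))     # estimated largest singular value
    return w / sigma, u                      # normalized weight, updated u
```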
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
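A minimal sketch of the resulting minimax objective, min_G max_D E_x[log D(x)] + E_z[log(1 - D(G(z)))], as one alternating training step; it assumes D outputs (B, 1) logits and uses the non-saturating generator loss the paper also suggests in practice.

```python
# One alternating D/G update for the original GAN objective.
import torch
import torch.nn.functional as F

def gan_step(g, d, opt_g, opt_d, real, z_dim=128):
    b = real.size(0)
    fake = g(torch.randn(b, z_dim))
    # discriminator ascends log D(x) + log(1 - D(G(z)))
    d_loss = (F.binary_cross_entropy_with_logits(d(real), torch.ones(b, 1))
              + F.binary_cross_entropy_with_logits(d(fake.detach()), torch.zeros(b, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator maximizes log D(G(z)) (non-saturating form)
    g_loss = F.binary_cross_entropy_with_logits(d(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```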