• Corpus ID: 244102982

Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data

@inproceedings{Jiang2021DeceiveDA,
  title={Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data},
  author={Liming Jiang and Bo Dai and Wayne Wu and Chen Change Loy},
  booktitle={Neural Information Processing Systems},
  year={2021}
}
Generative adversarial networks (GANs) typically require ample data for training in order to synthesize high-fidelity images. Recent studies have shown that training GANs with limited data remains formidable due to discriminator overfitting, the underlying cause that impedes the generator’s convergence. This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA) to encourage healthy competition between the generator and the discriminator. As an alternative method to… 
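
Conceptually, APA deceives the discriminator by presenting generator outputs to it as "real" with a probability p that is adapted online from an overfitting heuristic (the same sign-based statistic used by StyleGAN2-ADA). A minimal PyTorch sketch of that mechanism; the target and step constants are illustrative, not the paper's exact settings:

```python
import torch

def apa_mix(real_images, fake_images, p):
    """With probability p per sample, swap a real image for a detached fake
    one, so the discriminator is trained to treat it as 'real' (APA)."""
    mask = (torch.rand(real_images.size(0), device=real_images.device) < p)
    mask = mask.view(-1, 1, 1, 1).float()
    return mask * fake_images.detach() + (1.0 - mask) * real_images

def update_p(p, d_logits_real, target=0.6, step=1e-3):
    """lambda_r = E[sign(D(real))] estimates discriminator overfitting; raise
    the deception probability when it exceeds the target, lower it otherwise.
    (target/step values here are illustrative.)"""
    lambda_r = d_logits_real.sign().mean().item()
    p = p + step if lambda_r > target else p - step
    return min(max(p, 0.0), 1.0)
```

In the discriminator step, `apa_mix` would be applied to the real batch before computing the "real" loss.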

DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data

DigGAN augments existing GANs with a regularizer that narrows the gap between the norm of the gradient of the discriminator's prediction w.r.t. real samples and w.r.t. generated samples; it is found to significantly improve GAN training when limited data is available.
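
A hedged sketch of such a gradient-gap penalty, assuming `d` returns per-sample logits; the exact weighting in the paper may differ:

```python
import torch

def dig_penalty(d, real, fake):
    """Gradient-gap penalty (sketch): squared difference between the mean
    gradient norm of D's prediction w.r.t. real and w.r.t. generated samples."""
    real = real.detach().requires_grad_(True)
    fake = fake.detach().requires_grad_(True)
    g_real = torch.autograd.grad(d(real).sum(), real, create_graph=True)[0]
    g_fake = torch.autograd.grad(d(fake).sum(), fake, create_graph=True)[0]
    gap = g_real.flatten(1).norm(2, dim=1).mean() \
        - g_fake.flatten(1).norm(2, dim=1).mean()
    return gap ** 2
```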

Augmentation-Aware Self-Supervision for Data-Efficient GAN Training

A novel augmentation-aware self-supervised discriminator is proposed that predicts the parameter of the augmentation applied to the augmented and original data; its predictions remain distinguishable between real and generated data, since the two differ during training.
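
The idea can be sketched with an auxiliary classification head; here rotation by a random multiple of 90° stands in for the paper's more general augmentation-parameter prediction:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AugAwareD(nn.Module):
    """Tiny discriminator with an auxiliary head that predicts which
    augmentation (here: one of 4 rotations) was applied to its input."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.adv_head = nn.Linear(128, 1)   # real/fake logit
        self.aug_head = nn.Linear(128, 4)   # which rotation was applied

    def forward(self, x):
        h = self.trunk(x)
        return self.adv_head(h), self.aug_head(h)

def aug_ss_loss(d, images):
    """Rotate the batch by a random multiple of 90 degrees and ask the
    auxiliary head to recover which rotation was used."""
    k = torch.randint(0, 4, (1,)).item()
    _, aug_logits = d(torch.rot90(images, k, dims=(2, 3)))
    target = torch.full((images.size(0),), k, dtype=torch.long,
                        device=images.device)
    return F.cross_entropy(aug_logits, target)
```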

Improving GANs with A Dynamic Discriminator

It is argued that a discriminator whose capacity is adjusted on the fly can better accommodate such a time-varying task; the proposed training strategy, termed DynamicD, improves synthesis performance without incurring any additional computation cost or training objectives.
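
DynamicD's actual capacity schedule depends on the data volume; the sketch below only illustrates the mechanical trick of running a layer at a sampled fraction of its width, a simplified, assumed reading of "on-the-fly capacity adjustment":

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlimmableLinear(nn.Module):
    """A layer that can run at a sampled fraction of its width each step by
    slicing its weight, changing effective capacity without new parameters."""
    def __init__(self, in_f, out_f):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) * 0.02)
        self.bias = nn.Parameter(torch.zeros(out_f))

    def forward(self, x, width=1.0):
        n = max(1, int(self.weight.size(0) * width))
        return F.linear(x, self.weight[:n], self.bias[:n])
```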

A Comprehensive Survey on Data-Efficient GANs in Image Generation

This paper revisits and analyzes DE-GANs from the perspective of distribution optimization, and proposes a taxonomy, which classifies the existing methods into three categories: Data Selection, GANs Optimization, and Knowledge Sharing.

A Systematic Survey of Regularization and Normalization in GANs

A comprehensive survey on the regularization and normalization techniques from different perspectives of GANs training is conducted and a new taxonomy is proposed based on these objectives, which are summarized on https://github.com/iceli1007/GANs-Regularization-Review.

StudioGAN: A Taxonomy and Benchmark of GANs for Image Synthesis

The taxonomy of GAN approaches is studied, a new open-source library named StudioGAN is presented, and representative GANs, including BigGAN, StyleGAN2, and StyleGAN3 are trained in a unified training pipeline and quantify generation performance with 7 evaluation metrics.

FakeCLR: Exploring Contrastive Learning for Solving Latent Discontinuity in Data-Efficient GANs

This paper proposes FakeCLR, which only applies contrastive learning on perturbed fake samples, and devises three related training techniques: Noise-related Latent Augmentation, Diversity-aware Queue, and Forgetting Factor of Queue.
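
A minimal sketch of the core idea, contrastive learning on perturbed fake samples only; the queue-based techniques are omitted, and `embed` is an assumed projection head, e.g. on discriminator features:

```python
import torch
import torch.nn.functional as F

def info_nce(h1, h2, tau=0.1):
    """NT-Xent over a batch of paired embeddings (two views per instance)."""
    h1, h2 = F.normalize(h1, dim=1), F.normalize(h2, dim=1)
    logits = h1 @ h2.t() / tau
    labels = torch.arange(h1.size(0), device=h1.device)
    return F.cross_entropy(logits, labels)

def fakeclr_loss(g, embed, z, sigma=0.05):
    """Two noise-perturbed copies of each latent give two generated views of
    the same instance; pull their embeddings together, push other fakes apart."""
    z1 = z + sigma * torch.randn_like(z)
    z2 = z + sigma * torch.randn_like(z)
    return info_nce(embed(g(z1)), embed(g(z2)))
```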

Exploring The Effect of High-frequency Components in GANs Training

It is argued that the frequency gap is caused by the high-frequency sensitivity of the discriminator, which hinders the generator from fitting the low-frequency components that are important for learning images' content; two simple yet effective image pre-processing operations in the frequency domain are proposed for eliminating the side effects caused by high-frequency differences in GAN training.
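
The paper's specific operations aside, the kind of frequency-domain pre-processing it motivates can be sketched as an ideal low-pass filter applied identically to real and generated images:

```python
import torch

def low_pass(images, cutoff=0.25):
    """Remove high-frequency components with an ideal low-pass filter in the
    Fourier domain; cutoff is a fraction of the Nyquist frequency."""
    f = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    h, w = images.shape[-2:]
    yy = torch.linspace(-0.5, 0.5, h).view(-1, 1)
    xx = torch.linspace(-0.5, 0.5, w).view(1, -1)
    mask = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).to(images.device)
    return torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1))).real
```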

Generator Knows What Discriminator Should Learn in Unconditional GANs

The efficacy of dense supervision in unconditional generation is explored, showing that generator feature maps can be an alternative to cost-expensive semantic label maps, and a new generator-guided discriminator regularization is proposed.
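
A hedged sketch of the regularizer's shape: a feature map decoded by the discriminator is pulled toward the generator's intermediate feature map for the same fake image, assuming the decoder is built with a matching channel count:

```python
import torch
import torch.nn.functional as F

def ggdr_loss(d_feat, g_feat):
    """Match a feature map decoded by D to the generator's intermediate
    feature map for the same fake image (per-pixel cosine distance)."""
    if d_feat.shape[-2:] != g_feat.shape[-2:]:
        d_feat = F.interpolate(d_feat, size=g_feat.shape[-2:],
                               mode='bilinear', align_corners=False)
    d_feat = F.normalize(d_feat, dim=1)
    g_feat = F.normalize(g_feat.detach(), dim=1)
    return (1.0 - (d_feat * g_feat).sum(dim=1)).mean()
```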

FreGAN: Exploiting Frequency Components for Training GANs under Limited Data

This paper proposes FreGAN, which raises the model’s frequency awareness and draws more attention to producing high-frequency signals, facilitating high-quality generation in the low-data regime.
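
Frequency awareness of this kind typically starts from a wavelet decomposition; a minimal sketch extracting the high-frequency Haar sub-bands that a frequency-aware discriminator could inspect:

```python
import torch
import torch.nn.functional as F

def haar_high_freq(images):
    """One-level Haar decomposition (sketch); returns the three high-frequency
    sub-bands (LH, HL, HH), dropping the low-frequency LL band."""
    lh = torch.tensor([[-0.5, -0.5], [0.5, 0.5]])
    hl = torch.tensor([[-0.5, 0.5], [-0.5, 0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    c = images.size(1)
    kernels = torch.stack([lh, hl, hh]).unsqueeze(1)        # (3, 1, 2, 2)
    kernels = kernels.repeat(c, 1, 1, 1).to(images.device)  # depthwise
    return F.conv2d(images, kernels, stride=2, groups=c)
```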

References

SHOWING 1-10 OF 57 REFERENCES

Training Generative Adversarial Networks with Limited Data

It is demonstrated, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images, and is expected to open up new application domains for GANs.
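
This is StyleGAN2-ADA; its adaptation loop can be sketched as follows, where r_t = E[sign(D(real))] estimates overfitting and p gates differentiable augmentations applied to every discriminator input (the adjustment step size is illustrative):

```python
import torch

class ADAController:
    """Adaptive discriminator augmentation (sketch): nudge the augmentation
    probability p toward keeping the overfitting statistic at its target."""
    def __init__(self, target=0.6, adjust=5e-4):
        self.p, self.target, self.adjust = 0.0, target, adjust

    def update(self, d_logits_real):
        r_t = d_logits_real.sign().mean().item()
        self.p += self.adjust if r_t > self.target else -self.adjust
        self.p = min(max(self.p, 0.0), 1.0)
        return self.p
```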

Consistency Regularization for Generative Adversarial Networks

This work proposes a simple, effective training stabilizer based on the notion of consistency regularization, which improves state-of-the-art FID scores for conditional generation and achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA.
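
The regularizer itself is a one-liner; a sketch, where `augment` is any semantics-preserving transform:

```python
import torch

def cr_loss(d, real_images, augment):
    """Penalize squared differences between D's outputs on an image and on an
    augmented view of the same image (consistency regularization)."""
    return ((d(real_images) - d(augment(real_images))) ** 2).mean()
```

For instance, `augment=lambda x: torch.flip(x, dims=[3])` is a trivial stand-in for the paper's flips and shifts.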

Regularizing Generative Adversarial Networks under Limited Data

This work proposes a regularization approach for training robust GAN models on limited data and theoretically shows a connection between the regularized loss and an f-divergence called LeCam-Divergence, which is more robust under limited training data.
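
A sketch of the regularizer, which anchors D's real outputs to a moving average of its fake outputs and vice versa:

```python
import torch

class LeCamReg:
    """LeCam regularization (sketch): keep exponential moving averages of D's
    real/fake outputs and penalize deviation from the opposite anchor."""
    def __init__(self, decay=0.99):
        self.ema_real, self.ema_fake, self.decay = 0.0, 0.0, decay

    def __call__(self, d_real, d_fake):
        a = 1.0 - self.decay
        self.ema_real = self.decay * self.ema_real + a * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + a * d_fake.mean().item()
        return ((d_real - self.ema_fake) ** 2).mean() + \
               ((d_fake - self.ema_real) ** 2).mean()
```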

On Data Augmentation for GAN Training

This work argues that the classical DA approach could mislead the generator to learn the distribution of the augmented data, and proposes a principled framework, termed Data Augmentation Optimized for GAN (DAG), to enable the use of augmented data in GAN training to improve the learning of the original distribution.
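
DAG's key structural idea can be sketched as one shared trunk with one head per augmented distribution, so the identity transform's game still learns the original distribution un-distorted:

```python
import torch
import torch.nn as nn

class DAGHeads(nn.Module):
    """DAG (sketch): a shared discriminator trunk with a separate head per
    augmentation, plus one head for the un-augmented data."""
    def __init__(self, trunk, feat_dim, n_augments):
        super().__init__()
        self.trunk = trunk
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, 1) for _ in range(n_augments + 1))

    def forward(self, x, k=0):
        # k = 0 is the identity transform; k >= 1 indexes an augmentation.
        return self.heads[k](self.trunk(x))
```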

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
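
The truncation trick itself is easy to sketch: sample from a truncated normal by resampling out-of-range entries; lower thresholds trade variety for fidelity:

```python
import torch

def truncated_z(batch, dim, threshold=0.5):
    """Sample latents from a truncated normal by resampling any entry whose
    magnitude exceeds the threshold."""
    z = torch.randn(batch, dim)
    mask = z.abs() > threshold
    while mask.any():
        z[mask] = torch.randn(int(mask.sum()))
        mask = z.abs() > threshold
    return z
```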

Stabilizing Training of Generative Adversarial Networks through Regularization

This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks.
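
Roth et al.'s regularizer penalizes weighted squared gradient norms of the discriminator on both distributions; a sketch, where the weights follow the JS-GAN form and gamma is illustrative:

```python
import torch

def js_grad_reg(d, real, fake, gamma=2.0):
    """Weighted squared gradient norms of D's logits on real and generated
    samples (sketch of the Roth et al. JS-GAN regularizer)."""
    def term(x, weight_fn):
        x = x.detach().requires_grad_(True)
        logits = d(x).flatten()
        g = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
        grad_norm2 = g.flatten(1).pow(2).sum(dim=1)
        return (weight_fn(logits) * grad_norm2).mean()
    reg = term(real, lambda l: (1 - torch.sigmoid(l)) ** 2) \
        + term(fake, lambda l: torch.sigmoid(l) ** 2)
    return (gamma / 2) * reg
```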

On Stabilizing Generative Adversarial Training With Noise

  • S. Jenni and P. Favaro · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
This work presents a novel method and analysis to train generative adversarial networks (GAN) in a stable manner by using different filtered versions of the real and generated data distributions, and proposes to learn the generation of samples so as to challenge the discriminator in the adversarial training.

Progressive Growing of GANs for Improved Quality, Stability, and Variation

A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
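
The signature mechanism is the fade-in of each new resolution block; a sketch:

```python
import torch
import torch.nn.functional as F

def fade_in(low_res_rgb, new_block_rgb, alpha):
    """Progressive growing (sketch): while a new higher-resolution block is
    introduced, blend its output with the upsampled output of the previous
    stage; alpha ramps from 0 to 1 over training."""
    up = F.interpolate(low_res_rgb, scale_factor=2, mode='nearest')
    return alpha * new_block_rgb + (1.0 - alpha) * up
```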

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
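
Among the proposed techniques is feature matching, easily sketched: the generator matches the mean intermediate discriminator features of real data rather than directly fooling D (`features` denotes an assumed intermediate layer of D):

```python
import torch

def feature_matching_loss(features, real, fake):
    """Generator objective (sketch): match mean intermediate D-features of
    real data instead of directly maximizing D's confusion."""
    real_mean = features(real).mean(dim=0).detach()
    return (real_mean - features(fake).mean(dim=0)).pow(2).mean()
```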

Detecting Overfitting of Deep Generative Networks via Latent Recovery

This work shows that simple losses are highly effective at reconstructing images from deep generators; analyzing the statistics of reconstruction errors for training versus validation images shows that pure GAN models appear to generalize well, in contrast with models using hybrid adversarial losses, which are among the most widely applied generative methods.
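
The latent-recovery probe can be sketched as direct optimization of z against a target image; memorization shows up as a gap between recovery errors on training vs. validation images:

```python
import torch

def recover_latent(g, target, dim, steps=500, lr=0.05):
    """Latent recovery (sketch): optimize z so that G(z) reconstructs a target
    image; returns the recovered latent and final reconstruction error."""
    z = torch.zeros(1, dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = (g(z) - target).pow(2).mean()
        loss.backward()
        opt.step()
    return z.detach(), loss.item()
```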
...