Regularizing Generative Adversarial Networks under Limited Data

@inproceedings{Tseng2021RegularizingGA,
  title={Regularizing Generative Adversarial Networks under Limited Data},
  author={Hung-Yu Tseng and Lu Jiang and Ce Liu and Ming-Hsuan Yang and Weilong Yang},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={7917-7927}
}
Recent years have witnessed the rapid progress of generative adversarial networks (GANs). However, the success of the GAN models hinges on a large amount of training data. This work proposes a regularization approach for training robust GAN models on limited data. We theoretically show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data. Extensive experiments on several benchmark datasets demonstrate… 
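The regularizer described in the abstract penalizes the discriminator's predictions toward moving averages of its predictions on the opposite branch. Below is a minimal PyTorch sketch of that idea; the names (EMA, lecam_regularizer, lecam_weight) and the decay value are illustrative assumptions, not the paper's reference implementation.

```python
import torch

class EMA:
    """Tracks an exponential moving average of a scalar discriminator statistic."""
    def __init__(self, decay=0.99):
        self.decay = decay
        self.value = 0.0

    def update(self, x):
        self.value = self.decay * self.value + (1.0 - self.decay) * float(x)
        return self.value

ema_real, ema_fake = EMA(), EMA()  # running averages of D(real) and D(fake)

def lecam_regularizer(d_real, d_fake, lecam_weight=0.01):
    """Extra discriminator-loss term: pull D's real outputs toward the moving
    average of its fake outputs, and vice versa."""
    avg_real = ema_real.update(d_real.mean())
    avg_fake = ema_fake.update(d_fake.mean())
    reg = torch.mean((d_real - avg_fake) ** 2) + torch.mean((d_fake - avg_real) ** 2)
    return lecam_weight * reg
```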

GenCo: Generative Co-training for Generative Adversarial Networks with Limited Data

TLDR
This work designs GenCo, a Generative Co-training network that mitigates the discriminator over-fitting issue by introducing multiple complementary discriminators that provide diverse supervision from multiple distinctive views in training.

Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data

TLDR
This paper introduces a novel strategy called Adaptive Pseudo Augmentation (APA), which alleviates overfitting by employing the generator itself to augment the real data distribution with generated images, which deceives the discriminator adaptively.
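A rough sketch of the pseudo-augmentation step described above: with probability p, the discriminator's "real" batch is partly replaced by detached generator samples, and p is adapted from an over-fitting heuristic. The helper names and the specific adaptation rule below are assumptions for illustration, not the APA reference code.

```python
import torch

def pseudo_augment_real(real_imgs, generator, z_dim, p):
    """With probability p per sample, substitute a detached generated image
    for the real image in the discriminator's 'real' batch."""
    with torch.no_grad():
        z = torch.randn(real_imgs.size(0), z_dim, device=real_imgs.device)
        fake = generator(z)
    mask = (torch.rand(real_imgs.size(0), device=real_imgs.device) < p).view(-1, 1, 1, 1)
    return torch.where(mask, fake, real_imgs)

def update_p(p, overfit_signal, target=0.6, step=1e-3):
    """Illustrative adaptation rule (an assumption): raise p when the
    over-fitting heuristic exceeds its target, lower it otherwise."""
    p += step if overfit_signal > target else -step
    return float(min(max(p, 0.0), 1.0))
```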

A Systematic Survey of Regularization and Normalization in GANs

TLDR
A comprehensive survey of regularization and normalization techniques in GAN training is conducted from different perspectives, a new taxonomy is proposed based on their objectives, the performance of mainstream methods is compared on different datasets, and the regularization and normalization techniques frequently employed in state-of-the-art GANs are investigated.

Collapse by Conditioning: Training Class-conditional GANs with Limited Data

TLDR
A training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning and demonstrates outstanding results compared with state-of-the-art methods and established baselines.

Augmentation-Aware Self-Supervision for Data-Efficient GAN Training

TLDR
A novel augmentation-aware self-supervised discriminator is proposed that predicts the parameters of the augmentation given the augmented and original data, and that remains able to distinguish between real and generated data since the two differ during training.

Data-Efficient GAN Training Beyond (Just) Augmentations: A Lottery Ticket Perspective

TLDR
This work suggests a brand-new angle towards data-efficient GAN training: identifying the lottery ticket from the original GAN using the small training set of real images; and then focusing on training that sparse subnetwork by re-using the same set.

A Comprehensive Survey on Data-Efficient GANs in Image Generation

TLDR
This paper revisits and analyzes DE-GANs from the perspective of distribution optimization, and proposes a taxonomy, which classifies the existing methods into three categories: Data Selection, GANs Optimization, and Knowledge Sharing.

GraN-GAN: Piecewise Gradient Normalization for Generative Adversarial Networks

TLDR
Gradient Normalization (GraN) is introduced, a novel input-dependent normalization method that guarantees a piecewise K-Lipschitz constraint in the input space and demonstrates improved image generation performance across multiple datasets, GAN loss functions, and metrics.

Implicit Data Augmentation Using Feature Interpolation for Diversified Low-Shot Image Generation

TLDR
This work views the discriminator as a metric embedding of the real data manifold, which offers proper distances between real data points, and utilizes information in the feature space to develop a data-driven augmentation method.

References

SHOWING 1-10 OF 85 REFERENCES

Training Generative Adversarial Networks with Limited Data

TLDR
It is demonstrated, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images, which is expected to open up new application domains for GANs.

Stabilizing Training of Generative Adversarial Networks through Regularization

TLDR
This work proposes a new regularization approach with low computational cost that yields a stable GAN training procedure and demonstrates the effectiveness of this regularizer across several architectures trained on common benchmark image generation tasks.

Improved Training of Wasserstein GANs

TLDR
This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
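The gradient penalty described here is commonly implemented by evaluating the critic on random interpolates between real and generated samples and penalizing deviations of the input-gradient norm from 1. A short sketch follows, assuming image-shaped inputs and an illustrative gp_weight of 10.

```python
import torch

def gradient_penalty(critic, real, fake, gp_weight=10.0):
    """Penalize deviations of the critic's input-gradient norm from 1,
    evaluated on random interpolates between real and fake samples."""
    batch = real.size(0)
    eps = torch.rand(batch, 1, 1, 1, device=real.device)   # per-sample mixing weight
    interp = (eps * real + (1.0 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    grad_norm = grads.view(batch, -1).norm(2, dim=1)
    return gp_weight * ((grad_norm - 1.0) ** 2).mean()
```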

Consistency Regularization for Generative Adversarial Networks

TLDR
This work proposes a simple, effective training stabilizer based on the notion of consistency regularization, which improves state-of-the-art FID scores for conditional generation and achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA.
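Consistency regularization of this kind is typically an extra term in the discriminator loss that penalizes differences between the discriminator's output on a real image and on an augmented view of it. A minimal sketch, assuming a user-supplied augment_fn (e.g., random flips and shifts) and an illustrative weight:

```python
import torch

def consistency_regularizer(discriminator, real_imgs, augment_fn, cr_weight=10.0):
    """Penalize the discriminator when its output changes between a real image
    and an augmented view of the same image."""
    d_real = discriminator(real_imgs)
    d_aug = discriminator(augment_fn(real_imgs))
    return cr_weight * torch.mean((d_real - d_aug) ** 2)
```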

Differentiable Augmentation for Data-Efficient GAN Training

TLDR
DiffAugment is a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples, and can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms.
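The core recipe is to pass both real and fake samples through the same differentiable augmentation before the discriminator, in both the discriminator and generator updates, so gradients still flow to the generator. The sketch below uses a toy augmentation (random brightness shift and horizontal flip) as a stand-in for the color/translation/cutout policies described in the paper.

```python
import torch

def diff_augment(x):
    """Toy differentiable augmentation: random brightness shift and random
    horizontal flip, applied with ordinary tensor ops so gradients flow."""
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5) * 0.5
    flip = torch.rand(x.size(0), device=x.device) < 0.5
    return torch.where(flip.view(-1, 1, 1, 1), x.flip(-1), x)

def d_loss(discriminator, real, fake, loss_fn):
    """Both real and (already detached) fake samples are augmented before the
    discriminator sees them; the generator update augments its samples the same way."""
    return loss_fn(discriminator(diff_augment(real)),
                   discriminator(diff_augment(fake)))
```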

Mode Regularized Generative Adversarial Networks

TLDR
This work introduces several ways of regularizing the objective, which can dramatically stabilize the training of GAN models, and shows that these regularizers help distribute probability mass fairly across the modes of the data-generating distribution during the early phases of training, thus providing a unified solution to the missing-modes problem.

Freeze Discriminator: A Simple Baseline for Fine-tuning GANs

TLDR
It is shown that simple fine-tuning of GANs with frozen lower layers of the discriminator performs surprisingly well, and that this simple baseline, FreezeD, significantly outperforms previous techniques used in both unconditional and conditional GANs.
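FreezeD amounts to fine-tuning with the lower discriminator layers frozen. A minimal sketch, assuming the discriminator exposes an ordered list of blocks (the blocks attribute is an assumption for illustration):

```python
def freeze_lower_layers(discriminator, num_frozen):
    """Freeze the first num_frozen blocks of the discriminator and keep the
    remaining blocks trainable during fine-tuning."""
    for i, block in enumerate(discriminator.blocks):  # `blocks`: assumed ordered layer list
        for p in block.parameters():
            p.requires_grad_(i >= num_frozen)
```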

Label-Noise Robust Generative Adversarial Networks

TLDR
This work proposes a novel family of GANs called label-noise robust GANs (rGANs), which, by incorporating a noise transition model, can learn a clean label-conditional generative distribution even when training labels are noisy.

Towards Good Practices for Data Augmentation in GAN Training

TLDR
This work argues that the classical DA approach could mislead the generator to learn the distribution of the augmented data, and proposes a principled framework, termed Data Augmentation Optimized for GAN (DAG), to enable the use of augmented data in GAN training to improve the learning of the original distribution.

Improved Consistency Regularization for GANs

TLDR
This work shows that consistency regularization can introduce artifacts into GAN samples, proposes several modifications to the consistency regularization procedure designed to improve its performance, and yields the best known FID scores on various GAN architectures.
...