Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Few-shot Incremental Image Synthesis

@article{Kong2022ConservativeGP,
  title={Conservative Generator, Progressive Discriminator: Coordination of Adversaries in Few-shot Incremental Image Synthesis},
  author={Chaerin Kong and Nojun Kwak},
  journal={ArXiv},
  year={2022},
  volume={abs/2207.14491}
}
In this work, we study the underrepresented task of generative incremental few-shot learning. To effectively handle the inherent challenges of incremental learning and few-shot learning, we propose a novel framework named ConPro that leverages the two-player nature of GANs. Specifically, we design a conservative generator that preserves past knowledge in a parameter- and compute-efficient manner, and a progressive discriminator that learns to reason semantic distances between past and present…
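The abstract is only excerpted above, so the PyTorch sketch below is a hypothetical illustration rather than the paper's actual method: one generic way to make a generator "conservative" is to distill its behaviour on past tasks from a frozen snapshot of itself. The `G(z, task_id)` signature, the L1 distillation term, and the weighting are all assumptions.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch (not the paper's exact method): keep a frozen copy of
# the previous-task generator and penalize the current generator for drifting
# from it on past-task conditions, so new-task updates preserve old knowledge.
def conservative_generator_loss(G, G_old, z, past_task_id, adv_loss, lambda_distill=10.0):
    """adv_loss: adversarial loss on the *current* task, computed elsewhere."""
    with torch.no_grad():
        x_old = G_old(z, past_task_id)       # frozen past generator's output
    x_replay = G(z, past_task_id)            # current generator on the past task
    distill = F.l1_loss(x_replay, x_old)     # stay close to past behaviour
    return adv_loss + lambda_distill * distill
```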


References

Showing 1-10 of 22 references

CAM-GAN: Continual Adaptation Modules for Generative Adversarial Networks

The proposed feature-map-transformation approach outperforms state-of-the-art methods for continually learned GANs with substantially fewer parameters, and generates high-quality samples that can improve generative-replay-based continual learning for discriminative tasks.
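As a rough illustration of the adaptation-module idea (the layer shapes and residual form below are assumptions, not CAM-GAN's exact design), a small per-task module can transform the frozen base generator's feature maps:

```python
import torch.nn as nn

# Hedged sketch: the base generator stays frozen while a lightweight per-task
# module transforms its intermediate feature maps.
class FeatureMapAdapter(nn.Module):
    def __init__(self, channels, hidden=None):
        super().__init__()
        hidden = hidden or max(channels // 4, 8)
        self.transform = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, channels, kernel_size=1),
        )

    def forward(self, h):
        # Residual form keeps the frozen base network's features intact
        # at the start of adaptation.
        return h + self.transform(h)
```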

Smoothing the Generative Latent Space with Mixup-based Distance Learning

This work considers the situation where neither a large-scale dataset of interest nor a transferable source dataset is available, and seeks to train existing generative models with minimal overfitting and mode collapse. It proposes a latent mixup-based distance regularization on the feature spaces of both the generator and the discriminator that encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space where they reside.
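A minimal sketch of this kind of mixup-based distance regularization, with the feature extractor `feat` and the particular distance form chosen for illustration rather than taken from the paper:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch: an interpolated latent should map to an image whose
# feature distances to the two endpoint images reflect the mixing coefficient.
def mixup_distance_reg(G, feat, z1, z2):
    c = torch.rand(z1.size(0), 1, device=z1.device)
    z_mix = c * z1 + (1 - c) * z2
    f1, f2 = feat(G(z1)), feat(G(z2))
    f_mix = feat(G(z_mix))
    d1 = (f_mix - f1).flatten(1).norm(dim=1)
    d2 = (f_mix - f2).flatten(1).norm(dim=1)
    # The relative distance to endpoint 1 should track (1 - c).
    rel = d1 / (d1 + d2 + 1e-8)
    return F.mse_loss(rel, (1 - c).squeeze(1))
```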

Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis

This paper proposes a lightweight GAN structure that attains superior quality at 1024 × 1024 resolution, and shows superior performance compared to the state-of-the-art StyleGAN2 when data and computing budgets are limited.
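One of the paper's lightweight components is a skip-layer excitation block; the sketch below follows that spirit, though the exact layer sizes are illustrative:

```python
import torch.nn as nn

# Sketch of skip-layer excitation (SLE): a low-resolution feature map produces
# channel-wise gates for a much higher-resolution one, giving a cheap
# long-range skip connection.
class SkipLayerExcitation(nn.Module):
    def __init__(self, ch_low, ch_high):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),
            nn.Conv2d(ch_low, ch_high, kernel_size=4),   # 4x4 -> 1x1
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(ch_high, ch_high, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat_low, feat_high):
        return feat_high * self.gate(feat_low)  # (B, ch_high, 1, 1) broadcasts
```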

Regularizing Generative Adversarial Networks under Limited Data

This work proposes a regularization approach for training robust GAN models on limited data and theoretically shows a connection between the regularized loss and an f-divergence called the LeCam divergence, which is more robust under limited training data.
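A minimal sketch of a LeCam-style regularizer (the anchor update schedule and weighting are simplified here): the discriminator's real and fake outputs are penalized for drifting from moving averages of its past predictions on the opposite class:

```python
import torch

# Simplified sketch: exponential moving averages of past real/fake predictions
# anchor the discriminator's current outputs, damping overfitting on few samples.
class LeCamReg:
    def __init__(self, decay=0.99):
        self.decay = decay
        self.ema_real = 0.0
        self.ema_fake = 0.0

    def __call__(self, d_real, d_fake):
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()
        # Real outputs are anchored to the fake EMA and vice versa.
        return ((d_real - self.ema_fake) ** 2).mean() + ((d_fake - self.ema_real) ** 2).mean()
```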

GenCo: Generative Co-training on Data-Limited Image Generation

This work designs GenCo, a Generative Co-training network that mitigates the discriminator overfitting issue by introducing multiple complementary discriminators that provide diverse supervision from multiple distinct views during training.
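A hedged sketch of the co-training idea, with the view functions and loss form left abstract as assumptions:

```python
import torch

# Illustrative sketch: several discriminators each see a different "view" of
# the same batch, and their losses are averaged so no single discriminator
# can overfit the few available samples.
def co_training_d_loss(discriminators, views, x_real, x_fake, loss_fn):
    losses = []
    for D, view in zip(discriminators, views):
        losses.append(loss_fn(D(view(x_real)), D(view(x_fake))))
    return torch.stack(losses).mean()
```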

Prototypical Networks for Few-shot Learning

This work proposes Prototypical Networks for few-shot classification, and provides an analysis showing that some simple design decisions can yield substantial improvements over recent approaches involving complicated architectural choices and meta-learning.
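The core computation is compact enough to sketch directly: prototypes are mean support embeddings, and queries are scored by negative squared Euclidean distance (a minimal sketch, with the encoder left abstract):

```python
import torch

# Prototypical Networks in a few lines: each class prototype is the mean
# embedding of its support examples; queries are classified by a softmax
# over negative squared distances to the prototypes.
def proto_logits(encoder, support_x, support_y, query_x, n_classes):
    z_s, z_q = encoder(support_x), encoder(query_x)
    protos = torch.stack([z_s[support_y == c].mean(0) for c in range(n_classes)])
    return -torch.cdist(z_q, protos) ** 2  # feed to F.cross_entropy with query labels
```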

Few-Shot Object Detection via Feature Reweighting

This work develops a few-shot object detector that can learn to detect novel objects from only a few annotated examples, using a meta feature learner and a reweighting module within a one-stage detection architecture.
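A simplified sketch of the reweighting idea (the module shapes are illustrative assumptions): support examples produce channel-wise weights that modulate the shared meta features:

```python
import torch.nn as nn

# Sketch: a small network maps a class's support example to a channel-wise
# weight vector that reweights the shared meta features before the
# detection head is applied.
class Reweighter(nn.Module):
    def __init__(self, in_ch, feat_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # -> (B, feat_ch, 1, 1) class weights
        )

    def forward(self, meta_feat, support_img):
        w = self.net(support_img)   # per-class channel weights
        return meta_feat * w        # channel-wise reweighted features
```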

Adversarial Generation of Continuous Images

This paper proposes two novel architectural techniques for building INR-based image decoders: factorized multiplicative modulation and multi-scale INRs, and uses them to build a state-of-the-art continuous image GAN.
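A heavily simplified sketch of an INR-based decoder, omitting the paper's factorized multiplicative modulation and multi-scale structure (all layer choices below are assumptions):

```python
import torch
import torch.nn as nn

# Simplified INR-style decoder: an MLP maps pixel coordinates, modulated by a
# latent code, to RGB values, so one network defines a continuous image.
class INRDecoder(nn.Module):
    def __init__(self, z_dim=128, hidden=256):
        super().__init__()
        self.mod = nn.Linear(z_dim, hidden)   # latent -> multiplicative gains
        self.inp = nn.Linear(2, hidden)       # (x, y) coordinates
        self.out = nn.Linear(hidden, 3)       # RGB

    def forward(self, z, coords):             # coords: (B, N, 2) in [-1, 1]
        h = torch.sin(self.inp(coords))       # periodic activation
        h = h * self.mod(z).unsqueeze(1)      # modulate features per sample
        return torch.tanh(self.out(h))        # (B, N, 3)
```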

Supervised Contrastive Learning

This work proposes a novel training methodology that consistently outperforms cross-entropy on supervised learning tasks across different architectures and data augmentations, modifying the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting.
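A minimal sketch of a supervised contrastive loss, using a single view per sample for brevity (the paper uses multiple augmented views per image):

```python
import torch

# Minimal sketch: every other sample with the same label acts as a positive,
# everything else as a negative. Assumes z is an L2-normalized batch of
# projection embeddings.
def supcon_loss(z, labels, tau=0.1):
    n = z.size(0)
    sim = z @ z.t() / tau
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))        # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    mean_pos = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -mean_pos.mean()
```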

A Simple Framework for Contrastive Learning of Visual Representations

This work shows that the composition of data augmentations plays a critical role in defining effective predictive tasks, that introducing a learnable nonlinear transformation between the representation and the contrastive loss substantially improves the quality of the learned representations, and that contrastive learning benefits from larger batch sizes and more training steps than supervised learning.
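A minimal sketch of the NT-Xent loss this framework popularized, assuming z1 and z2 are projections of two augmented views of the same batch:

```python
import torch
import torch.nn.functional as F

# Minimal NT-Xent sketch: each view's positive is its twin view of the same
# image; the other 2N - 2 samples in the batch serve as negatives.
def nt_xent(z1, z2, tau=0.5):
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # (2N, d)
    sim = z @ z.t() / tau
    n = z1.size(0)
    sim.fill_diagonal_(float('-inf'))             # remove self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```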