Challenges in leveraging GANs for few-shot data augmentation

@article{Beckham2022ChallengesIL,
  title={Challenges in leveraging GANs for few-shot data augmentation},
  author={Christopher Beckham and Issam H. Laradji and Pau Rodr{\'i}guez L{\'o}pez and David V{\'a}zquez and Derek Nowrouzezahrai and Christopher Joseph Pal},
  journal={ArXiv},
  year={2022},
  volume={abs/2203.16662}
}
In this paper, we explore the use of GAN-based few-shot data augmentation as a method to improve few-shot classification performance. We examine how a GAN can be fine-tuned for such a task (including in a class-incremental manner), and conduct a rigorous empirical investigation into how well these models can improve few-shot classification. We identify issues related to the difficulty of training such generative models under a purely supervised regime with very…
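
Concretely, the augmentation setup studied here amounts to: fine-tune or reuse a class-conditional generator, sample synthetic images for each class that has only a few real examples, and train the downstream classifier on the union of real and generated data. The sketch below is only an illustration of that pipeline under assumed interfaces; the conditional generator `G(z, y)`, the classifier, and all sizes are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of GAN-based few-shot data augmentation.
# Assumes a pre-trained class-conditional generator G(z, y) and a small
# labelled support set; G, the classifier, and all sizes are hypothetical.
import torch
import torch.nn.functional as F


def augment_support_set(G, x_support, y_support, n_synth_per_class=16, z_dim=128):
    """Concatenate the real few-shot examples with GAN-sampled ones."""
    G.eval()
    fake_x, fake_y = [], []
    with torch.no_grad():
        for cls in y_support.unique():
            z = torch.randn(n_synth_per_class, z_dim)
            y = torch.full((n_synth_per_class,), int(cls), dtype=torch.long)
            fake_x.append(G(z, y))  # class-conditional samples
            fake_y.append(y)
    x_aug = torch.cat([x_support] + fake_x, dim=0)
    y_aug = torch.cat([y_support] + fake_y, dim=0)
    return x_aug, y_aug


def train_few_shot_classifier(clf, G, x_support, y_support, epochs=50, lr=1e-3):
    """Fit a classifier on the augmented (real + synthetic) support set."""
    opt = torch.optim.Adam(clf.parameters(), lr=lr)
    for _ in range(epochs):
        x_aug, y_aug = augment_support_set(G, x_support, y_support)
        loss = F.cross_entropy(clf(x_aug), y_aug)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return clf
```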

References

Showing 1–10 of 55 references
Optimization as a Model for Few-Shot Learning
MetaGAN: An Adversarial Approach to Few-Shot Learning
TLDR
This paper proposes a conceptually simple and general framework called MetaGAN for few-shot learning problems, and shows that with this MetaGAN framework, supervised few-shot learning models can be extended to naturally cope with unlabeled data.
DAWSON: A Domain Adaptive Few Shot Generation Framework
TLDR
This work proposes DAWSON, a Domain Adaptive Few-Shot Generation Framework that supports a broad family of meta-learning algorithms and various GANs with architectural variants, and proposes MUSIC MATINEE, which is the first few-shot music generation model.
Few-Shot Adaptation of Generative Adversarial Networks
TLDR
This paper proposes a simple and effective method, Few-Shot GAN (FSGAN), for adapting GANs in few-shot settings (fewer than 100 images), which repurposes component analysis techniques and learns to adapt the singular values of the pre-trained weights while freezing the corresponding singular vectors.
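
The singular-value adaptation summarized above lends itself to a short sketch: take a pre-trained weight matrix, factor it with an SVD, freeze the singular vectors, and expose only the singular values as trainable parameters. The PyTorch module below is a minimal illustration of that idea for a single linear layer, not the FSGAN implementation; the class name and constructor arguments are assumptions.

```python
# Minimal illustration of singular-value adaptation for a single pre-trained
# linear layer (not the FSGAN implementation; names are assumptions).
import torch
import torch.nn as nn


class SingularValueAdaptedLinear(nn.Module):
    """Freeze the singular vectors of a pre-trained weight matrix
    W = U diag(s) V^T and fine-tune only the singular values s."""

    def __init__(self, pretrained_weight, pretrained_bias=None):
        super().__init__()
        U, s, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)      # frozen left singular vectors
        self.register_buffer("Vh", Vh)    # frozen right singular vectors
        self.s = nn.Parameter(s.clone())  # the only trainable tensor
        bias = None if pretrained_bias is None else pretrained_bias.clone()
        self.register_buffer("bias", bias)

    def forward(self, x):
        W = self.U @ torch.diag(self.s) @ self.Vh  # reassembled weight
        return nn.functional.linear(x, W, self.bias)
```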
Generalizing from a Few Examples: A Survey on Few-Shot Learning
TLDR
A thorough survey of Few-Shot Learning (FSL) that categorizes FSL methods from three perspectives: data, which uses prior knowledge to augment the supervised experience; model, which uses prior knowledge to reduce the size of the hypothesis space; and algorithm, which uses prior knowledge to alter the search for the best hypothesis in the given hypothesis space.
Learning to Compare: Relation Network for Few-Shot Learning
TLDR
A conceptually simple, flexible, and general framework for few-shot learning, where a classifier must learn to recognise new classes given only a few examples of each, and which is easily extended to zero-shot learning.
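
The "learning to compare" recipe can be sketched as: embed support and query images with a shared encoder, then let a small learned relation module score every (query, class) pair. The module below is a simplified placeholder, assuming flat feature vectors, mean class embeddings, and an MLP relation head rather than the paper's convolutional modules.

```python
# Simplified placeholder for the Relation Network idea: a learned module
# scores how well a query embedding matches each class embedding.
# The encoder and relation head below are assumptions, not the paper's CNNs.
import torch
import torch.nn as nn


class RelationNet(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder                  # shared embedding module
        self.relation = nn.Sequential(          # learned similarity in [0, 1]
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, queries, support, support_labels, n_way):
        s_emb = self.encoder(support)
        # One embedding per class (mean over its support examples).
        protos = torch.stack([s_emb[support_labels == c].mean(dim=0)
                              for c in range(n_way)])              # (n_way, d)
        q_emb = self.encoder(queries)                              # (n_q, d)
        # Concatenate every (query, class) pair and score it.
        pairs = torch.cat([q_emb.unsqueeze(1).expand(-1, n_way, -1),
                           protos.unsqueeze(0).expand(q_emb.size(0), -1, -1)],
                          dim=-1)                                  # (n_q, n_way, 2d)
        return self.relation(pairs).squeeze(-1)                    # (n_q, n_way)
```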
FIGR: Few-shot Image Generation with Reptile
TLDR
Initial results show that the proposed FIGR model can generalize to more advanced concepts from as few as 8 samples from a previously unseen class of images and as few as 10 training steps through those 8 images.
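
The Reptile-style training hinted at here reduces to a simple outer-loop rule: adapt a copy of the model on one few-shot task, then nudge the shared initialization toward the adapted weights, theta <- theta + lr * (theta_task - theta). The function below sketches only that outer step; `inner_train`, the step counts, and the learning rate are assumptions rather than FIGR's actual code.

```python
# Sketch of a Reptile-style outer update as used for few-shot generation.
# `inner_train` (task-specific GAN updates) and the hyperparameters are
# assumptions; this is not FIGR's actual training code.
import copy
import torch


def reptile_outer_step(model, task_batch, inner_train, inner_steps=10, outer_lr=0.1):
    """Adapt a copy of `model` on one task, then move the shared
    initialization toward the adapted weights."""
    adapted = copy.deepcopy(model)
    inner_train(adapted, task_batch, steps=inner_steps)
    with torch.no_grad():
        for p, p_task in zip(model.parameters(), adapted.parameters()):
            p.add_(outer_lr * (p_task - p))  # theta += lr * (theta_task - theta)
```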
Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation
TLDR
The Augmentation-Interpolative AutoEncoders synthesize realistic images of novel objects from only a few reference images, and outperform both prior interpolative models and supervised few-shot image generators.
Few-shot Generative Modelling with Generative Matching Networks
TLDR
This work develops a new generative model called the Generative Matching Network, inspired by the recently proposed matching networks for one-shot learning in discriminative tasks, which can instantly learn new concepts that were not available in the training data but conform to a similar generative process.
A Meta-Learning Framework for Generalized Zero-Shot Learning
TLDR
This paper proposes a meta-learning based generative model that integrates model-agnostic meta-learning with a Wasserstein GAN (WGAN) to handle (i) and (ii), and uses a novel task distribution to handle (iii).