Corpus ID: 57721163

FIGR: Few-shot Image Generation with Reptile

@article{Cloutre2019FIGRFI,
  title={FIGR: Few-shot Image Generation with Reptile},
  author={Louis Clou{\^a}tre and Marc Demers},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.02199}
}
Generative Adversarial Networks (GAN) boast impressive capacity to generate realistic images. However, like much of the field of deep learning, they require an inordinate amount of data to produce results, thereby limiting their usefulness in generating novelty. In the same vein, recent advances in meta-learning have opened the door to many few-shot learning applications. In the present work, we propose Few-shot Image Generation using Reptile (FIGR), a GAN meta-trained with Reptile. Our model…
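The core idea in the abstract is the Reptile meta-update applied to a generator's weights: adapt a copy of the weights to one few-shot task with plain SGD, then nudge the initialization toward the adapted weights. A minimal sketch of that outer loop, using a toy quadratic task family invented here for illustration (the hyperparameters and task definition are not from the paper, which meta-trains a full GAN):

```python
import random

def sgd_steps(weights, grad_fn, lr=0.02, k=5):
    """Inner loop: run k plain SGD steps on one task; return adapted weights."""
    w = list(weights)
    for _ in range(k):
        g = grad_fn(w)
        w = [wi - lr * gi for wi, gi in zip(w, g)]
    return w

def reptile(init, sample_task, meta_lr=0.1, meta_steps=200):
    """Reptile outer loop: theta <- theta + eps * (theta_adapted - theta)."""
    w = list(init)
    for _ in range(meta_steps):
        grad_fn = sample_task()          # draw one few-shot task
        adapted = sgd_steps(w, grad_fn)  # inner-loop adaptation on that task
        w = [wi + meta_lr * (ai - wi) for wi, ai in zip(w, adapted)]
    return w

# Toy task family: quadratic losses (w - c)^2 with task-specific centers c,
# so the meta-learned init should land near the mean center (1, -1).
def sample_task():
    c = [random.gauss(1.0, 0.1), random.gauss(-1.0, 0.1)]
    return lambda w: [2.0 * (wi - ci) for wi, ci in zip(w, c)]

random.seed(0)
meta_init = reptile([0.0, 0.0], sample_task)
```

In FIGR this update is applied to the GAN's generator and discriminator parameters rather than to toy scalars, but the outer-loop arithmetic is the same.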
F2GAN: Fusing-and-Filling GAN for Few-shot Image Generation
A Fusing-and-Filling Generative Adversarial Network (F2GAN) is proposed to generate realistic and diverse images for a new category from only a few images; the diversity of the generated images is ensured by a mode-seeking loss and an interpolation regression loss.
LoFGAN: Fusing Local Representations for Few-shot Image Generation
  • Zheng Gu, Wenbin Li, Jing Huo, Lei Wang, Yang Gao
Given only a few available images for a novel unseen category, few-shot image generation aims to generate more data for this category. Previous works attempt to globally fuse these images by using…
MatchingGAN: Matching-Based Few-Shot Image Generation
This work proposes a matching-based Generative Adversarial Network for few-shot generation, which comprises a matching generator and a matching discriminator that extends the conventional GAN discriminator by matching the feature of a generated image with the fused feature of the conditional images.
Harnessing GAN with Metric Learning for One-Shot Generation on a Fine-Grained Category
Metric learning with a triplet loss is applied to the bottleneck layer of DAGAN to regularize a one-shot generation method on a fine-grained category, which represents a subclass of a category, typically with diverse examples.
Semi Few-Shot Attribute Translation
This work empirically demonstrates the potential of training a GAN for few-shot image-to-image translation on hair-color attribute synthesis tasks, opening the door to further research on generative transfer learning.
MetaCGAN: A Novel GAN Model for Generating High Quality and Diversity Images with Few Training Data
Experimental results on the MNIST, Fashion-MNIST, and CelebA datasets demonstrate the superiority of MetaCGAN over baseline models; both qualitative and quantitative results show that the MetaNet module can learn prior knowledge and transfer it from the base classes to the new classes, which is beneficial for generating high-quality and diverse images for new classes with few images.
DAWSON: A Domain Adaptive Few Shot Generation Framework
This work proposes DAWSON, a Domain Adaptive Few-Shot Generation Framework that supports a broad family of meta-learning algorithms and various GANs with architectural variants, and also proposes MUSIC MATINEE, the first few-shot music generation model.
VCE: Variational Convertor-Encoder for One-Shot Generalization
The proposed framework combines and improves the conditional variational auto-encoder (CVAE) and the introspective VAE, aiming to transform graphics rather than generate them from scratch in the one-shot generative process.
Augmentation-Interpolative AutoEncoders for Unsupervised Few-Shot Image Generation
The Augmentation-Interpolative AutoEncoders synthesize realistic images of novel objects from only a few reference images, and outperform both prior interpolative models and supervised few-shot image generators.
Comparison of Deep Generative Models for the Generation of Handwritten Character Images
Using the proposed model and meta-learning method, it is possible to produce not only images similar to the ones in the training set but also novel images that belong to a class which is seen for the first time.

References

Showing 1–10 of 18 references
Improved Training of Wasserstein GANs
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
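The gradient penalty summarized above replaces WGAN's weight clipping with a soft constraint on the critic's gradient norm. In the notation of the WGAN-GP paper, the critic objective is

```latex
L = \underset{\tilde{x} \sim \mathbb{P}_g}{\mathbb{E}}\big[D(\tilde{x})\big]
  - \underset{x \sim \mathbb{P}_r}{\mathbb{E}}\big[D(x)\big]
  + \lambda \, \underset{\hat{x} \sim \mathbb{P}_{\hat{x}}}{\mathbb{E}}
    \Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]
```

where $\hat{x}$ is sampled uniformly along straight lines between pairs of real and generated samples, and the paper uses $\lambda = 10$ as its default penalty coefficient.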
Few-shot Generative Modelling with Generative Matching Networks
This work develops a new generative model, the Generative Matching Network, inspired by the recently proposed matching networks for one-shot learning in discriminative tasks; it can instantly learn new concepts that were not available in the training data but conform to a similar generative process.
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs), which have certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
Matching Networks for One Shot Learning
This work employs ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories to learn a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types.
One-Shot Generalization in Deep Generative Models
New deep generative models are developed that combine the representational power of deep learning with the inferential power of Bayesian reasoning; they are able to generate compelling and diverse samples, providing an important class of general-purpose models for one-shot machine learning.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified linear unit, and derives a robust initialization method that particularly considers the rectifier nonlinearities.
Wasserstein Generative Adversarial Networks
This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
One shot learning of simple visual concepts
A generative model of how characters are composed from strokes is introduced, in which knowledge from previous characters helps to infer the latent strokes in novel characters, using a massive new dataset of handwritten characters.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.