Corpus ID: 227276686

Few-shot Image Generation with Elastic Weight Consolidation

@article{Li2020FewshotIG,
  title={Few-shot Image Generation with Elastic Weight Consolidation},
  author={Yijun Li and Richard Zhang and Jingwan Lu and Eli Shechtman},
  journal={arXiv preprint arXiv:2012.02780},
  year={2020}
}
Few-shot image generation seeks to generate more data of a given domain, with only a few available training examples. As it is unreasonable to expect to fully infer the distribution from just a few observations (e.g., emojis), we seek to leverage a large, related source domain as pretraining (e.g., human faces). Thus, we wish to preserve the diversity of the source domain, while adapting to the appearance of the target. We adapt a pretrained model, without introducing any additional parameters… 
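The core mechanism of Elastic Weight Consolidation, which the paper adapts from continual learning, can be sketched as a Fisher-weighted quadratic penalty that anchors important pretrained weights while letting unimportant ones adapt. A minimal illustration follows; the variable names and toy values are illustrative, not the paper's implementation.

```python
import numpy as np

def ewc_penalty(theta, theta_src, fisher, lam=1.0):
    """Elastic Weight Consolidation penalty: a Fisher-weighted
    quadratic pull toward the pretrained (source-domain) weights.
    Weights with high Fisher information (important for the source
    domain) are anchored strongly; unimportant weights move freely."""
    return float(lam * np.sum(fisher * (theta - theta_src) ** 2))

# Toy example: two parameters, the first important for the source domain.
theta_src = np.array([1.0, -2.0])   # pretrained weights
fisher    = np.array([10.0, 0.1])   # diagonal Fisher estimate
theta     = np.array([1.5, 0.0])    # weights after some adaptation steps

loss = ewc_penalty(theta, theta_src, fisher, lam=0.5)
# Moving the "important" first weight by 0.5 costs far more than
# moving the second weight by 2.0: 0.5 * (10*0.25 + 0.1*4.0) = 1.45
print(loss)
```

In training, this penalty would be added to the usual adversarial loss, so the generator preserves source-domain diversity while its less-constrained weights shift toward the target appearance.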

Citations

C³: Contrastive Learning for Cross-domain Correspondence in Few-shot Image Generation
TLDR
This paper proposes a simple yet effective method, C³ (Contrastive Learning for Cross-domain Correspondence), which constructs positive and negative pairs of images from two different domains and makes the generative model learn the cross-domain correspondence explicitly via contrastive learning.
Few-shot Image Generation via Cross-domain Correspondence
TLDR
This work seeks to utilize a large source domain for pretraining and transfer the diversity information from source to target and proposes to preserve the relative similarities and differences between instances in the source via a novel cross-domain distance consistency loss.
One-Shot Generative Domain Adaptation
TLDR
This work introduces an attribute adaptor into the generator yet freezes its original parameters, allowing it to reuse the prior knowledge to the greatest extent and hence maintain synthesis quality and diversity, substantially surpassing state-of-the-art alternatives, especially in terms of synthesis diversity.
LoFGAN: Fusing Local Representations for Few-shot Image Generation
TLDR
This work proposes a novel Local-Fusion Generative Adversarial Network (LoFGAN) for few-shot image generation, which matches local representations between the base and reference images based on semantic similarities and replaces the local features with the closest related local features.
A Closer Look at Few-shot Image Generation
TLDR
This work proposes a framework to analyze existing methods during the adaptation of pretrained GANs on small target data, and discovers that while some methods succeed, others fail.
Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment
TLDR
This work proposes a relaxed spatial structural alignment (RSSA) method to calibrate the target generative models during adaptation and designs a cross-domain spatial structural consistency loss comprising self-correlation and disturbance-correlation consistency terms.
GAN Cocktail: mixing GANs without dataset access
TLDR
This work tackles the problem of model merging under two constraints that often arise in the real world: no access to the original training data and no increase in the size of the neural network, proposing a novel two-stage solution.
ManiFest: Manifold Deformation for Few-shot Image Translation
TLDR
ManiFest is a framework for few-shot image translation that learns a context-aware representation of a target domain from a few images only, outperforming the state-of-the-art on all metrics and in both the general and exemplar-based scenarios.
Implicit Data Augmentation Using Feature Interpolation for Diversified Low-Shot Image Generation
TLDR
This work views the discriminator as a metric embedding of the real data manifold, which offers proper distances between real data points, and utilizes information in the feature space to develop a data-driven augmentation method.
Smoothing the Generative Latent Space with Mixup-based Distance Learning
TLDR
This work considers the situation where neither a large-scale dataset of interest nor a transferable source dataset is available, seeks to train existing generative models with minimal overfitting and mode collapse, and proposes a latent mixup-based distance regularization on the feature spaces of both the generator and the counterpart discriminator that encourages the two players to reason not only about the scarce observed data points but also about the relative distances in the feature space where they reside.
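The distance-regularization idea above can be sketched as follows: the feature of a sample generated from a mixed latent code should sit between the anchor features, with similarity proportional to the mixup coefficients. The loss form and names below are an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mixup_distance_loss(feat_mix, anchor_feats, coeffs):
    """Mixup-based distance regularization (sketch): distances from
    the mixed sample's feature to each anchor feature are turned into
    a similarity distribution, which is matched to the mixup
    coefficients via cross-entropy. Closer anchors should receive
    larger coefficients."""
    d = np.array([np.linalg.norm(feat_mix - f) for f in anchor_feats])
    p = softmax(-d)  # smaller distance -> higher similarity
    return float(-np.sum(coeffs * np.log(p + 1e-12)))

# Toy check in a 2-D feature space with an even 50/50 mix:
anchors = [np.array([0.0, 0.0]), np.array([4.0, 0.0])]
coeffs = np.array([0.5, 0.5])
loss_mid = mixup_distance_loss(np.array([2.0, 0.0]), anchors, coeffs)
loss_end = mixup_distance_loss(np.array([0.0, 0.0]), anchors, coeffs)
# The loss prefers the feature that sits midway between the anchors,
# discouraging mode collapse onto either training point.
print(loss_mid < loss_end)
```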

References

Showing 1–10 of 52 references
Image Generation From Small Datasets via Batch Statistics Adaptation
TLDR
This work proposes a new method for transferring prior knowledge of the pre-trained generator, which is trained with a large dataset, to a small dataset in a different domain, and can generate higher quality images compared to previous methods without collapsing.
Transferring GANs: generating images from limited data
TLDR
The results show that using knowledge from pretrained networks can shorten convergence time and significantly improve the quality of the generated images, especially when the target data is limited; the results also suggest that density may be more important than diversity.
Few-Shot Unsupervised Image-to-Image Translation
TLDR
This model achieves this few-shot generation capability by coupling an adversarial training scheme with a novel network design, and verifies the effectiveness of the proposed framework through extensive experimental validation and comparisons to several baseline methods on benchmark datasets.
MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images
TLDR
This work proposes a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs, and shows that it effectively transfers knowledge to domains with few target images, outperforming existing methods.
Few-shot Autoregressive Density Estimation: Towards Learning to Learn Distributions
TLDR
This paper shows how 1) neural attention and 2) meta learning techniques can be used in combination with autoregressive models to enable effective few-shot density estimation on the Omniglot dataset.
Unsupervised Cross-Domain Image Generation
TLDR
The Domain Transfer Network (DTN) is presented, which employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
StarGAN v2: Diverse Image Synthesis for Multiple Domains
TLDR
StarGAN v2, a single framework that addresses both the limited diversity of existing image-to-image translation models and the need to train multiple models for all domains, is proposed and shows significantly improved results over the baselines.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
Few-Shot Adversarial Learning of Realistic Neural Talking Head Models
TLDR
This work presents a system that performs lengthy meta-learning on a large dataset of videos, and is able to frame few- and one-shot learning of neural talking head models of previously unseen people as adversarial training problems with high capacity generators and discriminators.