Efficient Conditional GAN Transfer with Knowledge Propagation across Classes

@inproceedings{Shahbazi2021EfficientCG,
  title={Efficient Conditional GAN Transfer with Knowledge Propagation across Classes},
  author={Mohamad Shahbazi and Zhiwu Huang and Danda Pani Paudel and Ajad Chhatkuli and Luc Van Gool},
  booktitle={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2021},
  pages={12162-12171}
}
Generative adversarial networks (GANs) have shown impressive results in both unconditional and conditional image generation. Recent literature shows that a GAN pre-trained on a different dataset can be transferred to improve image generation from small target data. The same, however, has not been well studied for conditional GANs (cGANs), which offer new opportunities for knowledge transfer compared to the unconditional setup. In particular, the new classes may borrow… 
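The abstract's central idea — new classes borrowing knowledge from related pre-trained classes — can be pictured with a minimal sketch. This is an illustrative toy, not the paper's actual algorithm (the excerpt truncates before the method is described); the function name, the similarity weights, and the plain-list embeddings are all hypothetical:

```python
# Hypothetical sketch: initialize a new class's conditioning embedding as a
# similarity-weighted mixture of pretrained class embeddings, so the new
# class "borrows" knowledge from related old classes.

def init_new_class_embedding(pretrained, similarities):
    """pretrained: {class_name: embedding vector (list of floats)}
    similarities: {class_name: non-negative relatedness weight}"""
    total = sum(similarities.values())
    dim = len(next(iter(pretrained.values())))
    new_emb = [0.0] * dim
    for cls, emb in pretrained.items():
        w = similarities.get(cls, 0.0) / total
        new_emb = [a + w * b for a, b in zip(new_emb, emb)]
    return new_emb

pretrained = {"cat": [1.0, 0.0], "dog": [0.0, 1.0]}
emb = init_new_class_embedding(pretrained, {"cat": 3.0, "dog": 1.0})
# -> [0.75, 0.25]
```

In a real cGAN the mixing weights would typically be learned jointly with fine-tuning rather than fixed by hand.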


Generative Adversarial Network and Its Application in Energy Internet

  • Zeqing Xiao
  • Computer Science
    Mathematical Problems in Engineering
  • 2022
TLDR
The framework, advantages, disadvantages, and improvements of the classic GAN are introduced, and possible applications of GANs in the Energy Internet (EI) are discussed.

A Survey of Learning on Small Data

TLDR
This survey follows agnostic active sampling under a PAC (Probably Approximately Correct) framework to analyze the generalization error and label complexity of learning on small data in both supervised and unsupervised fashions.

Few-Shot Incremental Learning for Label-to-Image Translation

TLDR
A few-shot incremental learning method for label-to-image translation is presented that outperforms existing related methods in most cases and achieves zero forgetting; a modulation transfer strategy for better initialization is also proposed.

Learning to Memorize Feature Hallucination for One-Shot Image Generation

TLDR
A novel model is proposed to explicitly learn and memorize reusable features that help hallucinate novel-category images; it effectively boosts one-shot generation (OSG) performance and can generate compelling and diverse samples.

Egocentric Early Action Prediction via Adversarial Knowledge Distillation

TLDR
This paper proposes a novel multi-modal adversarial knowledge distillation framework that seamlessly integrates adversarial learning with latent and discriminative knowledge regularizations, encouraging the learned representations of the partial video to be more informative and discriminative towards action prediction.

A Continual Deepfake Detection Benchmark: Dataset, Methods, and Essentials

TLDR
A continual deepfake detection benchmark (CDDB) over a new collection of deepfakes from both known and unknown generative models is suggested, which is clearly more challenging than existing benchmarks and offers a suitable evaluation avenue for future research.

Arbitrary-Scale Image Synthesis

TLDR
This work proposes the design of scale-consistent positional encodings invariant to the generator's layer transformations, enabling the generation of arbitrary-scale images even at scales unseen during training.

Collapse by Conditioning: Training Class-conditional GANs with Limited Data

TLDR
A training strategy for conditional GANs (cGANs) that effectively prevents the observed mode-collapse by leveraging unconditional learning and demonstrates outstanding results compared with state-of-the-art methods and established baselines.

CSG0: Continual Urban Scene Generation with Zero Forgetting

TLDR
A novel framework is introduced that not only enables seamless knowledge transfer in continual training but also guarantees zero forgetting with a small overhead cost, obtaining better synthesis quality compared with the brute-force solution that trains one full model per domain.

Transferring Unconditional to Conditional GANs with Hyper-Modulation

TLDR
This paper proposes hyper-modulated generative networks that allow for shared and complementary supervision, and introduces a self-initialization procedure that does not require any real data to initialize the hypernetwork parameters.

References

Showing 1-10 of 48 references

Transferring GANs: generating images from limited data

TLDR
The results show that using knowledge from pretrained networks can shorten the convergence time and can significantly improve the quality of the generated images, especially when the target data is limited; it is also suggested that density may be more important than diversity.

MineGAN: Effective Knowledge Transfer From GANs to Target Domains With Few Images

TLDR
This work proposes a novel knowledge transfer method for generative models based on mining the knowledge that is most beneficial to a specific target domain, either from a single or multiple pretrained GANs, and shows that it effectively transfers knowledge to domains with few target images, outperforming existing methods.

On Leveraging Pretrained GANs for Limited-Data Generation

TLDR
It is revealed that low-level filters of both the generator and discriminator of pretrained GANs can be transferred to facilitate generation in a perceptually-distinct target domain with limited training data.

Self-Attention Generative Adversarial Networks

TLDR
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.

Differentiable Augmentation for Data-Efficient GAN Training

TLDR
DiffAugment is a simple method that improves the data efficiency of GANs by imposing various types of differentiable augmentations on both real and fake samples, and can generate high-fidelity images using only 100 images without pre-training, while being on par with existing transfer learning algorithms.
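The core of the DiffAugment idea summarized above — applying the same differentiable augmentation to both real and fake samples before the discriminator — can be sketched as follows. This is a hedged toy illustration in plain Python, not the official implementation; the brightness-shift transform, `diff_augment`, and `discriminator_step` are all illustrative stand-ins:

```python
import random

def diff_augment(batch, brightness=0.2, seed=None):
    """Toy stand-in for a differentiable augmentation: a random brightness
    shift applied to every pixel of every image in the batch. In a real
    framework this would be a differentiable tensor op so gradients can
    flow back to the generator through the augmentation."""
    rng = random.Random(seed)
    shift = rng.uniform(-brightness, brightness)
    return [[pix + shift for pix in img] for img in batch]

def discriminator_step(d, reals, fakes, seed=0):
    # Key point of DiffAugment: augment BOTH real and fake samples with the
    # same transform before they reach the discriminator d.
    reals_aug = diff_augment(reals, seed=seed)
    fakes_aug = diff_augment(fakes, seed=seed)
    return d(reals_aug), d(fakes_aug)
```

Applying the transform symmetrically is what prevents the discriminator from trivially separating augmented fakes from unaugmented reals.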

cGANs with Projection Discriminator

TLDR
With this modification, the quality of class-conditional image generation on the ILSVRC2012 (ImageNet) 1000-class image dataset is significantly improved, and the method was extended to super-resolution, succeeding in producing highly discriminative super-resolution images.
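The "projection" modification referenced above conditions the discriminator via an inner product between a class embedding and the image features: logit = ψ(φ(x)) + ⟨y_emb, φ(x)⟩. A minimal numeric sketch, assuming a linear head ψ and hand-picked toy vectors (all names here are illustrative, not the paper's code):

```python
def projection_discriminator_logit(features, class_emb, psi_weights):
    """Projection-discriminator form (toy version):
    logit = psi(phi(x)) + <class_embedding, phi(x)>,
    where `features` plays the role of phi(x) and psi is a linear head."""
    unconditional = sum(w * f for w, f in zip(psi_weights, features))
    projection = sum(e * f for e, f in zip(class_emb, features))
    return unconditional + projection

logit = projection_discriminator_logit(
    features=[1.0, 2.0], class_emb=[0.5, -0.5], psi_weights=[0.1, 0.1]
)
# -> -0.2  (0.3 unconditional term + -0.5 projection term)
```

The projection term replaces the earlier practice of concatenating a one-hot class vector to the input or features.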

Image Generation From Small Datasets via Batch Statistics Adaptation

TLDR
This work proposes a new method for transferring prior knowledge of the pre-trained generator, which is trained with a large dataset, to a small dataset in a different domain, and can generate higher quality images compared to previous methods without collapsing.

Freeze Discriminator: A Simple Baseline for Fine-tuning GANs

TLDR
It is shown that simple fine-tuning of GANs with the lower layers of the discriminator frozen performs surprisingly well, and this simple baseline, FreezeD, significantly outperforms previous techniques in both unconditional and conditional GANs.
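The FreezeD recipe summarized above amounts to marking the lowest (input-side) discriminator layers as non-trainable before fine-tuning. A minimal sketch of that bookkeeping, assuming a hypothetical ordered list of layer names (a real implementation would instead toggle per-parameter gradient flags in the training framework):

```python
def freeze_lower_layers(layers, n_freeze):
    """FreezeD-style fine-tuning sketch: mark the lowest n_freeze
    discriminator layers as non-trainable (False) and keep the rest
    trainable (True). `layers` lists layer names, input-side first."""
    return {name: (i >= n_freeze) for i, name in enumerate(layers)}

trainable = freeze_lower_layers(["conv1", "conv2", "conv3", "fc"], n_freeze=2)
# -> {"conv1": False, "conv2": False, "conv3": True, "fc": True}
```

The intuition is that low-level discriminator filters transfer across domains, so only the higher layers need adapting to the small target dataset.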

Adversarial Discriminative Domain Adaptation

TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.

Training Generative Adversarial Networks with Limited Data

TLDR
It is demonstrated, on several datasets, that good results are now possible using only a few thousand training images, often matching StyleGAN2 results with an order of magnitude fewer images, and is expected to open up new application domains for GANs.