Corpus ID: 59222842

QGAN: Quantized Generative Adversarial Networks

@article{Wang2019QGANQG,
  title={QGAN: Quantized Generative Adversarial Networks},
  author={Peiqi Wang and Dongsheng Wang and Yu Ji and Xinfeng Xie and Haoxuan Song and XuXin Liu and Yongqiang Lyu and Yuan Xie},
  journal={ArXiv},
  year={2019},
  volume={abs/1901.08263}
}
The intensive computation and memory requirements of generative adversarial neural networks (GANs) hinder their real-world deployment on edge devices such as smartphones. […] Motivated by these observations, we develop a novel quantization method for GANs based on EM algorithms, named QGAN. We also propose a multi-precision algorithm that helps find the optimal number of bits for quantized GAN models together with the corresponding result quality. Experiments on CIFAR-10 and CelebA show that QGAN…
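The abstract shown here does not spell the algorithm out, but "quantization based on EM algorithms" can be pictured as fitting a small set of quantization levels to the weight distribution with a one-dimensional Gaussian-mixture EM and snapping each weight to its most likely level. The sketch below is purely illustrative: the function name em_quantize, the mixture formulation, and all hyperparameters are assumptions, not details taken from the QGAN paper.

    import numpy as np

    def em_quantize(w, bits=4, iters=20):
        # Illustrative sketch: fit 2**bits quantization levels to the weight
        # distribution with a 1-D Gaussian-mixture EM, then snap each weight
        # to the mean of its most likely component.
        flat = w.ravel()
        k = 2 ** bits
        mu = np.linspace(flat.min(), flat.max(), k)      # initial levels
        var = np.full(k, flat.var() / k + 1e-8)          # per-level variances
        mix = np.full(k, 1.0 / k)                        # mixture weights
        for _ in range(iters):
            # E-step: responsibility of each level for each weight
            d = flat[:, None] - mu[None, :]
            log_p = -0.5 * d ** 2 / var - 0.5 * np.log(2 * np.pi * var) + np.log(mix)
            log_p -= log_p.max(axis=1, keepdims=True)
            r = np.exp(log_p)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: update mixture weights, level positions, and variances
            nk = r.sum(axis=0) + 1e-8
            mix = nk / len(flat)
            mu = (r * flat[:, None]).sum(axis=0) / nk
            var = (r * (flat[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk + 1e-8
        idx = np.argmax(r, axis=1)                        # hard assignment
        return mu[idx].reshape(w.shape), mu               # quantized weights, levels

    w = np.random.randn(64, 64).astype(np.float32)
    wq, levels = em_quantize(w, bits=2)                   # 4 shared weight values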


Quantization of Generative Adversarial Networks for Efficient Inference: a Methodological Study
TLDR
An extensive experimental study of state-of-the-art quantization techniques on three diverse GAN architectures, namely StyleGAN, Self-Attention GAN, and CycleGAN, that discovered practical recipes for successfully quantizing these models for inference with 4/8-bit weights and 8-bit activations while preserving the quality of the original full-precision models.
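The "4/8-bit weights and 8-bit activations" setting in that study is most easily illustrated with a generic uniform "fake" quantizer: values are rounded to 2**bits levels over the tensor's observed range and mapped back to float. This is a common recipe, not necessarily the exact scheme used in the study; all names and shapes below are illustrative.

    import numpy as np

    def fake_quantize(x, bits):
        # Uniform affine fake quantization over the tensor's observed range.
        lo, hi = float(x.min()), float(x.max())
        scale = (hi - lo) / (2 ** bits - 1) if hi > lo else 1.0
        return np.round((x - lo) / scale) * scale + lo

    weights = np.random.randn(256, 256).astype(np.float32)
    acts = np.maximum(np.random.randn(32, 256), 0).astype(np.float32)  # post-ReLU
    w4 = fake_quantize(weights, bits=4)   # 4-bit weights
    a8 = fake_quantize(acts, bits=8)      # 8-bit activations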
Deep quantization generative networks
DGL-GAN: Discriminator Guided Learning for GAN Compression
TLDR
A novel yet simple Discriminator Guided Learning approach for compressing vanilla GANs, dubbed DGL-GAN, motivated by the empirical observation that learning from the teacher discriminator facilitates the performance of student GANs; it achieves state-of-the-art results.
A Survey on GAN Acceleration Using Memory Compression Technique
TLDR
The findings show the superiority of knowledge distillation over pruning alone and highlight gaps in the field that still need to be explored, such as encoding and different combinations of compression techniques.
Evaluating Post-Training Compression in GANs using Locality-Sensitive Hashing
TLDR
It is shown that low-bit compression of several pre-trained GANs on multiple datasets induces a trade-off between precision and recall, retaining sample quality while sacrificing sample diversity.
GANs Can Play Lottery Tickets Too
TLDR
Extensive experimental results demonstrate that the found subnetworks substantially outperform previous state-of-the-art GAN compression approaches in both image generation and image-to-image translation GANs, and show the powerful transferability of these subnetworks to unseen tasks.
Self-Supervised Generative Adversarial Compression
TLDR
This paper develops a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator, and shows that this framework has compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different compression granularities.
Don't Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
TLDR
This paper, for the first time, explores the possibility of directly training a sparse GAN from scratch without involving any dense or pre-training steps, and finds that, instead of inheriting parameters from expensive pre-trained GANs, directly training sparse GANs from scratch can be a much more efficient solution.
Self-Supervised GAN Compression
TLDR
This paper develops a self-supervised compression technique which uses the trained discriminator to supervise the training of a compressed generator, and shows that this framework has compelling performance at high degrees of sparsity, can be easily applied to new tasks and models, and enables meaningful comparisons between different pruning granularities.
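The supervision signal described in the two self-supervised compression entries above can be sketched minimally in PyTorch: a frozen discriminator scores the outputs of a smaller student generator, and only the student is updated. The modules here are randomly initialized toy networks so the snippet runs standalone; in the actual method the discriminator would be loaded from the original trained GAN, and the architectures and hyperparameters are not the authors'.

    import torch
    import torch.nn as nn

    # Toy stand-ins; only the loss wiring illustrates the idea above.
    D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
    G_small = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
    for p in D.parameters():
        p.requires_grad_(False)   # the trained discriminator only supervises

    opt = torch.optim.Adam(G_small.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()
    for step in range(100):
        z = torch.randn(32, 64)
        fake = G_small(z)
        # push the compressed generator's samples to be scored "real" by the frozen D
        loss = bce(D(fake), torch.ones(32, 1))
        opt.zero_grad()
        loss.backward()
        opt.step()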
...
...

References

SHOWING 1-10 OF 24 REFERENCES
Least Squares Generative Adversarial Networks
TLDR
This paper proposes the Least Squares Generative Adversarial Networks (LSGANs), which adopt the least squares loss function for the discriminator, and shows that minimizing the objective function of LSGAN yields minimizing the Pearson χ² divergence.
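For reference, the LSGAN objectives from that paper can be written (with a and b the labels for fake and real data and c the value the generator wants the discriminator to assign to fake data) as:

    \min_D V(D) = \tfrac{1}{2}\,\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[(D(x)-b)^2\big] + \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z))-a)^2\big]
    \min_G V(G) = \tfrac{1}{2}\,\mathbb{E}_{z \sim p_z}\big[(D(G(z))-c)^2\big]

Choosing b - c = 1 and b - a = 2 (for example a = -1, b = 1, c = 0) makes minimizing this objective equivalent to minimizing the Pearson χ² divergence between p_data + p_g and 2p_g.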
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalize the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
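Concretely, the gradient-penalty critic objective from that paper is

    L = \mathbb{E}_{\tilde{x} \sim p_g}\big[D(\tilde{x})\big] - \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[D(x)\big] + \lambda\,\mathbb{E}_{\hat{x} \sim p_{\hat{x}}}\big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\big]

where x̂ is sampled uniformly along straight lines between pairs of real and generated samples, and the paper uses λ = 10.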
Energy-based Generative Adversarial Network
We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and higher energies to other regions.
Scalable Methods for 8-bit Training of Neural Networks
TLDR
This work is the first to quantize the weights, activations, as well as a substantial volume of the gradients stream, in all layers (including batch normalization) to 8-bit while showing state-of-the-art results over the ImageNet-1K dataset.
Trained Ternary Quantization
TLDR
This work proposes Trained Ternary Quantization (TTQ), a method that can reduce the precision of weights in neural networks to ternary values while improving the accuracy of some models (32-, 44-, and 56-layer ResNets) on CIFAR-10 and of AlexNet on ImageNet.
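The ternary projection step behind TTQ can be sketched as below. In the actual method the positive and negative scales are learned during training; here they are fixed constants, and the threshold fraction t is a hyperparameter, so treat the snippet as illustrative rather than the paper's implementation.

    import numpy as np

    def ternarize(w, t=0.05, wp=1.0, wn=1.0):
        # Project full-precision weights onto {-wn, 0, +wp}; the threshold is a
        # fraction t of the largest weight magnitude, following the TTQ heuristic.
        thr = t * np.abs(w).max()
        q = np.zeros_like(w)
        q[w > thr] = wp
        q[w < -thr] = -wn
        return q

    w = np.random.randn(128, 128).astype(np.float32)
    wt = ternarize(w)   # ternary weights; wp and wn would be trained in TTQ itself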
HitNet: Hybrid Ternary Recurrent Neural Network
TLDR
HitNet, a hybrid ternary recurrent neural network that bridges the accuracy gap between the full-precision model and the quantized model, is proposed, together with a hybrid quantization method for quantizing weights and activations.
Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
TLDR
This work introduces "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that works together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
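A rough sketch of the first two stages on a single weight tensor follows (magnitude pruning, then k-means weight sharing). The Huffman-coding stage and the retraining that the paper interleaves with both stages are omitted, and the sparsity, bit-width, and iteration counts are illustrative choices, not the paper's settings.

    import numpy as np

    def prune_and_share(w, sparsity=0.9, bits=5, iters=10):
        flat = w.ravel().copy()
        # Stage 1: magnitude pruning of the smallest weights.
        thr = np.quantile(np.abs(flat), sparsity)
        mask = np.abs(flat) > thr
        # Stage 2: k-means clustering of surviving weights into 2**bits shared values.
        kept = flat[mask]
        centers = np.linspace(kept.min(), kept.max(), 2 ** bits)
        for _ in range(iters):
            idx = np.argmin(np.abs(kept[:, None] - centers[None, :]), axis=1)
            for c in range(len(centers)):
                if np.any(idx == c):
                    centers[c] = kept[idx == c].mean()
        flat[mask] = centers[idx]
        flat[~mask] = 0.0
        return flat.reshape(w.shape), centers

    w = np.random.randn(512, 512).astype(np.float32)
    w_compressed, codebook = prune_and_share(w)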
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
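The truncation trick itself is simple to sketch: draw the latent vector from a truncated normal by resampling any entries whose magnitude exceeds a threshold, trading sample variety for fidelity. The function below is a generic illustration, not BigGAN's implementation, and the threshold value is arbitrary.

    import numpy as np

    def truncated_z(batch, dim, threshold=0.5, seed=0):
        # Resample latent entries until all magnitudes fall below `threshold`;
        # smaller thresholds give higher-fidelity but less varied samples.
        rng = np.random.default_rng(seed)
        z = rng.standard_normal((batch, dim))
        out_of_range = np.abs(z) > threshold
        while out_of_range.any():
            z[out_of_range] = rng.standard_normal(out_of_range.sum())
            out_of_range = np.abs(z) > threshold
        return z

    z = truncated_z(16, 128)   # latent batch to feed the generator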
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
...
...