Corpus ID: 236078504

Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation

@inproceedings{Li2020LearningEG,
  title={Learning Efficient GANs for Image Translation via Differentiable Masks and co-Attention Distillation},
  author={Shaojie Li and Mingbao Lin and Yan Wang and Fei Chao and Xudong Mao and Mingliang Xu and Yongjian Wu and Feiyue Huang and Ling Shao and Rongrong Ji},
  year={2020}
}
Generative Adversarial Networks (GANs) have been widely used in image translation, but their high computational and storage costs impede deployment on mobile devices. Prevalent CNN compression methods cannot be directly applied to GANs because of the complicated generator architecture and the instability of adversarial training. To address these issues, this paper introduces a novel GAN compression method, termed DMAD, built on a Differentiable Mask and a co-Attention Distillation. The former…
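The abstract's "Differentiable Mask" refers to pruning the generator through a soft, learnable gate rather than a hard channel selection. A minimal NumPy sketch of one common realization of this idea is below; the gating form, the `tau` temperature, and all names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def masked_channels(features, alpha, tau=0.4):
    """Scale each channel of `features` (shape C x H x W) by a differentiable gate.

    `alpha` is a learnable per-channel logit. sigmoid(alpha / tau) approaches a
    hard 0/1 mask as tau shrinks, so channels whose gate collapses toward zero
    can be pruned after training while gradients still flow during training.
    Illustrative sketch only; not DMAD's exact mask.
    """
    gate = sigmoid(alpha / tau)           # (C,) soft mask in (0, 1)
    return features * gate[:, None, None]
```

During training `alpha` would be updated jointly with the generator weights, typically under a sparsity penalty that pushes gates toward zero so the surviving channels define the compressed architecture.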


References

Showing 1-10 of 42 references
Co-Evolutionary Compression for Unpaired Image Translation
  • Han Shu, Yunhe Wang, +5 authors Chang Xu
  • Computer Science, Engineering
  • 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
Develops a novel co-evolutionary approach that reduces the memory usage and FLOPs of translation GANs simultaneously, with the compressed generators synergistically optimized to identify the most important convolution filters iteratively.
Distilling portable Generative Adversarial Networks for Image Translation
Qualitative and quantitative experiments on benchmark datasets demonstrate that the proposed distillation method learns portable generative models with strong performance.
GAN Slimming: All-in-One GAN Compression by A Unified Optimization Framework
Proposes GAN Slimming (GS), the first unified optimization framework combining multiple compression means for GAN compression. It seamlessly integrates three mainstream compression techniques (model distillation, channel pruning, and quantization) with the GAN minimax objective into a single form that can be optimized efficiently end to end.
Multiple Cycle-in-Cycle Generative Adversarial Networks for Unsupervised Image Super-Resolution
Proposes a multiple Cycle-in-Cycle network structure that handles the more general unsupervised super-resolution setting using multiple generative adversarial networks (GANs) as basis components, achieving performance comparable to state-of-the-art supervised models.
Image-to-Image Translation with Conditional Adversarial Networks
Investigates conditional adversarial networks as a general-purpose solution to image-to-image translation problems and demonstrates that the approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
Perceptual Adversarial Networks for Image-to-Image Transformation
Proposes the perceptual adversarial loss, learned through an adversarial process between the image transformation network and the discriminative network, so that the two networks can be trained alternately to solve image-to-image transformation tasks.
GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium
Proposes a two time-scale update rule (TTUR) for training GANs with stochastic gradient descent on arbitrary GAN loss functions, and introduces the Fréchet Inception Distance (FID), which captures the similarity of generated images to real ones better than the Inception Score.
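FID fits Gaussians to Inception-v3 activations of real and generated images and computes the Fréchet distance between them: FID = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). A minimal NumPy sketch of this closed-form distance follows (feature extraction is omitted, and computing the trace of the matrix square root via eigenvalues is one of several numerical choices):

```python
import numpy as np

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2).

    FID = ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 sigma2)^{1/2}).
    For PSD covariances, Tr((sigma1 sigma2)^{1/2}) equals the sum of square
    roots of the eigenvalues of sigma1 @ sigma2, which are real and
    non-negative; negative round-off is clipped to zero.
    """
    diff = mu1 - mu2
    eigs = np.linalg.eigvals(sigma1 @ sigma2)
    covmean_trace = np.sum(np.sqrt(np.maximum(eigs.real, 0.0)))
    return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2)
                 - 2.0 * covmean_trace)
```

For identical distributions the distance is zero; shifting one mean by a unit vector in each of d dimensions, with identity covariances, gives FID = d.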
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, Lucas Theis, +6 authors W. Shi
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
Presents SRGAN, a generative adversarial network (GAN) for image super-resolution (SR) and, to the authors' knowledge, the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, trained with a perceptual loss function consisting of an adversarial loss and a content loss.
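The perceptual loss described in that summary combines a content term measured in a feature space with a down-weighted adversarial term. A small NumPy sketch of that combination is below; `phi_sr`/`phi_hr` stand in for feature maps (VGG activations in the paper) of the super-resolved and ground-truth images, `d_sr` for discriminator outputs on generated images, and the function name and 1e-3 weight are assumptions for illustration:

```python
import numpy as np

def srgan_perceptual_loss(phi_sr, phi_hr, d_sr, adv_weight=1e-3):
    """Perceptual loss in the spirit of SRGAN: content loss + weighted adversarial loss.

    content     -- MSE between feature maps of SR output and HR ground truth
    adversarial -- -log D(G(LR)), pushing outputs toward the natural-image manifold
    Illustrative sketch only; not the paper's exact implementation.
    """
    content = np.mean((phi_sr - phi_hr) ** 2)
    adversarial = -np.mean(np.log(d_sr + 1e-8))  # epsilon guards log(0)
    return content + adv_weight * adversarial
```

The small adversarial weight keeps the content term dominant so training stays stable while the adversarial term sharpens textures.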
Analyzing and Improving the Image Quality of StyleGAN
Redesigns the generator normalization, revisits progressive growing, and regularizes the generator to encourage good conditioning in the mapping from latent codes to images, thereby redefining the state of the art in unconditional image modeling.
Feature-map-level Online Adversarial Knowledge Distillation
Proposes an online knowledge distillation method that transfers not only the knowledge of class probabilities but also that of feature maps using an adversarial training framework, along with a novel cyclic learning scheme for training more than two networks together.