• Corpus ID: 252907351

Towards Device Efficient Conditional Image Generation

@inproceedings{Shah2022TowardsDE,
  title={Towards Device Efficient Conditional Image Generation},
  author={Nisarg A. Shah and Gaurav Bharaj},
  year={2022}
}
We present a novel algorithm to reduce the tensor compute required by a conditional image generation autoencoder without sacrificing the quality of photo-realistic image generation. Our method is device agnostic and can optimize an autoencoder for a given CPU-only or GPU compute device(s) in about the time it normally takes to train an autoencoder on a generic workstation. We achieve this via a novel two-stage strategy where, first, we condense the channel weights such that as few channels as possible are… 
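The visible abstract does not spell out the condensation criterion, so the following is only a minimal sketch of what magnitude-based channel condensation for one convolutional layer could look like; the function name `condense_conv`, the `keep_ratio` parameter, and the L1 criterion are assumptions for illustration, not the authors' actual method.

```python
import torch
import torch.nn as nn

def condense_conv(conv: nn.Conv2d, keep_ratio: float = 0.5) -> nn.Conv2d:
    """Keep only the output channels with the largest L1 weight norm.

    Illustrative stand-in for a channel-condensation step; downstream layers
    would also need their input channels adjusted to match.
    """
    n_keep = max(1, int(conv.out_channels * keep_ratio))
    # L1 norm of each output-channel filter: shape (out_channels,)
    norms = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    keep = torch.topk(norms, n_keep).indices.sort().values

    slim = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                     stride=conv.stride, padding=conv.padding,
                     bias=conv.bias is not None)
    slim.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        slim.bias.data = conv.bias.data[keep].clone()
    return slim

# Example: shrink one encoder layer from 64 to 32 output channels.
layer = nn.Conv2d(3, 64, 3, padding=1)
slim_layer = condense_conv(layer, keep_ratio=0.5)
print(slim_layer)  # Conv2d(3, 32, kernel_size=(3, 3), ...)
```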

References

SHOWING 1-10 OF 58 REFERENCES

GAN Compression: Efficient Architectures for Interactive Conditional GANs

A general-purpose compression framework is proposed that reduces the inference time and model size of the generator in cGANs and decouples model training from architecture search via weight sharing.

Anycost GANs for Interactive Image Synthesis and Editing

This paper trains the Anycost GAN to support elastic resolutions and channels for faster image generation at versatile speeds, and develops new encoder training and latent code optimization techniques to encourage consistency between the different sub-generators during image projection. A hedged sketch of the elastic-channel idea follows below.
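As a rough illustration of weight-shared elastic channels, the sketch below runs a convolution with only a fraction of its filters; the class name `ElasticConv2d` and the `width_ratio` argument are hypothetical and assume `groups=1` and default dilation, not the Anycost GAN implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElasticConv2d(nn.Conv2d):
    """Conv layer that can run with a fraction of its output channels,
    sharing one weight tensor across all widths (elastic-channel sketch)."""

    def forward(self, x, width_ratio: float = 1.0):
        n_out = max(1, int(self.out_channels * width_ratio))
        # Slice filters (and input channels, in case the previous layer was
        # also narrowed) out of the shared weight tensor.
        weight = self.weight[:n_out, : x.shape[1]]
        bias = self.bias[:n_out] if self.bias is not None else None
        return F.conv2d(x, weight, bias, self.stride, self.padding)

layer = ElasticConv2d(64, 128, 3, padding=1)
x = torch.randn(1, 64, 32, 32)
full = layer(x)                    # (1, 128, 32, 32)
half = layer(x, width_ratio=0.5)   # (1, 64, 32, 32), same shared weights
```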

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method is presented for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs); it significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

Co-Evolutionary Compression for Unpaired Image Translation

  • Han Shu, Yunhe Wang, Chang Xu
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
A novel co-evolutionary approach is developed that simultaneously and synergistically reduces the memory usage and FLOPs of unpaired image translation generators by iteratively identifying the most important convolution filters.

A Style-Aware Content Loss for Real-time HD Style Transfer

A style-aware content loss is proposed and trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos; results show that this approach better captures the subtle ways in which a style affects content.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
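For context on that objective, a minimal sketch of a pix2pix-style conditional GAN loss (adversarial term plus an L1 reconstruction term) is shown below; the function names and the `lambda_l1` weight are illustrative assumptions, and the discriminator is assumed to take the condition and the image as separate arguments.

```python
import torch
import torch.nn.functional as F

def generator_loss(discriminator, generator, x, y, lambda_l1=100.0):
    """Generator tries to fool D on (condition, fake) pairs while staying
    close to the ground truth y in L1 (as in pix2pix)."""
    fake = generator(x)
    pred_fake = discriminator(x, fake)
    adv = F.binary_cross_entropy_with_logits(pred_fake, torch.ones_like(pred_fake))
    rec = F.l1_loss(fake, y)
    return adv + lambda_l1 * rec

def discriminator_loss(discriminator, generator, x, y):
    """Discriminator labels real (x, y) pairs 1 and fake (x, G(x)) pairs 0."""
    with torch.no_grad():
        fake = generator(x)
    pred_real = discriminator(x, y)
    pred_fake = discriminator(x, fake)
    real = F.binary_cross_entropy_with_logits(pred_real, torch.ones_like(pred_real))
    faked = F.binary_cross_entropy_with_logits(pred_fake, torch.zeros_like(pred_fake))
    return 0.5 * (real + faked)
```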

Learning Efficient Convolutional Networks through Network Slimming

The approach, called network slimming, takes wide and large networks as input models; during training, insignificant channels are automatically identified and then pruned, yielding thin and compact models with comparable accuracy.
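A minimal sketch of the channel-importance signal used in network slimming (an L1 sparsity penalty on BatchNorm scale factors during training, followed by thresholding) might look like the following; the helper names and the default `weight` and `threshold` values are assumptions for illustration.

```python
import torch
import torch.nn as nn

def bn_l1_penalty(model: nn.Module, weight: float = 1e-4) -> torch.Tensor:
    """Sparsity regularizer on BatchNorm scale factors (gamma); channels whose
    gamma is driven toward zero are treated as insignificant."""
    penalty = torch.zeros(())
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            penalty = penalty + m.weight.abs().sum()
    return weight * penalty

def prunable_channels(model: nn.Module, threshold: float = 1e-2):
    """After training, report channels whose gamma fell below the threshold."""
    report = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            small = m.weight.detach().abs() < threshold
            report[name] = small.nonzero().flatten().tolist()
    return report

# During training, the penalty is simply added to the task loss:
#   loss = task_loss + bn_l1_penalty(model)
```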

Image Style Transfer Using Convolutional Neural Networks

A Neural Algorithm of Artistic Style is introduced that can separate and recombine the content and style of natural images; the results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high-level image synthesis and manipulation.
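The separation of content and style in that algorithm rests on matching raw feature activations versus Gram-matrix feature statistics; a minimal sketch, assuming feature maps of shape (batch, channels, height, width) extracted from a fixed network such as VGG, is:

```python
import torch

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    """Channel-by-channel correlations of feature maps; captures style, not layout."""
    b, c, h, w = features.shape
    flat = features.reshape(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Match Gram matrices of the generated and the style image at one layer."""
    return torch.mean((gram_matrix(gen_feats) - gram_matrix(style_feats)) ** 2)

def content_loss(gen_feats: torch.Tensor, content_feats: torch.Tensor) -> torch.Tensor:
    """Match raw activations at one layer to preserve the content image's layout."""
    return torch.mean((gen_feats - content_feats) ** 2)
```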

AutoGAN-Distiller: Searching to Compress Generative Adversarial Networks

Inspired by the recent success of AutoML in deep compression, AutoML is introduced to GAN compression and an AutoGAN-Distiller (AGD) framework is developed, yielding remarkably lightweight yet competitive compressed models that largely outperform existing alternatives.

ThiNet: A Filter Level Pruning Method for Deep Neural Network Compression

ThiNet is proposed as an efficient and unified framework to simultaneously accelerate and compress CNN models in both the training and inference stages; it prunes filters based on statistics computed from the next layer rather than the current layer, which differentiates ThiNet from existing methods.
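A simplified sketch of that next-layer criterion is shown below: given per-channel contributions to sampled activations of the next layer, channels whose removal perturbs those activations least are pruned greedily. The function name and the pre-sampled `contribs` tensor are assumptions for illustration, not the paper's exact pipeline.

```python
import torch

def thinet_channel_selection(contribs: torch.Tensor, n_remove: int):
    """Greedy channel selection in the spirit of ThiNet.

    contribs: (num_samples, num_channels) tensor where contribs[i, c] is the
    contribution of input channel c to one sampled output activation of the
    *next* layer. We greedily remove the channels whose summed contribution
    has the smallest squared error.
    """
    num_channels = contribs.shape[1]
    removed = []
    residual = torch.zeros(contribs.shape[0])
    for _ in range(n_remove):
        best_c, best_err = None, None
        for c in range(num_channels):
            if c in removed:
                continue
            err = ((residual + contribs[:, c]) ** 2).sum()
            if best_err is None or err < best_err:
                best_c, best_err = c, err
        removed.append(best_c)
        residual = residual + contribs[:, best_c]
    kept = [c for c in range(num_channels) if c not in removed]
    return kept, removed
```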
...