Corpus ID: 9176830

Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks

@article{Agustsson2017SofttoHardVQ,
  title={Soft-to-Hard Vector Quantization for End-to-End Learned Compression of Images and Neural Networks},
  author={Eirikur Agustsson and Fabian Mentzer and Michael Tschannen and Lukas Cavigelli and Radu Timofte and Luca Benini and Luc Van Gool},
  journal={ArXiv},
  year={2017},
  volume={abs/1704.00648}
}
In this work we present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state of the art for both.
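
To make the annealed relaxation concrete, here is a minimal NumPy sketch of the scalar case, under stated assumptions: the centers, the sigma values, and all shapes below are illustrative, and the paper additionally covers vector quantization and a soft entropy estimate. A softmax over negative scaled distances to a set of centers gives a differentiable soft quantizer that converges to hard nearest-center assignment as sigma grows.

```python
import numpy as np

def soft_quantize(z, centers, sigma):
    """Differentiable soft assignment of each value in z to a set of centers."""
    d2 = (z[:, None] - centers[None, :]) ** 2      # squared distances, shape (N, L)
    logits = -sigma * d2
    logits -= logits.max(axis=1, keepdims=True)    # for numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)              # softmax weights over centers
    return w @ centers                             # convex combination of centers

def hard_quantize(z, centers):
    """Discrete counterpart used once annealing is complete."""
    idx = np.argmin((z[:, None] - centers[None, :]) ** 2, axis=1)
    return centers[idx]

rng = np.random.default_rng(0)
z = rng.normal(size=8)
centers = np.linspace(-2.0, 2.0, 6)               # L = 6 centers (assumed)
for sigma in (1.0, 10.0, 1000.0):                 # toy annealing schedule
    gap = np.abs(soft_quantize(z, centers, sigma) - hard_quantize(z, centers)).max()
    print(f"sigma={sigma:7.1f}  max |soft - hard| = {gap:.5f}")
```

During training, sigma is gradually increased so that the network adapts to the hard quantizer it will face at test time.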

Citations

End-To-End Optimized Speech Coding with Deep Neural Networks
  • Srihari Kankanahalli
  • Computer Science, Engineering
  • 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
  • 2018
A deep neural network model is presented that optimizes all steps of a wideband speech coding pipeline end-to-end directly from raw speech data, with no manual feature engineering necessary, and that trains in hours.
Learning Convolutional Networks for Content-Weighted Image Compression
The bit rate of different parts of the image is adapted to local content: bits are allocated under the guidance of a content-weighted importance map, whose sum serves as a continuous alternative to discrete entropy estimation for controlling the compression rate.
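
As a rough sketch of the importance-map mechanism described above (the sigmoid parameterization, bit-plane masking, and shapes are assumptions for illustration, not the paper's exact design): a learned importance map decides how many bit planes survive at each spatial location, so its sum acts as a differentiable stand-in for the bit count.

```python
import numpy as np

def importance_weighted_code(features, importance_logits, n_planes=16):
    """Masks feature bit planes by a learned importance map.

    features:          (n_planes, H, W) quantized feature planes
    importance_logits: (H, W) raw outputs of an importance subnetwork
    Returns the masked features and a differentiable rate estimate:
    the sum of the importance map, a continuous proxy for entropy.
    """
    p = 1.0 / (1.0 + np.exp(-importance_logits))     # importance map in (0, 1)
    rate = p.sum() * n_planes                        # continuous bit-count proxy
    keep = np.ceil(p * n_planes)                     # planes kept per location
    plane_idx = np.arange(n_planes)[:, None, None]   # (n_planes, 1, 1)
    mask = (plane_idx < keep[None]).astype(features.dtype)
    return features * mask, rate

# Toy usage with assumed shapes.
rng = np.random.default_rng(0)
feats, rate = importance_weighted_code(
    rng.integers(0, 2, size=(16, 8, 8)).astype(float),
    rng.normal(size=(8, 8)),
)
print(feats.shape, float(rate))
```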
Towards Image Understanding from Deep Compression without Decoding
This study shows that accuracies comparable to networks that operate on compressed RGB images can be achieved while reducing the computational complexity by up to 2x, and finds that inference from compressed representations is particularly advantageous compared to inference from compressed RGB images at aggressive compression rates.
Deep Optimized Multiple Description Image Coding via Scalar Quantization Learning
A deep multiple description coding (MDC) framework optimized by minimizing a multiple description (MD) compressive loss; it performs better than several state-of-the-art MDC approaches in image coding efficiency when tested on several commonly available datasets.
Deep Multiple Description Coding by Learning Scalar Quantization
A deep multiple description coding framework whose quantizers are adaptively learned by minimizing a multiple description compressive loss; it outperforms several state-of-the-art multiple description coding approaches in terms of coding efficiency.
Learned Image Compression with Frequency Domain Loss
This model shows better image compression performance when visual quality is measured by the peak signal-to-noise ratio, and its rate-distortion performance outperforms traditional neural-network-based models when the model is trained jointly in the frequency domain.
An End-to-End Joint Learning Scheme of Image Compression and Quality Enhancement with Improved Entropy Minimization
This paper proposes a novel joint learning scheme of image compression and quality enhancement, called JointIQ-Net, which achieves a remarkable improvement in coding efficiency in terms of both PSNR and MS-SSIM compared to previous learned image compression methods and conventional codecs.
Joint learned and traditional image compression for transparent coding
This paper proposes a novel image compression framework, which consists of a CNN-based method and a versatile video coding (VVC) based method. The CNN-based method uses the auto-encoder to learn the…
Learned Iterative Decoding for Lossy Image Compression Systems
This work proposes a recurrent neural network approach for nonlinear, iterative decoding in lossy image compression systems, and develops an algorithm called iterative refinement that improves the decoder's reconstruction compared with standard decoding techniques.
BitNet: Bit-Regularized Deep Neural Networks
A novel end-to-end approach that circumvents the discrete parameter space by optimizing a relaxed, continuous, and differentiable upper bound of the typical classification loss; BitNet converges faster to a solution of superior quality.

References

Showing 1-10 of 47 references
Soft Weight-Sharing for Neural Network Compression
This paper shows that competitive compression rates can be achieved with a version of “soft weight-sharing” (Nowlan & Hinton, 1992), which achieves both quantization and pruning in one simple (re-)training procedure and exposes the relation between compression and the minimum description length (MDL) principle.
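
For intuition, a minimal sketch of the kind of penalty soft weight-sharing uses (the mixture parameters and shapes below are assumptions; in the paper the mixture is itself learned, and a high-probability zero-centered component induces pruning): the regularizer is the negative log-likelihood of the network weights under a Gaussian mixture prior, which pulls weights into a small number of clusters.

```python
import numpy as np

def mixture_prior_penalty(w, pi, mu, sigma):
    """Negative log-likelihood of flat weight vector w under a
    Gaussian mixture prior with mixing weights pi, means mu, stds sigma."""
    # Per-weight, per-component log densities, shape (n_weights, n_components)
    log_norm = (-0.5 * ((w[:, None] - mu[None, :]) / sigma[None, :]) ** 2
                - np.log(sigma[None, :] * np.sqrt(2.0 * np.pi)))
    log_mix = np.log(pi[None, :]) + log_norm
    m = log_mix.max(axis=1, keepdims=True)           # stable log-sum-exp
    ll = m[:, 0] + np.log(np.exp(log_mix - m).sum(axis=1))
    return -ll.sum()

# Toy usage: a tight zero-centered component with large mixing weight
# encourages pruning; the others define shared quantization levels.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=1000)
pi = np.array([0.9, 0.05, 0.05])                     # assumed mixing weights
mu = np.array([0.0, -0.2, 0.2])                      # assumed component means
sigma = np.array([0.01, 0.05, 0.05])
print(mixture_prior_penalty(w, pi, mu, sigma))
```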
Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights
Extensive experiments on the ImageNet classification task using almost all known deep CNN architectures, including AlexNet, VGG-16, GoogleNet, and ResNets, testify to the efficacy of the proposed INQ, showing that at 5-bit quantization the models have higher accuracy than their 32-bit floating-point references.
End-to-end optimization of nonlinear transform codes for perceptual quality
This work introduces a general framework for end-to-end optimization of the rate-distortion performance of nonlinear transform codes assuming scalar quantization, and considers a code built from a linear transform followed by a form of multi-dimensional local gain control.
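
The "multi-dimensional local gain control" refers to generalized divisive normalization (GDN); a minimal sketch of the normalization itself, with assumed parameter shapes and initialization:

```python
import numpy as np

def gdn(x, beta, gamma):
    """Generalized divisive normalization across channels:
    y_i = x_i / sqrt(beta_i + sum_j gamma_ij * x_j^2).
    x: (C, N) activations, beta: (C,), gamma: (C, C)."""
    denom = np.sqrt(beta[:, None] + gamma @ (x ** 2))
    return x / denom

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))                        # 4 channels, 10 positions
beta = np.ones(4)                                   # assumed initialization
gamma = np.full((4, 4), 0.1)
print(gdn(x, beta, gamma).shape)
```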
Lossy Image Compression with Compressive Autoencoders
It is shown that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000 and outperform recently proposed RNN-based approaches, while remaining computationally efficient thanks to a sub-pixel architecture, which makes them suitable for high-resolution images.
Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding
This work introduces "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that works together to reduce the storage requirements of neural networks by 35x to 49x without affecting their accuracy.
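
A compact sketch of the first two stages (the pruning threshold, bit width, and plain k-means updates are illustrative assumptions; the third stage, Huffman coding of the cluster indices, is omitted here):

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.9, bits=4, iters=20):
    """Stage 1: magnitude pruning; stage 2: k-means weight sharing.
    Returns the compressed weights plus the mask and cluster indices
    that an entropy coder (e.g. Huffman, stage 3) would store."""
    thresh = np.quantile(np.abs(w), sparsity)
    mask = np.abs(w) > thresh                        # keep the largest weights
    vals = w[mask]
    centers = np.linspace(vals.min(), vals.max(), 2 ** bits)
    for _ in range(iters):                           # Lloyd / k-means updates
        idx = np.argmin(np.abs(vals[:, None] - centers[None, :]), axis=1)
        for k in range(len(centers)):
            if np.any(idx == k):
                centers[k] = vals[idx == k].mean()
    wq = np.zeros_like(w)
    wq[mask] = centers[idx]
    return wq, mask, idx

rng = np.random.default_rng(0)
wq, mask, idx = prune_and_quantize(rng.normal(size=10000))
print(mask.mean(), np.unique(idx).size)             # ~10% kept, <=16 clusters
```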
End-to-end Optimized Image Compression
Across an independent set of test images, the optimized method generally exhibits better rate-distortion performance than the standard JPEG and JPEG 2000 compression methods, with a dramatic improvement in visual quality supported by objective quality estimates using MS-SSIM.
Full Resolution Image Compression with Recurrent Neural Networks
This is the first neural network architecture able to outperform JPEG at image compression across most bitrates on the rate-distortion curve on the Kodak dataset, with and without the aid of entropy coding.
Towards the Limit of Network Quantization
It is shown that the network quantization problem can be related to the entropy-constrained scalar quantization (ECSQ) problem in information theory, and two solutions of ECSQ are proposed: uniform quantization and an iterative solution similar to Lloyd's algorithm.
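
To illustrate the ECSQ connection (the lambda value, initialization, and update rule below are assumptions, not the paper's exact algorithm): a Lloyd-style iteration that assigns each weight to the center minimizing squared error plus lambda times its codeword length, trading distortion against rate.

```python
import numpy as np

def ecsq(vals, n_centers=8, lam=0.01, iters=10):
    """Entropy-constrained scalar quantization via Lloyd-style updates.
    Cost per assignment = squared error + lam * codeword length,
    where length is -log2 of the cluster's empirical probability."""
    centers = np.linspace(vals.min(), vals.max(), n_centers)
    p = np.full(n_centers, 1.0 / n_centers)          # initial code probabilities
    for _ in range(iters):
        cost = ((vals[:, None] - centers[None, :]) ** 2
                - lam * np.log2(p[None, :] + 1e-12))
        idx = cost.argmin(axis=1)
        for k in range(n_centers):
            if np.any(idx == k):
                centers[k] = vals[idx == k].mean()
        counts = np.bincount(idx, minlength=n_centers)
        p = counts / counts.sum()
    rate = -(p[p > 0] * np.log2(p[p > 0])).sum()     # entropy of the code
    return idx, centers, rate

rng = np.random.default_rng(0)
idx, centers, rate = ecsq(rng.normal(size=5000))
print(f"entropy ~ {rate:.2f} bits/weight")
```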
Improved Lossy Image Compression with Priming and Spatially Adaptive Bit Rates for Recurrent Networks
A method for lossy image compression based on recurrent convolutional neural networks that outperforms BPG, WebP, JPEG 2000, and JPEG as measured by MS-SSIM is proposed, and it is shown that training with a pixel-wise loss weighted by SSIM increases reconstruction quality according to multiple metrics.
Using very deep autoencoders for content-based image retrieval
This work shows how to learn many layers of features on color images and how these features can be used to initialize deep autoencoders, which then map images to short binary codes.