LG-LSQ: Learned Gradient Linear Symmetric Quantization

@article{Lin2022LGLSQLG,
  title={LG-LSQ: Learned Gradient Linear Symmetric Quantization},
  author={Shih-Ting Lin and Zhaofang Li and Yu-Hsiang Cheng and Hao-Wen Kuo and Chih-Cheng Lu and Kea-Tiong Tang},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.09009}
}
Deep neural networks with lower-precision weights and operations at inference time reduce both memory cost and accelerator power. The main challenge for a quantization algorithm is maintaining accuracy at low bit-widths. We propose learned gradient linear symmetric quantization (LG-LSQ) as a method for quantizing weights and activation functions to low bit-widths with high accuracy in integer neural network processors. First, we introduce the scaling…
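
To make the setting concrete, the sketch below shows a generic linear symmetric quantizer with a learnable step size and a straight-through estimator on the rounding step, in the spirit of the abstract. It is illustrative PyTorch only, not the paper's exact LG-LSQ formulation, and the function and variable names are placeholders.

import torch

def linear_symmetric_quant(x, scale, bits):
    # Symmetric signed range, e.g. [-7, 7] for 4 bits.
    qmax = 2 ** (bits - 1) - 1
    v = torch.clamp(x / scale, -qmax, qmax)
    # Straight-through estimator on the rounding step only: the forward
    # pass uses round(v), the backward pass treats rounding as identity,
    # so gradients still reach both x and the learnable scale.
    v_q = (v.round() - v).detach() + v
    return v_q * scale

x = torch.randn(4, 8, requires_grad=True)
scale = torch.tensor(0.05, requires_grad=True)   # learnable step size
y = linear_symmetric_quant(x, scale, bits=4)
y.sum().backward()                               # populates x.grad and scale.grad

Because only the rounding is detached, the clamp and the division stay in the autograd graph, so the scale receives a gradient and can be trained alongside the network weights.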


References

Showing 1-10 of 31 references

Linear Symmetric Quantization of Neural Networks for Low-precision Integer Hardware

A learned linear symmetric quantizer for integer neural network processors is proposed, which not only quantizes network parameters and activations to low-bit integers but also accelerates hardware inference by using batch normalization fusion and low-precision accumulators and multipliers.
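
Batch normalization fusion, mentioned above, folds the BN affine transform into the preceding convolution so that inference runs as a single integer-friendly layer. A minimal sketch of the standard fusion formula follows; the low-precision accumulators and multipliers are not modeled here, and the names are illustrative.

import torch

def fuse_conv_bn(conv_w, conv_b, gamma, beta, mean, var, eps=1e-5):
    # Fold y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta into the
    # convolution itself, giving a single layer with adjusted weights/bias.
    std = torch.sqrt(var + eps)
    w_fused = conv_w * (gamma / std).reshape(-1, 1, 1, 1)
    b_fused = (conv_b - mean) * gamma / std + beta
    return w_fused, b_fused

# Per-output-channel BN statistics and affine parameters (16 channels here).
w, b = torch.randn(16, 3, 3, 3), torch.zeros(16)
gamma, beta = torch.ones(16), torch.zeros(16)
mean, var = torch.zeros(16), torch.ones(16)
w_fused, b_fused = fuse_conv_bn(w, b, gamma, beta, mean, var)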

Learnable Companding Quantization for Accurate Low-bit Neural Networks

  • Kohei Yamamoto
  • 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Experimental results show that LCQ outperforms conventional state-of-the-art methods and narrows the gap between quantized and full-precision models for image classification and object detection tasks.
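
Companding quantization compresses values through a non-linear function, quantizes uniformly, and then expands back. LCQ learns that non-linear function; the sketch below uses a fixed mu-law curve purely to illustrate the compress-quantize-expand structure, with illustrative names.

import numpy as np

def mu_law_companding_quant(x, bits=4, mu=255.0):
    # Compress -> uniform quantize -> expand; a fixed mu-law curve stands in
    # for the learnable companding function of LCQ.
    m = np.max(np.abs(x)) + 1e-12
    xn = x / m                                            # normalize to [-1, 1]
    comp = np.sign(xn) * np.log1p(mu * np.abs(xn)) / np.log1p(mu)
    levels = 2 ** (bits - 1) - 1
    comp_q = np.round(comp * levels) / levels             # uniform quantization
    expanded = np.sign(comp_q) * np.expm1(np.abs(comp_q) * np.log1p(mu)) / mu
    return expanded * m

w = np.random.randn(1000) * 0.1
w_q = mu_law_companding_quant(w, bits=4)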

Network Quantization with Element-wise Gradient Scaling

Element-wise gradient scaling (EWGS) is proposed as a simple yet effective alternative to the straight-through estimator (STE), training quantized networks with better stability and accuracy than the STE.
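
A rough sketch of the gradient-scaling idea follows, assuming the scaling rule g * (1 + delta * sign(g) * (x - x_q)) described in the EWGS paper; delta and the names are illustrative, and the paper's adaptive choice of delta is omitted.

import torch

class EWGSQuantize(torch.autograd.Function):
    # Rounding with element-wise gradient scaling instead of a plain STE.
    @staticmethod
    def forward(ctx, x, delta):
        x_q = torch.round(x)
        ctx.save_for_backward(x - x_q)   # discretization error
        ctx.delta = delta
        return x_q

    @staticmethod
    def backward(ctx, grad_out):
        diff, = ctx.saved_tensors
        # Scale each element's gradient by 1 + delta * sign(g) * (x - x_q).
        return grad_out * (1.0 + ctx.delta * torch.sign(grad_out) * diff), None

x = (torch.rand(8) * 15).requires_grad_()   # e.g. values on a 4-bit grid scale
y = EWGSQuantize.apply(x, 0.1)
y.sum().backward()                          # gradients now differ element-wise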

LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

This work proposes to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, rather than using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization, to close the gap in prediction accuracy between the quantized model and the full-precision model.
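
In LQ-Nets the quantization levels are inner products of a learned basis with binary codes, and values are encoded to the nearest level. The sketch below illustrates that level construction and nearest-level assignment, assuming a {-1, +1} code for weights; the joint training of the basis with the network is not shown, and the names are illustrative.

import itertools
import numpy as np

def lq_net_levels(basis):
    # Quantization levels are inner products <v, b> between a learned K-dim
    # basis v and binary codes b, taken here from {-1, +1}^K for weights.
    codes = np.array(list(itertools.product([-1.0, 1.0], repeat=len(basis))))
    return codes @ np.asarray(basis)

def quantize_to_levels(x, levels):
    # Encoding step: map every value to its nearest level.
    idx = np.argmin(np.abs(x[..., None] - levels), axis=-1)
    return levels[idx]

levels = lq_net_levels([0.5, 0.25])              # a 2-bit quantizer: 4 levels
w_q = quantize_to_levels(np.random.randn(16), levels)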

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks

Differentiable Soft Quantization (DSQ) is proposed to bridge the gap between full-precision and low-bit networks; it helps obtain accurate gradients in backward propagation and reduces quantization loss in the forward pass with an appropriate clipping range.
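
The sketch below shows a tanh-based soft staircase in the spirit of DSQ: each rounding step is replaced by a scaled tanh, so the function is differentiable everywhere, with a temperature k controlling how closely it approaches hard rounding. It is a simplified illustration, not DSQ's exact parameterization (which also learns the clipping range).

import math
import torch

def soft_round(x, k=10.0):
    # Each unit interval is approximated by a scaled tanh centred on the
    # interval midpoint; larger k moves the curve closer to hard rounding.
    floor = torch.floor(x)
    mid = floor + 0.5
    s = 1.0 / math.tanh(0.5 * k)           # makes the curve hit the interval endpoints
    return floor + 0.5 * (1.0 + s * torch.tanh(k * (x - mid)))

x = torch.linspace(0.0, 3.0, steps=13, requires_grad=True)
y = soft_round(x)
y.sum().backward()                          # non-zero gradients, unlike hard round()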

Learned Step Size Quantization

This work introduces a novel means to estimate and scale the task-loss gradient with respect to the quantizer step size of each weight and activation layer, so that the step size can be learned in conjunction with the other network parameters.
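
A minimal sketch of such a quantizer follows, assuming the commonly cited LSQ recipe: a learnable step size whose gradient is scaled by 1/sqrt(numel * Q_p), a straight-through estimator on rounding, and initialization from the mean absolute value. Names are illustrative.

import torch

def lsq_quantize(x, step, q_n, q_p):
    # Scale the step-size gradient by 1 / sqrt(numel * q_p) without changing
    # the forward value, then quantize with an STE on the rounding step.
    g = 1.0 / (x.numel() * q_p) ** 0.5
    s = step * g + (step - step * g).detach()
    v = torch.clamp(x / s, -q_n, q_p)
    v_q = (v.round() - v).detach() + v
    return v_q * s

x = torch.randn(64, requires_grad=True)
# Initialize the step size from the mean absolute value, as in the paper.
step = torch.tensor(2 * x.detach().abs().mean().item() / 7 ** 0.5, requires_grad=True)
y = lsq_quantize(x, step, q_n=8, q_p=7)      # signed 4-bit grid: [-8, 7]
y.sum().backward()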

Quantizing deep convolutional networks for efficient inference: A whitepaper

An overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations is presented, and per-channel quantization of weights with per-layer quantization of activations is recommended as the preferred scheme for hardware acceleration and kernel optimization.
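
The difference between the two recommended granularities reduces to how the scale is computed, as the sketch below illustrates: one scale for the whole tensor versus one per output channel. Shapes and names are assumptions for illustration.

import numpy as np

def per_tensor_scale(w, bits=8):
    # One symmetric scale for the whole tensor (per-layer scheme).
    return np.max(np.abs(w)) / (2 ** (bits - 1) - 1)

def per_channel_scales(w, bits=8):
    # One symmetric scale per output channel (axis 0 of an OIHW weight).
    w_flat = w.reshape(w.shape[0], -1)
    return np.max(np.abs(w_flat), axis=1) / (2 ** (bits - 1) - 1)

# Channels with very different ranges: a single shared scale wastes most of
# the integer grid on the small channels, per-channel scales do not.
w = np.random.randn(32, 16, 3, 3) * np.linspace(0.01, 1.0, 32).reshape(-1, 1, 1, 1)
print(per_tensor_scale(w), per_channel_scales(w)[:4])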

LSQ+: Improving low-bit quantization through learnable offsets and better initialization

LSQ+ is the first work to quantize architectures with Swish-like activations, such as EfficientNet and MixNet, to extremely low bit-widths; it shows state-of-the-art results for these models and significantly outperforms LSQ for low-bit quantization of neural networks with Swish activations.
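
A minimal sketch of an asymmetric quantizer with a learnable offset in the spirit of LSQ+ follows; the LSQ-style gradient scaling of the step size is omitted for brevity, and the names are illustrative.

import torch

def lsq_plus_quantize(x, step, offset, q_n, q_p):
    # Both the step size and the offset (zero point) are learnable, which
    # helps with activations such as Swish that take negative values.
    v = torch.clamp((x - offset) / step, q_n, q_p)
    v_q = (v.round() - v).detach() + v
    return v_q * step + offset

x = torch.randn(128, requires_grad=True)
step = torch.tensor(0.1, requires_grad=True)
offset = torch.tensor(-0.2, requires_grad=True)
y = lsq_plus_quantize(x, step, offset, q_n=0, q_p=15)   # unsigned 4-bit grid
y.sum().backward()                                      # step and offset get gradients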

Weighted-Entropy-Based Quantization for Deep Neural Networks

This paper proposes a novel method for quantizing weights and activations based on the concept of weighted entropy, which achieves significant reductions in both the model size and the amount of computation with minimal accuracy loss.
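
As a rough illustration of the weighted-entropy idea, the sketch below treats a weight's importance as its squared magnitude and computes the entropy of how that importance is distributed across candidate bins; the paper searches for boundaries that maximize this quantity and handles activations differently, so this is a concept sketch only, with illustrative names.

import numpy as np

def weighted_entropy(weights, boundaries):
    # Importance of a weight is taken as its squared magnitude; the entropy is
    # computed over the share of total importance falling into each bin.
    importance = weights ** 2
    bins = np.digitize(np.abs(weights), boundaries)
    p = np.array([importance[bins == b].sum() for b in range(len(boundaries) + 1)])
    p = p[p > 0] / importance.sum()
    return -(p * np.log2(p)).sum()

w = np.random.randn(10000) * 0.05
print(weighted_entropy(w, boundaries=[0.02, 0.05, 0.1]))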

Deep Compression: Compressing Deep Neural Network with Pruning, Trained Quantization and Huffman Coding

This work introduces "deep compression", a three-stage pipeline of pruning, trained quantization, and Huffman coding that works together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy.
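
The first two stages of that pipeline can be sketched in a few lines: magnitude pruning followed by weight sharing through a simple 1-D k-means with linearly initialized centroids; Huffman coding of the resulting indices is omitted. Names and hyperparameters are illustrative.

import numpy as np

def prune_and_share(w, prune_ratio=0.7, n_clusters=16):
    # Magnitude pruning, then weight sharing via a few Lloyd iterations of
    # 1-D k-means; Huffman coding of the cluster indices would follow.
    flat = w.flatten()
    mask = np.abs(flat) > np.quantile(np.abs(flat), prune_ratio)
    kept = flat[mask]
    centroids = np.linspace(kept.min(), kept.max(), n_clusters)
    for _ in range(10):
        assign = np.argmin(np.abs(kept[:, None] - centroids), axis=1)
        for c in range(n_clusters):
            if np.any(assign == c):
                centroids[c] = kept[assign == c].mean()
    shared = np.zeros_like(flat)
    shared[mask] = centroids[assign]
    return shared.reshape(w.shape)

w_shared = prune_and_share(np.random.randn(64, 64) * 0.1)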