Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks

@article{Hoang2020DirectQF,
  title={Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks},
  author={Tuan Hoang and Thanh-Toan Do and Tam V. Nguyen and Ngai-Man Cheung},
  journal={ArXiv},
  year={2020},
  volume={abs/2012.13762}
}
This paper proposes two novel techniques to train deep convolutional neural networks with low bit-width weights and activations. First, to obtain low bit-width weights, most existing methods quantize the full-precision network weights. However, this approach creates a mismatch: gradient descent updates the full-precision weights, but not the quantized weights. To address this issue, we propose a novel method that…
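
For context, here is a minimal PyTorch sketch of the conventional scheme the abstract contrasts with: weights are quantized in the forward pass while a straight-through estimator routes the gradient to the latent full-precision weights, so the quantized weights themselves are never directly optimized. The uniform quantizer and the 2-bit setting are illustrative, not the paper's proposed method.

```python
import torch

class STEQuantize(torch.autograd.Function):
    """Illustrative straight-through quantizer: quantize in the forward pass,
    pass the gradient through unchanged in the backward pass."""

    @staticmethod
    def forward(ctx, w, bits):
        n = 2 ** bits - 1
        # Uniform quantization of clipped weights to 2^bits levels in [-1, 1].
        return 2.0 * torch.round((torch.clamp(w, -1, 1) + 1) / 2 * n) / n - 1.0

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None          # identity gradient; no gradient for `bits`

# Gradient descent therefore updates the latent full-precision weights `w_fp`,
# never the quantized copy -- the mismatch the abstract points out.
w_fp = torch.randn(4, requires_grad=True)
loss = STEQuantize.apply(w_fp, 2).sum()
loss.backward()
print(w_fp.grad)                       # gradient lands on w_fp, not on the quantized weights
```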


Vertical Layering of Quantized Neural Networks for Heterogeneous Inference

Experiments show that the proposed vertical-layered representation and the developed once-QAT scheme effectively embody multiple quantized networks in a single model with one-time training, delivering performance comparable to quantized models tailored to any specific bit-width.

Collaborative Multi-Teacher Knowledge Distillation for Learning Low Bit-width Deep Neural Networks

This paper proposes a novel framework that combines multi-teacher knowledge distillation with network quantization for learning low bit-width DNNs. The compact quantized student models trained with the proposed method achieve results competitive with other state-of-the-art methods and, in some cases, surpass the full-precision models.
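
A generic sketch of the kind of distillation objective such a framework could pair with quantization; averaging the teachers' soft labels, the temperature `T`, and the mixing weight `alpha` are assumptions for illustration, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, labels,
                          T=4.0, alpha=0.7):
    # Average the teachers' softened predictions and distill them into the
    # low bit-width student, mixed with the usual cross-entropy term.
    soft_targets = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(dim=0)
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                  soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```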

References

Showing 1-10 of 32 references

Towards Effective Low-Bitwidth Convolutional Neural Networks

This paper tackles the problem of training a deep convolutional neural network with both low-precision weights and low-bitwidth activations by proposing a two-stage optimization strategy that progressively finds good local minima, together with a novel learning scheme that jointly trains a full-precision model alongside the low-precision one.
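
A toy PyTorch sketch of the two-stage idea under assumed uniform quantizers: stage 1 trains with quantized weights only, and stage 2 warm-starts from stage 1 and additionally quantizes activations. The `QLinear` layer and the bit-widths are illustrative, not the paper's exact operators.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ste_round(x):
    return x + (torch.round(x) - x).detach()     # round forward, identity backward

class QLinear(nn.Linear):
    """Toy quantized layer used to illustrate the two-stage strategy."""
    def __init__(self, *args, w_bits=None, a_bits=None, **kwargs):
        super().__init__(*args, **kwargs)
        self.w_bits, self.a_bits = w_bits, a_bits

    def forward(self, x):
        w = self.weight
        if self.w_bits:                           # stages 1 and 2: quantize weights
            n = 2 ** self.w_bits - 1
            w = 2 * ste_round((torch.clamp(w, -1, 1) + 1) / 2 * n) / n - 1
        if self.a_bits:                           # stage 2 only: quantize activations
            n = 2 ** self.a_bits - 1
            x = ste_round(torch.clamp(x, 0, 1) * n) / n
        return F.linear(x, w, self.bias)

# Stage 1: train with low-precision weights, full-precision activations.
stage1 = QLinear(16, 10, w_bits=2)
# Stage 2: warm-start from stage 1, then also quantize the activations.
stage2 = QLinear(16, 10, w_bits=2, a_bits=2)
stage2.load_state_dict(stage1.state_dict())
```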

Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks

Differentiable Soft Quantization (DSQ) is proposed to bridge the gap between full-precision and low-bit networks; it yields more accurate gradients in backward propagation and reduces the quantization loss in the forward pass given an appropriate clipping range.
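
A minimal sketch of a tanh-based soft quantizer in the spirit of DSQ; the sharpness `k`, the clipping range, and the interval-midpoint formulation are illustrative assumptions rather than the published parameterization.

```python
import math
import torch

def soft_quantize(x, bits=2, k=10.0, lo=-1.0, hi=1.0):
    """Each value is softly pulled toward the centre of its quantization
    interval, so the function stays differentiable everywhere."""
    n = 2 ** bits - 1
    delta = (hi - lo) / n                               # interval width
    x = torch.clamp(x, lo, hi)
    idx = torch.floor((x - lo) / delta).clamp(max=n - 1)
    m = lo + (idx + 0.5) * delta                        # interval midpoint
    s = 1.0 / math.tanh(k * delta / 2)                  # rescale tanh to the interval
    return m + 0.5 * delta * s * torch.tanh(k * (x - m))

x = torch.linspace(-1, 1, steps=9, requires_grad=True)
soft_quantize(x).sum().backward()                       # useful gradients, unlike a hard round
```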

LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

This work proposes to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, rather than using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization, in order to close the gap in prediction accuracy between the quantized model and the full-precision model.
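
A small sketch of the learned-quantizer idea, assuming the LQ-Nets form in which quantization levels are signed combinations of a learnable basis; the 2-bit basis values here are placeholders that would be trained jointly with the network.

```python
import itertools
import torch

def lq_quantize(x, basis):
    """Quantization levels are all signed combinations of a learnable basis
    (length K for K-bit codes); each value snaps to the nearest level."""
    K = basis.numel()
    codes = torch.tensor(list(itertools.product([-1.0, 1.0], repeat=K)))
    levels = codes @ basis                            # 2^K candidate levels
    idx = (x.unsqueeze(-1) - levels).abs().argmin(-1)
    return levels[idx]

basis = torch.tensor([0.5, 0.25])                     # would be learned in practice
print(lq_quantize(torch.tensor([-0.9, 0.1, 0.6]), basis))
```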

Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

A binary matrix multiplication GPU kernel is programmed that runs the MNIST QNN 7 times faster than an unoptimized GPU kernel, without any loss in classification accuracy.
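
The kernel itself is GPU code; the NumPy sketch below only illustrates the XNOR/popcount arithmetic such a binary matrix multiplication exploits for {-1, +1} matrices.

```python
import numpy as np

def binary_matmul(A, B):
    """Toy XNOR/popcount product for {-1, +1} matrices via bit-packing."""
    n = A.shape[1]
    a = np.packbits(A > 0, axis=1)                  # pack sign bits into bytes
    b = np.packbits(B.T > 0, axis=1)
    xnor = ~(a[:, None, :] ^ b[None, :, :])         # 1 where the bits agree
    matches = np.unpackbits(xnor, axis=2, count=n).sum(axis=2).astype(np.int64)
    return 2 * matches - n                          # dot product = matches - mismatches

A = np.where(np.random.randn(4, 16) > 0, 1, -1)
B = np.where(np.random.randn(16, 3) > 0, 1, -1)
assert np.array_equal(binary_matmul(A, B), A @ B)
```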

Trained Ternary Quantization

This work proposes Trained Ternary Quantization (TTQ), a method that reduces the precision of network weights to ternary values while improving the accuracy of some models (32-, 44-, and 56-layer ResNets) on CIFAR-10 and of AlexNet on ImageNet.
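
A sketch of the ternary rule in the usual TTQ formulation: a threshold splits weights into {-w_n, 0, +w_p}, with the two scales learned by backpropagation; the threshold heuristic here is a simplification.

```python
import torch

def trained_ternary(w, w_p, w_n, t=0.05):
    """Weights above the threshold map to a learned positive scale, weights
    below the negative threshold to a learned negative scale, the rest to zero."""
    thresh = t * w.abs().max()
    return torch.where(w > thresh, w_p * torch.ones_like(w),
           torch.where(w < -thresh, -w_n * torch.ones_like(w),
                       torch.zeros_like(w)))

w = torch.randn(3, 3)
w_p = torch.tensor(1.0, requires_grad=True)   # scales trained by backprop
w_n = torch.tensor(1.0, requires_grad=True)
print(trained_ternary(w, w_p, w_n))
```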

Deep Learning with Low Precision by Half-Wave Gaussian Quantization

A half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation; it admits an efficient implementation by exploiting the statistics of network activations and batch-normalization operations, and it achieves performance much closer to full-precision networks than previously available low-precision networks.
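
An illustrative stand-in for the half-wave Gaussian quantizer: levels are fitted by Lloyd-Max iterations to samples of a ReLU-ed unit Gaussian, which batch-normalized pre-activations roughly follow. The published HWGQ uses precomputed optimal levels; the sampling and level count here are assumptions.

```python
import numpy as np

def half_gaussian_levels(bits=2, n_samples=200_000, iters=50):
    """Fit positive quantization levels to the half-wave unit Gaussian."""
    rng = np.random.default_rng(0)
    x = np.abs(rng.standard_normal(n_samples))           # half-Gaussian samples
    levels = np.quantile(x, np.linspace(0.2, 0.9, 2 ** bits - 1))
    for _ in range(iters):                               # Lloyd-Max iterations
        edges = (levels[:-1] + levels[1:]) / 2           # decision boundaries
        idx = np.digitize(x, edges)
        levels = np.array([x[idx == i].mean() for i in range(len(levels))])
    return levels

def hwgq(a, levels):
    # Forward quantizer: negatives go to zero, positives snap to the nearest level.
    a = np.maximum(a, 0.0)
    q = levels[np.abs(a[..., None] - levels).argmin(-1)]
    return np.where(a > 0, q, 0.0)

print(hwgq(np.random.randn(5), half_gaussian_levels()))
```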

DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients

DoReFa-Net, a method to train convolutional neural networks with low-bitwidth weights and activations using low-bitwidth parameter gradients, is proposed and achieves prediction accuracy comparable to 32-bit counterparts.
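
A sketch of the DoReFa weight and activation quantizers following the formulas in the paper; the straight-through estimator and the stochastic gradient quantization are omitted for brevity.

```python
import torch

def quantize_k(x, k):
    """Round a value in [0, 1] to 2^k - 1 uniform steps."""
    n = 2 ** k - 1
    return torch.round(x * n) / n

def dorefa_weights(w, k):
    # Squash with tanh, rescale into [0, 1], quantize, then map back to [-1, 1].
    t = torch.tanh(w)
    return 2 * quantize_k(t / (2 * t.abs().max()) + 0.5, k) - 1

def dorefa_acts(a, k):
    # Activations are clipped to [0, 1] before uniform quantization.
    return quantize_k(torch.clamp(a, 0, 1), k)

print(dorefa_weights(torch.randn(5), k=2), dorefa_acts(torch.rand(5), k=2))
```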

Loss-aware Weight Quantization of Deep Networks

Experiments on feedforward and recurrent neural networks show that the proposed scheme outperforms state-of-the-art weight quantization algorithms and is as accurate as (or even more accurate than) the full-precision network.

How to Train a Compact Binary Neural Network with High Accuracy?

The findings first reveal that a low learning rate is preferred to avoid the frequent sign changes of the weights that often make the learning of BinaryNets unstable; a regularization term is then introduced that encourages the weights to be bipolar.
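
One common form such a bipolar regularizer can take (an illustrative choice, not necessarily the exact term used in the reference): a penalty that vanishes only when every weight is exactly +1 or -1.

```python
import torch

def bipolar_regularizer(weights, lam=1e-4):
    """Penalty pushing each weight toward {-1, +1}; added to the task loss."""
    return lam * sum(((1.0 - w ** 2) ** 2).sum() for w in weights)

params = [torch.randn(8, 8, requires_grad=True)]
loss = bipolar_regularizer(params)
loss.backward()                      # gradient drives weights toward +/-1
```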

Structured Binary Neural Networks for Accurate Image Classification and Semantic Segmentation

This paper proposes a "network decomposition" strategy, named Group-Net, in which each full-precision group is effectively reconstructed by aggregating a set of homogeneous binary branches, and shows strong generalization to other tasks such as semantic segmentation.
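
A sketch of the decomposition idea under assumed details: a convolution is approximated by a scaled sum of binary-weight branches; the branch count, the learned scales, and the straight-through binarization are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def binarize(w):
    # Sign binarization with a straight-through gradient.
    return w + (torch.sign(w) - w).detach()

class BinaryBranchConv(nn.Module):
    """A full-precision convolution approximated by a learned, scaled sum of
    homogeneous binary-weight branches."""
    def __init__(self, in_ch, out_ch, branches=4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
            for _ in range(branches))
        self.scales = nn.Parameter(torch.ones(branches))

    def forward(self, x):
        return sum(s * F.conv2d(x, binarize(c.weight), padding=1)
                   for s, c in zip(self.scales, self.convs))

y = BinaryBranchConv(3, 8)(torch.randn(1, 3, 16, 16))   # -> (1, 8, 16, 16)
```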