Corpus ID: 246285633

Post-training Quantization for Neural Networks with Provable Guarantees

@article{Zhang2022PosttrainingQF,
  title={Post-training Quantization for Neural Networks with Provable Guarantees},
  author={Jinjie Zhang and Yixuan Zhou and Rayan Saab},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.11113}
}
While neural networks have been remarkably successful in a wide array of applications, implementing them in resource-constrained hardware remains an area of intense research. By replacing the weights of a neural network with quantized (e.g., 4-bit or binary) counterparts, massive savings in computation cost, memory, and power consumption are attained. To that end, we generalize a post-training neural-network quantization method, GPFQ, that is based on a greedy path-following mechanism. Among…
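As a rough illustration of the greedy path-following idea, the sketch below quantizes one neuron's weights sequentially, at each step rounding so as to cancel the error accumulated on a batch of calibration inputs. It is a minimal reading of the GPFQ update, not the authors' implementation; the calibration matrix, the ternary alphabet, and the function names are assumptions.

import numpy as np

def gpfq_quantize_neuron(w, X, alphabet):
    # Greedy path-following quantization (sketch) of a single neuron.
    # w        : (N,) full-precision weights
    # X        : (m, N) calibration inputs; column t is the t-th input feature
    # alphabet : 1-D array of allowed quantized values
    N = w.shape[0]
    q = np.zeros(N)
    u = np.zeros(X.shape[0])              # running residual of the neuron's output
    for t in range(N):
        Xt = X[:, t]
        denom = Xt @ Xt
        if denom == 0:                    # dead feature: fall back to the weight itself
            target = w[t]
        else:
            target = Xt @ (u + w[t] * Xt) / denom
        q[t] = alphabet[np.argmin(np.abs(alphabet - target))]   # memoryless rounding step
        u += (w[t] - q[t]) * Xt           # carry the new error forward
    return q

# Toy usage: random calibration data and a scaled ternary alphabet
rng = np.random.default_rng(0)
X = rng.standard_normal((128, 64))
w = rng.standard_normal(64)
alphabet = np.array([-1.0, 0.0, 1.0]) * np.max(np.abs(w))
q = gpfq_quantize_neuron(w, X, alphabet)
print(np.linalg.norm(X @ w - X @ q) / np.linalg.norm(X @ w))

The final line reports the relative error of the quantized neuron's pre-activation output on the calibration data.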

Citations

A simple approach for quantizing neural networks

A simple deterministic pre-processing step allows network layers to be quantized via memoryless scalar quantization while preserving the network's performance on the given training data.
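For contrast with the path-following scheme, memoryless scalar quantization rounds each weight independently to the nearest alphabet element. The sketch below shows only that rounding step; the paper's deterministic pre-processing is not reproduced, and the function name is an assumption.

import numpy as np

def msq(w, alphabet):
    # Memoryless scalar quantization: each weight is rounded independently
    # to the nearest element of the alphabet.
    w = np.asarray(w).ravel()
    idx = np.argmin(np.abs(w[:, None] - alphabet[None, :]), axis=1)
    return alphabet[idx]

print(msq([0.7, -0.2, 0.1], np.array([-1.0, 0.0, 1.0])))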

Relaxed Quantization and Binarization for Neural Networks

This thesis aims to compare and improve methods for training QNNs so that the gap between quantized and full-precision models closes; it proposes simplifications to sampling-based methods and suggests that probabilistic propagation can be used for pretraining.

CEG4N: Counter-Example Guided Neural Network Quantization Refinement

This work proposes Counter-Example Guided Neural Network Quantization Refinement (CEG4N), a technique that combines search-based quantization with equivalence verification: the former minimizes the computational requirements of the network, while the latter guarantees that the network's output does not change after quantization.
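Below is a hedged sketch of a counter-example-guided loop in this spirit, using a toy one-layer ReLU network, symmetric uniform quantization, and a random-sampling "verifier" that stands in for the formal equivalence verifier CEG4N relies on. In this simplified loop a counter-example merely triggers a higher bit-width instead of being fed back into the search; all names and tolerances are illustrative.

import numpy as np

def quantize(W, bits):
    # Symmetric uniform quantization of a weight matrix to `bits` bits.
    s = np.max(np.abs(W)) / (2 ** (bits - 1) - 1)
    return np.round(W / s) * s

def net(W, x):
    return np.maximum(W @ x, 0.0)         # toy one-layer ReLU network

def find_counter_example(W, Wq, eps, box=1.0, trials=2000, seed=0):
    # Stand-in "verifier": random search for an input where the outputs
    # of the original and quantized networks differ by more than eps.
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x = rng.uniform(-box, box, W.shape[1])
        if np.max(np.abs(net(W, x) - net(Wq, x))) > eps:
            return x
    return None

def ceg_quantize(W, eps=0.1, min_bits=2, max_bits=8):
    # Counter-example-guided loop: start from the smallest bit-width and
    # refine (increase bits) whenever a counter-example is found.
    for bits in range(min_bits, max_bits + 1):
        Wq = quantize(W, bits)
        if find_counter_example(W, Wq, eps) is None:
            return Wq, bits
    return W, None                        # no candidate met the tolerance

W = np.random.default_rng(1).standard_normal((4, 8)) * 0.5
print("bits needed:", ceg_quantize(W)[1])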

References

SHOWING 1-10 OF 32 REFERENCES

Low-bit Quantization of Neural Networks for Efficient Inference

This paper formalizes the linear quantization task as a Minimum Mean Squared Error (MMSE) problem for both weights and activations, allowing low-bit precision inference without the need for full network retraining.
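The sketch below illustrates the MMSE formulation for weights with a plain grid search over the step size of a symmetric uniform quantizer; the search range and granularity are assumptions, and activations would be handled analogously over a calibration set.

import numpy as np

def mmse_uniform_quantize(w, bits=4, num_candidates=100):
    # Grid-search the clipping scale that minimizes the mean squared
    # quantization error of a symmetric uniform quantizer.
    qmax = 2 ** (bits - 1) - 1
    w = np.asarray(w)
    if np.max(np.abs(w)) == 0:
        return w
    best_err, best_wq = np.inf, w
    for frac in np.linspace(0.2, 1.0, num_candidates):
        s = frac * np.max(np.abs(w)) / qmax
        wq = s * np.clip(np.round(w / s), -qmax, qmax)
        err = np.mean((w - wq) ** 2)
        if err < best_err:
            best_err, best_wq = err, wq
    return best_wq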

Post-training Piecewise Linear Quantization for Deep Neural Networks

A piecewise linear quantization (PWLQ) scheme is proposed to enable accurate approximation of tensor values that have bell-shaped distributions with long tails; it achieves superior performance on image classification, semantic segmentation, and object detection with minor overhead.
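A minimal sketch of the piecewise idea follows, assuming two magnitude regions split at a breakpoint and an equal number of levels per region; the paper's actual bit allocation, offsets, and breakpoint optimization are not reproduced.

import numpy as np

def pwlq(w, bits=4, num_breaks=50):
    # Try candidate breakpoints p; quantize |w| <= p and |w| > p with
    # separate uniform grids and keep the breakpoint with the lowest MSE.
    w = np.asarray(w, dtype=float)
    m = np.max(np.abs(w))
    n = 2 ** (bits - 1)                   # levels per region (an assumption)

    def uquant(x, lo, hi):
        step = (hi - lo) / max(n - 1, 1)
        return np.sign(x) * (lo + np.round((np.abs(x) - lo) / step) * step)

    best_err, best_wq = np.inf, w
    for p in np.linspace(0.1 * m, 0.9 * m, num_breaks):
        inner = np.abs(w) <= p
        wq = np.empty_like(w)
        wq[inner] = uquant(w[inner], 0.0, p)
        wq[~inner] = uquant(w[~inner], p, m)
        err = np.mean((w - wq) ** 2)
        if err < best_err:
            best_err, best_wq = err, wq
    return best_wq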

Post training 4-bit quantization of convolutional networks for rapid-deployment

This paper introduces the first practical 4-bit post-training quantization approach: it does not involve training the quantized model (fine-tuning), nor does it require the availability of the full dataset, and it achieves accuracy that is just a few percent below the state-of-the-art baseline across a wide range of convolutional models.

Quantizing deep convolutional networks for efficient inference: A whitepaper

An overview of techniques for quantizing convolutional neural networks for inference with integer weights and activations is presented, and it is recommended that per-channel quantization of weights and per-layer quantization of activations be the preferred quantization scheme for hardware acceleration and kernel optimization.
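In practice the recommendation amounts to giving each output channel of a weight tensor its own scale, while activations share a single per-layer scale. A small sketch of the per-channel part, assuming output channels along the first axis:

import numpy as np

def quantize_per_channel(W, bits=8):
    # One scale per output channel (row); activations would instead use a
    # single per-tensor scale computed from calibration statistics.
    qmax = 2 ** (bits - 1) - 1
    scales = np.max(np.abs(W), axis=1, keepdims=True) / qmax
    scales = np.where(scales == 0, 1.0, scales)     # avoid dividing by all-zero rows
    return np.round(W / scales) * scales, scales.ravel()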

LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks

This work proposes to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization, to address the gap in prediction accuracy between the quantized model and the full-precision model.

A Greedy Algorithm for Quantizing Neural Networks

We propose a new computationally efficient method for quantizing the weights of pre-trained neural networks that is general enough to handle both multi-layer perceptrons and convolutional neural networks.

Data-Free Quantization Through Weight Equalization and Bias Correction

We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks.
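The equalization step can be sketched for two fully connected layers separated by a ReLU: output channel i of the first layer is divided by a scale s_i and the matching input channel of the second layer is multiplied by it, with s_i chosen so the per-channel weight ranges of the two layers are balanced. The bias-correction step is not shown, and the helper below is an illustration under these assumptions, not the paper's code.

import numpy as np

def equalize_two_layers(W1, b1, W2, eps=1e-12):
    # Function-preserving rescaling across a ReLU (positive scales s):
    # W2 @ relu(W1 @ x + b1) == (W2 * s) @ relu((W1 / s[:, None]) @ x + b1 / s)
    r1 = np.max(np.abs(W1), axis=1)       # per-output-channel range of layer 1
    r2 = np.max(np.abs(W2), axis=0)       # per-input-channel range of layer 2
    s = np.sqrt(r1 * r2) / (r2 + eps)     # balances the two ranges
    s = np.where(s == 0, 1.0, s)          # leave dead channels untouched
    return W1 / s[:, None], b1 / s, W2 * s[None, :]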

Up or Down? Adaptive Rounding for Post-Training Quantization

AdaRound is proposed, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss; it outperforms rounding-to-nearest by a significant margin and establishes a new state of the art for post-training quantization on several networks and tasks.
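A simplified sketch of the adaptive-rounding idea follows, assuming PyTorch, a single linear layer, rounding variables initialized at 0.5, and an annealed regularizer; the paper initializes the variables from the weights' fractional parts and uses a more careful schedule, so treat this as an illustration only.

import torch

def adaround_layer(W, X, bits=4, iters=1000, lam=0.01, lr=1e-2):
    # Learn a per-weight rounding direction so the quantized layer output
    # matches the full-precision output on calibration data X.
    qmax = 2 ** (bits - 1) - 1
    s = W.abs().max() / qmax              # quantization step
    Wf = torch.floor(W / s)
    V = torch.zeros_like(W, requires_grad=True)
    opt = torch.optim.Adam([V], lr=lr)
    Y = X @ W.T                           # full-precision reference output
    for i in range(iters):
        h = torch.clamp(torch.sigmoid(V) * 1.2 - 0.1, 0, 1)    # rectified sigmoid
        Wq = s * torch.clamp(Wf + h, -qmax, qmax)
        beta = 20 - 18 * i / iters        # anneal sharpness of the regularizer
        reg = (1 - (2 * h - 1).abs() ** beta).sum()             # push h toward 0 or 1
        loss = ((X @ Wq.T - Y) ** 2).mean() + lam * reg
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return s * torch.clamp(Wf + (V >= 0).float(), -qmax, qmax)

W = torch.randn(16, 32)
X = torch.randn(256, 32)
Wq = adaround_layer(W, X)
print(((X @ Wq.T - X @ W.T) ** 2).mean().item())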

Improving Post Training Neural Quantization: Layer-wise Calibration and Integer Programming

Two pipelines, advanced and light, are introduced; the former minimizes the quantization errors of each layer by optimizing its parameters over the calibration set and uses integer programming to optimally allocate the desired bit-width for each layer while constraining accuracy degradation or model compression.
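As a toy stand-in for the integer program, the sketch below exhaustively searches a small menu of bit-widths per layer, minimizing total model size subject to a budget on the summed per-layer degradation; the layer sizes and degradation estimates are hypothetical numbers, and a real run would obtain them from a calibration set and use an ILP solver.

from itertools import product

def allocate_bits(layer_sizes, degradation, budget):
    # Choose one bit-width per layer from the menu so that the summed
    # degradation stays under the budget and the model size is minimized.
    choices = sorted(degradation[0].keys())
    best_size, best_assign = float("inf"), None
    for assign in product(choices, repeat=len(layer_sizes)):
        if sum(degradation[i][b] for i, b in enumerate(assign)) > budget:
            continue
        size = sum(n * b for n, b in zip(layer_sizes, assign))
        if size < best_size:
            best_size, best_assign = size, assign
    return best_assign

layer_sizes = [1000, 4000, 500]                    # parameters per layer (hypothetical)
degradation = [{2: 0.80, 4: 0.20, 8: 0.02},        # estimated accuracy loss per bit-width
               {2: 0.50, 4: 0.10, 8: 0.01},
               {2: 0.30, 4: 0.05, 8: 0.00}]
print(allocate_bits(layer_sizes, degradation, budget=0.4))   # -> (4, 4, 4)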

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

BinaryConnect is introduced, a method which consists of training a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results with BinaryConnect are obtained on permutation-invariant MNIST, CIFAR-10, and SVHN.
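A minimal sketch of the core of such a training step follows, assuming PyTorch and a single linear layer: weights are binarized with a sign function in the forward pass, gradients pass straight through to the stored real-valued weights, and those weights are updated and clipped. The straight-through backward and the clipping range follow common practice; the rest of the setup is an assumption.

import torch

class BinarizeSTE(torch.autograd.Function):
    # Binarize in the forward pass; pass the gradient straight through so it
    # accumulates in the full-precision weights.
    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)
    @staticmethod
    def backward(ctx, grad_out):
        return grad_out

def binaryconnect_step(w_real, x, y, lr=0.1):
    # One training step: forward/backward with binary weights, then update
    # and clip the stored real-valued weights.
    wb = BinarizeSTE.apply(w_real)
    loss = ((x @ wb.T - y) ** 2).mean()
    loss.backward()
    with torch.no_grad():
        w_real -= lr * w_real.grad
        w_real.clamp_(-1, 1)              # keep the real weights in [-1, 1]
        w_real.grad.zero_()
    return loss.item()

w = torch.randn(1, 8, requires_grad=True)
x, y = torch.randn(32, 8), torch.randn(32, 1)
for _ in range(5):
    print(binaryconnect_step(w, x, y))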