Combinatorial optimization for low bit-width neural networks

@inproceedings{Zhou2022CombinatorialOF,
  title={Combinatorial optimization for low bit-width neural networks},
  author={Hanxu Zhou and Aida Ashrafi and Matthew B. Blaschko},
  booktitle={2022 26th International Conference on Pattern Recognition (ICPR)},
  year={2022},
  pages={2246-2252}
}
Low bit-width neural networks have been extensively explored for deployment on edge devices to reduce computational resources. Existing approaches have focused on gradient-based optimization, either in a two-stage train-and-compress setting or as a combined optimization where gradients are quantized during training. Such schemes require high-performance hardware during the training phase and usually store a full set of full-precision weights alongside the quantized weights. In this paper, we…
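To make the "train-and-compress" setting mentioned above concrete, here is a minimal NumPy sketch of post-training uniform quantization of an already-trained weight tensor to k bits. The tensor shape, bit-width, and symmetric round-to-nearest scheme are illustrative assumptions, not the method proposed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 128)).astype(np.float32)   # hypothetical trained FP32 weights
k = 2                                                    # hypothetical target bit-width

# Symmetric uniform ("round-to-nearest") quantization onto 2**k integer levels.
q_min, q_max = -(2 ** (k - 1)), 2 ** (k - 1) - 1         # e.g. k=2 -> integers in [-2, 1]
scale = np.abs(W).max() / (2 ** (k - 1))
q = np.clip(np.round(W / scale), q_min, q_max).astype(np.int8)
W_q = q.astype(np.float32) * scale                       # dequantized weights used at inference

print("unique levels:", np.unique(q).size,
      "quantization MSE:", float(((W - W_q) ** 2).mean()))
```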


References

Showing 1-10 of 23 references

BinaryConnect: Training Deep Neural Networks with binary weights during propagations

BinaryConnect is introduced, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated. Near state-of-the-art results with BinaryConnect are obtained on permutation-invariant MNIST, CIFAR-10, and SVHN.
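As an illustration of the scheme this summary describes (binary weights in the forward and backward passes, gradients accumulated in full-precision "shadow" weights), here is a minimal PyTorch sketch on a toy regression problem. The data, dimensions, learning rate, and deterministic sign binarization are assumptions for illustration, not the paper's exact recipe.

```python
import torch

torch.manual_seed(0)

# toy regression data
X = torch.randn(256, 16)
true_w = torch.randn(16, 1)
y = X @ true_w

W = torch.randn(16, 1) * 0.1           # full-precision "shadow" weights
lr = 0.01

for step in range(200):
    Wb = torch.sign(W)                 # binary weights used for the forward/backward pass
    Wb.requires_grad_(True)
    loss = ((X @ Wb - y) ** 2).mean()
    loss.backward()                    # gradient is taken w.r.t. the binary weights
    with torch.no_grad():
        W -= lr * Wb.grad              # but the update is accumulated in full precision
        W.clamp_(-1.0, 1.0)            # keep the shadow weights bounded

print("final loss:", float(loss))
```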

Greedy Layer-Wise Training of Deep Networks

These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input and thereby bringing better generalization.

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

The Binary-Weight-Network version of AlexNet is compared with recent network binarization methods, BinaryConnect and BinaryNets, and outperforms them by large margins on ImageNet: more than 16% in top-1 accuracy.
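The binary-weight approximation behind this comparison can be sketched in a few lines: each filter is replaced by its sign tensor times a per-filter scaling factor equal to the mean absolute weight. The tensor shape below is a hypothetical example, and this shows only the binary-weight part, not the full XNOR-Net with binarized inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 3, 3, 3))     # hypothetical conv weights (out, in, kH, kW)

# Binary-weight approximation: W ~ alpha * sign(W), one alpha per output filter.
alpha = np.abs(W).reshape(W.shape[0], -1).mean(axis=1)   # per-filter mean |W|
B = np.sign(W)                                           # binary tensor
W_approx = alpha[:, None, None, None] * B

print("relative reconstruction error:",
      np.linalg.norm(W - W_approx) / np.linalg.norm(W))
```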

Binary Neural Networks: A Survey

Deep Learning with Low Precision by Half-Wave Gaussian Quantization

A half-wave Gaussian quantizer (HWGQ) is proposed for forward approximation and shown to have an efficient implementation by exploiting the statistics of network activations and batch normalization operations, achieving performance much closer to full-precision networks than previously available low-precision networks.
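A rough sketch of the idea: pre-activations after batch normalization are approximately standard normal, so the ReLU output is approximately half-wave Gaussian, and a small set of quantization levels can be fit to that distribution once and reused across layers. The Lloyd-Max fit below is an illustrative stand-in for the HWGQ construction, and the number of levels is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
# Samples distributed like the positive part of a ReLU'd standard normal.
pos = np.abs(rng.standard_normal(100_000))

def lloyd_levels(samples, n_levels, iters=100):
    """Plain Lloyd-Max quantizer fit (illustrative, not the exact HWGQ derivation)."""
    levels = np.quantile(samples, np.linspace(0.1, 0.9, n_levels))  # rough initialization
    for _ in range(iters):
        edges = (levels[:-1] + levels[1:]) / 2.0
        idx = np.digitize(samples, edges)
        levels = np.array([samples[idx == j].mean() for j in range(n_levels)])
    return levels

levels = lloyd_levels(pos, n_levels=3)   # e.g. a 2-bit quantizer: {0} plus 3 positive levels

def hwgq_like(x, levels):
    """Half-wave quantizer: non-positive inputs to 0, positive inputs to the nearest level."""
    y = np.zeros_like(x)
    mask = x > 0
    y[mask] = levels[np.argmin(np.abs(x[mask, None] - levels[None, :]), axis=1)]
    return y

print("positive levels:", levels)
print(hwgq_like(np.array([-0.3, 0.2, 0.9, 2.5]), levels))
```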

Neural Networks with Few Multiplications

Experimental results show that this approach to training, which eliminates the need for floating-point multiplications, can result in even better performance than standard stochastic gradient descent training, paving the way to fast, hardware-friendly training of neural networks.

Regularizing Activation Distribution for Training Binarized Deep Networks

The experiments show that the distribution loss can consistently improve the accuracy of BNNs without losing their energy benefits; equipped with the proposed regularization, BNN training is shown to be robust to the choice of hyper-parameters, including the optimizer and learning rate.

Ternary Weight Networks

TWNs are introduced: neural networks with weights constrained to +1, 0, and -1, which have stronger expressive ability than the recently proposed binary-precision counterparts and are thus more effective.
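A minimal sketch of the kind of ternarization TWNs use: weights below a threshold are zeroed, the rest are mapped to ±1, and a single scaling factor absorbs the magnitude. The constant 0.7·mean(|W|) is the commonly cited closed-form approximation of the optimal threshold for roughly Gaussian weights; the tensor size is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal(4096)                            # hypothetical weight vector

# TWN-style ternarization with a closed-form threshold and per-tensor scaling.
delta = 0.7 * np.abs(W).mean()
T = np.where(W > delta, 1, np.where(W < -delta, -1, 0))  # ternary weights in {-1, 0, +1}
alpha = np.abs(W[T != 0]).mean()                         # scaling factor for the nonzeros

print("sparsity:", float((T == 0).mean()), "alpha:", float(alpha))
```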

A Convolutional Result Sharing Approach for Binarized Neural Network Inference

The binary-weight, binary-input binarized neural network (BNN) allows a much more efficient implementation of convolutional neural networks (CNNs) on mobile platforms, and the number of operations in the convolution layers of BNNs can be reduced effectively.
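The operation count being reduced here comes from the fact that, with weights and inputs in {-1, +1}, an inner product collapses to an XNOR followed by a popcount. The sketch below shows only that base identity, not the result-sharing scheme itself; the vector length and encoding are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# With w_i, x_i in {-1, +1}: sum(w_i * x_i) = 2 * popcount(XNOR(w_bits, x_bits)) - n.
w = rng.choice([-1, 1], size=64)
x = rng.choice([-1, 1], size=64)

w_bits = w > 0                         # encode +1 as 1, -1 as 0
x_bits = x > 0
matches = ~(w_bits ^ x_bits)           # XNOR: True where the signs agree
dot_bnn = 2 * matches.sum() - w.size   # popcount-based dot product

assert dot_bnn == int(np.dot(w, x))    # matches the ordinary inner product
print("dot product:", int(dot_bnn))
```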

Greedy Layerwise Learning Can Scale to ImageNet

This work uses 1-hidden-layer learning problems to sequentially build deep networks layer by layer, which can inherit properties from shallow networks. It obtains an 11-layer network that exceeds several members of the VGG model family on ImageNet and can train a VGG-11 model to the same accuracy as end-to-end learning.
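A minimal PyTorch sketch of the layer-by-layer recipe this summary describes: each new block is trained as a 1-hidden-layer problem with its own auxiliary classifier on top of the frozen earlier layers, then frozen and stacked. The toy data, widths, depth, and optimizer settings are assumptions for illustration.

```python
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(512, 32)                      # toy features standing in for images
y = torch.randint(0, 10, (512,))

frozen = nn.Identity()                        # layers trained so far (frozen)
width, depth, n_classes = 64, 3, 10

for k in range(depth):
    in_dim = 32 if k == 0 else width
    block = nn.Sequential(nn.Linear(in_dim, width), nn.ReLU())
    head = nn.Linear(width, n_classes)        # auxiliary classifier: a 1-hidden-layer problem
    opt = torch.optim.SGD(list(block.parameters()) + list(head.parameters()), lr=0.1)

    with torch.no_grad():
        feats = frozen(X)                     # earlier layers are frozen; only this block learns
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(head(block(feats)), y)
        loss.backward()
        opt.step()

    frozen = nn.Sequential(frozen, block)     # freeze the trained block and stack the next one
    for p in frozen.parameters():
        p.requires_grad_(False)

print("trained", depth, "blocks greedily; final auxiliary loss:", float(loss))
```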