• Corpus ID: 59600020

Self-Binarizing Networks

@article{Lahoud2019SelfBinarizingN,
  title={Self-Binarizing Networks},
  author={Fayez Lahoud and Radhakrishna Achanta and Pablo M{\'a}rquez-Neila and Sabine S{\"u}sstrunk},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.00730}
}
We present a method to train self-binarizing neural networks, that is, networks that evolve their weights and activations during training to become binary. To obtain similar binary networks, existing methods rely on the sign activation function. This function, however, has zero gradient for all non-zero inputs and is non-differentiable at zero, so it provides no useful signal for standard backpropagation. To circumvent the difficulty of training a network relying on the sign activation function, these methods alternate between floating-point and…
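The difficulty the abstract points to can be made concrete with a small example. Below is a minimal sketch, assuming a scaled-tanh surrogate tanh(νx) whose slope ν is increased during training so that activations harden toward ±1 while remaining differentiable everywhere; the class name, the doubling schedule, and the PyTorch framing are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SoftSign(nn.Module):
    """Soft binarization surrogate: tanh(nu * x).

    As the slope nu grows, tanh(nu * x) approaches sign(x) while staying
    differentiable, so ordinary backpropagation can be used throughout
    training. The schedule for nu and all names here are illustrative
    assumptions, not the paper's implementation.
    """

    def __init__(self, nu: float = 1.0):
        super().__init__()
        self.nu = nu

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.tanh(self.nu * x)


# Example: anneal the slope so activations harden toward {-1, +1}.
act = SoftSign(nu=1.0)
x = torch.randn(4, requires_grad=True)
for epoch in range(3):
    y = act(x)
    y.sum().backward()        # gradients exist everywhere, unlike sign(x)
    x.grad = None
    act.nu *= 2.0             # increase the slope each epoch (assumed schedule)
print(torch.sign(x), act(x))  # act(x) approaches sign(x) as nu grows
```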

Citations

A Bop and Beyond: A Second Order Optimizer for Binarized Neural Networks
TLDR
This paper presents two versions of the proposed optimizer, a biased one and a bias-corrected one, each with its own applications, together with a complete ablation study of the hyperparameter space and of the effect of using schedulers on each of them.
Training Binarized Neural Networks Using MIP and CP
TLDR
The experimental results on the MNIST digit recognition dataset suggest that, when training data is limited, the BNNs found by the model-based approach generalize better than those obtained from a state-of-the-art gradient descent method.
QuantNet: Learning to Quantize by Learning within Fully Differentiable Framework
TLDR
A meta-based quantizer named QuantNet is proposed, which utilizes a differentiable sub-network to directly binarize the full-precision weights without resorting to the straight-through estimator (STE) or any learnable gradient estimators.
Forward and Backward Information Retention for Accurate Binary Neural Networks
TLDR
The proposed Information Retention Network (IR-Net) is the first to investigate both forward and backward processes of binary networks from the unified information perspective, which provides new insight into the mechanism of network binarization.
Training Progressively Binarizing Deep Networks using FPGAs
TLDR
This paper proposes a hardware-friendly training method that progressively binarizes a singular set of fixed-point network parameters, yielding notable reductions in power and resource utilization.
Distribution-sensitive Information Retention for Accurate Binary Neural Network
TLDR
The DIR-Net investigates both the forward and backward processes of BNNs from a unified information perspective, thereby providing new insight into the mechanism of network binarization.
FleXOR: Trainable Fractional Quantization
TLDR
This paper proposes an encryption algorithm/architecture to compress quantized weights so as to achieve fractional numbers of bits per weight, and shows that inserting XOR gates lets the quantization/encryption bit decisions be learned through training, obtaining high accuracy even for fractional, sub-1-bit weights.
BSTC: a novel binarized-soft-tensor-core design for accelerating bit-based approximated neural nets
TLDR
Experiments show that the Singular-Binarized-Neural-Network (SBNN) design can achieve over 1000X speedup in raw inference latency over state-of-the-art full-precision inference for AlexNet on GPUs.
Accelerating Binarized Neural Networks via Bit-Tensor-Cores in Turing GPUs
  • Ang Li, Simon Su · IEEE Transactions on Parallel and Distributed Systems · 2021
TLDR
It is shown that the stride of memory access can significantly affect performance delivery, and that a data-format co-design is highly desirable for the tensor-core-accelerated BNN design to achieve superior performance over existing software solutions without tensor cores.
...

References

Showing 1-10 of 37 references
Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TLDR
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
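To give a sense of why a binary matrix-multiplication kernel can be so much faster, here is an illustrative sketch (not the paper's GPU kernel) of a binary dot product computed with XNOR and popcount on bit-packed vectors; the helper names and packing convention are hypothetical.

```python
import numpy as np

def binarize_pack(v):
    """Map a float vector to bits (1 for >= 0, 0 for < 0) packed into bytes."""
    return np.packbits(v >= 0)

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors of length n from their packed bits."""
    xnor = np.unpackbits(~(a_bits ^ b_bits))[:n]  # 1 where the signs agree
    matches = int(xnor.sum())
    return 2 * matches - n                        # +1 per match, -1 per mismatch

a = np.random.randn(64)
b = np.random.randn(64)
ref = int(np.sign(a) @ np.sign(b))                # assumes no exact zeros in a, b
assert binary_dot(binarize_pack(a), binarize_pack(b), a.size) == ref
```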
Loss-aware Binarization of Deep Networks
TLDR
This paper proposes a proximal Newton algorithm with diagonal Hessian approximation that directly minimizes the loss w.r.t. the binarized weights in deep neural network models.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR
Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
TLDR
BinaryConnect is introduced, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results are obtained with BinaryConnect on the permutation-invariant MNIST, CIFAR-10, and SVHN.
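The mechanism described in this summary, binary weights during propagation with gradients accumulated in full-precision weights, can be sketched as follows; this is a simplified illustration under assumed details (plain SGD, squared-error loss, no weight clipping), not the reference implementation.

```python
import torch

torch.manual_seed(0)
w_real = torch.randn(8, 4)        # full-precision weights where gradients accumulate
x = torch.randn(16, 8)
target = torch.randn(16, 4)
lr = 0.01

for step in range(5):
    w_bin = torch.sign(w_real)    # binary weights used during propagation
    w_bin.requires_grad_(True)
    loss = ((x @ w_bin - target) ** 2).mean()
    loss.backward()
    # gradient computed w.r.t. the binary weights is applied to the stored
    # real-valued weights (clipping omitted for brevity)
    w_real -= lr * w_bin.grad
```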
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Deep Residual Learning for Image Recognition
TLDR
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Towards Accurate Binary Convolutional Neural Network
TLDR
The implementation of the resulting binary CNN, denoted ABC-Net, is shown to achieve performance much closer to its full-precision counterpart, even reaching comparable prediction accuracy on the ImageNet and forest-trail datasets given adequate binary weight bases and activations.
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
TLDR
A binary matrix multiplication GPU kernel is programmed with which it is possible to run the MNIST QNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
...