• Corpus ID: 238743860

A comprehensive review of Binary Neural Network

Chunyu Yuan and Sos S. Agaian
The Binary Neural Network (BNN) method is an extreme application of convolutional neural network (CNN) parameter quantization. As opposed to original CNN methods, which employ floating-point computation with full-precision weights and activations, a BNN uses 1-bit activations and weights. With BNNs, a significant amount of storage, network complexity, and energy consumption can be saved, and neural networks can be implemented more efficiently in embedded applications. Unfortunately… 
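For intuition, binarizing weights and activations to {-1, +1} lets a dot product be computed with XNOR and popcount instead of floating-point multiply-accumulates, which is the source of the storage and energy savings described above. A minimal sketch (the function names and the boolean encoding are illustrative, not from the review):

```python
import numpy as np

def binarize(x):
    """Sign binarization: map real values to {-1, +1} (0 maps to +1)."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dot(w, a):
    """Dot product of two {-1, +1} vectors via XNOR + popcount.

    Encoding {-1 -> 0, +1 -> 1}, XNOR counts matching positions;
    the real-valued dot product is then 2 * matches - n.
    """
    wb = (w > 0)                              # boolean encoding of {-1, +1}
    ab = (a > 0)
    matches = np.count_nonzero(~(wb ^ ab))    # XNOR, then popcount
    return 2 * matches - len(w)

w = binarize(np.array([0.5, -1.2, 0.3, -0.7]))
a = binarize(np.array([-0.1, -2.0, 0.9, 0.4]))
# binary_dot(w, a) equals the ordinary integer dot product of w and a
```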
PokeBNN: A Binary Pursuit of Lightweight Accuracy
PokeConv, a binary convolution block that improves the quality of BNNs through techniques such as adding multiple residual paths and tuning the activation function, is proposed; it establishes a new, strong state-of-the-art (SOTA) in top-1 accuracy together with the commonly used CPU64 cost, ACE cost, and network-size metrics.
S2NN: Time Step Reduction of Spiking Surrogate Gradients for Training Energy Efficient Single-Step Neural Networks
It is experimentally demonstrated that the obtained neuron model enables SNNs to train more accurately and energy-efficiently than existing neuron models for SNNs and BNNs, and that the proposed SNN achieves accuracy comparable to full-precision networks while being highly energy-efficient.
A Brain-Inspired Low-Dimensional Computing Classifier for Inference on Tiny Devices
By mapping the existing brain-inspired HDC classifier into an equivalent neural network, this work can improve the inference accuracy while successfully reducing the ultra-high dimension of existing HDC models by orders of magnitude.
0/1 Deep Neural Networks via Block Coordinate Descent
The step function is one of the simplest and most natural activation functions for deep neural networks (DNNs). As it outputs 1 for positive variables and 0 for the others, its intrinsic characteristics…
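The 0/1 step activation described above is straightforward to write down; a minimal NumPy sketch (the toy dense layer and all names are illustrative — this is not the paper's block coordinate descent training itself):

```python
import numpy as np

def step(x):
    """Heaviside-style step activation: 1 for positive inputs, 0 otherwise."""
    return (x > 0).astype(np.float32)

# forward pass of one small dense layer with step activations
rng = np.random.default_rng(0)
W = rng.standard_normal((3, 4))
x = rng.standard_normal(4)
h = step(W @ x)   # every entry of h is exactly 0.0 or 1.0
```

The zero-almost-everywhere gradient of this function is what makes training such networks hard, which motivates derivative-free schemes like the block coordinate descent of the paper.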


FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
The proposed FracBNN exploits fractional activations to substantially improve the accuracy of BNNs, and implements the entire optimized network architecture on an embedded FPGA (Xilinx Ultra96 v2) with the ability of real-time image classification.
Towards Accurate Binary Convolutional Neural Network
The implementation of the resulting binary CNN, denoted ABC-Net, is shown to achieve performance much closer to its full-precision counterpart, even reaching comparable prediction accuracy on the ImageNet and forest-trail datasets, given adequate binary weight bases and activations.
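The "binary weight bases" idea can be sketched as approximating a real weight vector with a linear combination of {-1, +1} bases. Below is a simplified illustration using shifted-sign bases fit by least squares; it is a sketch of the concept under those assumptions, not ABC-Net's exact estimator:

```python
import numpy as np

def abc_approx(w, num_bases=3):
    """Approximate real weights w as sum_i alpha_i * B_i with B_i in {-1, +1}.

    Bases are signs of mean-centered weights shifted by multiples of the
    standard deviation; the coefficients alpha are fit by least squares.
    """
    mean, std = w.mean(), w.std()
    shifts = np.linspace(-1.0, 1.0, num_bases)
    B = np.stack([np.sign(w - mean + s * std) for s in shifts])  # (k, n)
    B[B == 0] = 1.0                                              # sign(0) -> +1
    alphas, *_ = np.linalg.lstsq(B.T, w, rcond=None)
    return alphas @ B, alphas, B

w = np.array([0.9, -0.4, 0.2, -1.1, 0.5])
approx, alphas, B = abc_approx(w, num_bases=3)
# approx is the closest combination of these 3 binary bases to w
```

More bases give the least-squares fit more freedom, which is the intuition behind "adequate binary weight bases" narrowing the gap to full precision.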
Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation
This paper introduces projection convolutional neural networks with a discrete back propagation via projection (DBPP) to improve the performance of binarized neural networks (BNNs) and learns a set of diverse quantized kernels that compress the full-precision kernels in a more efficient way than those proposed previously.
Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks
Sub-bit Neural Networks, a new type of binary quantization design tailored to compress and accelerate BNNs, are introduced; the design is inspired by an empirical observation showing that binary kernels learned at the convolutional layers of a BNN model tend to be distributed over kernel subsets.
Bayesian Optimized 1-Bit CNNs
This paper proposes a novel approach, called Bayesian optimized 1-bit CNNs (denoted BONNs), taking advantage of Bayesian learning, a well-established strategy for hard problems, to significantly improve the performance of extreme 1-bit CNNs.
Balanced Binary Neural Networks with Gated Residual
This paper attempts to maintain the information propagated in the forward process and proposes Balanced Binary Neural Networks with Gated Residual (BBG for short); a weight-balanced binarization is introduced so that the informative binary weights can capture more of the information contained in the activations.
Bi-Real Net: Enhancing the Performance of 1-bit CNNs With Improved Representational Capability and Advanced Training Algorithm
A novel model, dubbed Bi-Real net, is proposed, which connects the real activations (after the 1-bit convolution and/or BatchNorm layer, before the sign function) to the activations of the consecutive block through an identity shortcut; it achieves up to 10% higher top-1 accuracy with greater memory savings and lower computational cost.
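The identity shortcut described above simply adds the real-valued input activations to the output of the 1-bit operation, so real-valued information keeps flowing between blocks. A toy dense-layer sketch of this idea (a conceptual illustration only, not the paper's convolutional block):

```python
import numpy as np

def bireal_block(x, W):
    """Sketch of a Bi-Real-style block: a 1-bit operation (here a dense
    layer on binarized inputs and weights) plus an identity shortcut
    that carries the real-valued activations to the next block."""
    xb = np.where(x >= 0, 1.0, -1.0)   # binarize activations (sign)
    Wb = np.where(W >= 0, 1.0, -1.0)   # binarize weights (sign)
    return Wb @ xb + x                 # binary result + real-valued shortcut

x = np.array([0.2, -0.5])
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
y = bireal_block(x, W)   # output retains the real-valued part of x
```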
Structured Binary Neural Networks for Image Recognition
This work proposes a "network decomposition" strategy, termed Group-Net, in which each full-precision group can be effectively reconstructed by aggregating a set of homogeneous binary branches, and extends Group-Net to accurate semantic segmentation by embedding rich context into the binary structure.
NASB: Neural Architecture Search for Binary Convolutional Neural Networks
NASB, a strategy that adapts Neural Architecture Search (NAS) to find an optimized architecture for the binarization of CNNs, is proposed, achieving a better trade-off between accuracy and computational complexity than hand-optimized binary CNNs.
A fully connected layer elimination for a binarized convolutional neural network on an FPGA
A binarized CNN that restricts both its inputs and weights to two binary values is used, realizing a more compact and faster CNN than conventional ones.