BCNN: Binary Complex Neural Network

@article{Li2021BCNNBC,
  title={BCNN: Binary Complex Neural Network},
  author={Yanfei Li and Tong Geng and Ang Li and Huimin Yu},
  journal={arXiv preprint arXiv:2104.10044},
  year={2021}
}


Binary Complex Neural Network Acceleration on FPGA
TLDR
A structural-pruning-based accelerator for BCNN that delivers more than 5000 frames/s of inference throughput on edge devices, together with a novel 2D convolution accelerator for the binary complex neural network.

References

Showing 1-10 of 65 references
BNN+: Improved Binary Network Training
TLDR
An improved binary training method that introduces a new regularization function encouraging trained weights toward binary values, together with an improved approximation of the derivative of the sign activation function in the backward computation.
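The two ingredients above can be sketched in a few lines: a penalty that vanishes at ±1 and a smooth surrogate gradient for the non-differentiable sign. The exact functional forms below (an |1 − |w|| penalty and a scaled logistic bump) are illustrative assumptions, not the precise formulations from the paper.

```python
import numpy as np

def binary_regularizer(w, beta=1.0):
    # Penalty that is zero at w = +1/-1 and grows as weights drift
    # away from binary values (one common choice: |1 - |w||).
    return beta * np.sum(np.abs(1.0 - np.abs(w)))

def sign_forward(x):
    # The forward pass uses the hard sign activation.
    return np.where(x >= 0, 1.0, -1.0)

def smooth_sign_grad(x, beta=5.0):
    # Smooth surrogate for d(sign)/dx used in the backward pass;
    # a scaled logistic bump centered at 0 stands in here for the
    # paper's improved derivative approximation.
    s = 1.0 / (1.0 + np.exp(-beta * x))
    return 2.0 * beta * s * (1.0 - s)
```

The surrogate is largest near zero, where flipping a weight's sign is cheapest, and decays for large |x|, which keeps gradients from flowing through confidently binarized units.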
MeliusNet: An Improved Network Architecture for Binary Neural Networks
TLDR
Experiments on the ImageNet dataset demonstrate the superior performance of MeliusNet over a variety of popular binary architectures with regard to both computation savings and accuracy, and BNN models trained with the method can match the accuracy of the popular compact network MobileNet-v1 in terms of model size and number of operations.
BinaryDenseNet: Developing an Architecture for Binary Neural Networks
TLDR
This work develops a novel BNN architecture, BinaryDenseNet, which is to the best of the authors' knowledge the first architecture created specifically for BNNs, and shows the competitiveness of BinaryDenseNet with regard to memory requirements and computational complexity.
Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TLDR
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
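The speedup reported above comes from replacing multiply-accumulate with bitwise operations: when ±1 values are packed into machine words, a dot product reduces to XNOR followed by a population count. A minimal sketch of that identity, using plain Python integers as the bit containers:

```python
def binary_dot(a_bits, b_bits, n):
    # a_bits, b_bits: integers whose low n bits encode +1 (bit=1)
    # and -1 (bit=0). Matching bits contribute +1 to the dot
    # product, differing bits contribute -1, so
    #   dot = n - 2 * popcount(a XOR b).
    diff = (a_bits ^ b_bits) & ((1 << n) - 1)
    return n - 2 * bin(diff).count("1")
```

For example, `binary_dot(0b110, 0b100, 3)` encodes [+1, +1, -1] · [+1, -1, -1] and returns 1. On hardware, the XOR/popcount pair processes 32 or 64 weight-activation products per instruction, which is where the 7x kernel speedup comes from.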
Towards Accurate Binary Convolutional Neural Network
TLDR
The implementation of the resulting binary CNN, denoted ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even to reach comparable prediction accuracy on the ImageNet and forest-trail datasets, given adequate binary weight bases and activations.
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
TLDR
BinaryConnect is introduced, a method that consists of training a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated; near state-of-the-art results with BinaryConnect are obtained on permutation-invariant MNIST, CIFAR-10, and SVHN.
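The key idea above — binarize for the forward/backward pass, but accumulate updates in full precision — fits in a single training step. This is a minimal sketch; the latent-weight clipping and the hypothetical `grad_fn` callback (standing in for a full backward pass) are illustrative assumptions.

```python
import numpy as np

def binaryconnect_step(w_real, grad_fn, lr=0.01):
    # Binarize the latent real-valued weights for this step.
    w_bin = np.where(w_real >= 0, 1.0, -1.0)
    # Gradients are computed with respect to the binary weights...
    grad = grad_fn(w_bin)
    # ...but accumulated into the full-precision latent weights,
    # so many small updates can eventually flip a sign.
    w_real = w_real - lr * grad
    # Clipping keeps latent weights in [-1, 1] (a common choice).
    return np.clip(w_real, -1.0, 1.0)
```

Without the full-precision accumulator, a gradient smaller than 2 could never flip a binary weight; the latent weights act as a running vote on each sign.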
A Review of Binarized Neural Networks
TLDR
BNNs are deep neural networks that use binary values for activations and weights instead of full-precision values, which reduces execution time and makes them good candidates for deep learning implementations on FPGAs and ASICs due to their bitwise efficiency.
Deep Complex Networks
TLDR
This work relies on complex convolutions, presents algorithms for complex batch normalization and complex weight initialization strategies for complex-valued neural nets, uses them in experiments with end-to-end training schemes, and demonstrates that such complex-valued models are competitive with their real-valued counterparts.
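A complex convolution, as used above, decomposes into four real convolutions via (a + ib)(c + id) = (ac − bd) + i(ad + bc), which is why complex-valued layers run on standard real-valued kernels. A minimal 1D sketch of that decomposition:

```python
import numpy as np

def complex_conv1d(x_re, x_im, k_re, k_im):
    # Complex convolution as four real convolutions:
    # (x_re + i x_im) * (k_re + i k_im)
    #   = (x_re*k_re - x_im*k_im) + i (x_re*k_im + x_im*k_re)
    conv = lambda x, k: np.convolve(x, k, mode="valid")
    out_re = conv(x_re, k_re) - conv(x_im, k_im)
    out_im = conv(x_re, k_im) + conv(x_im, k_re)
    return out_re, out_im
```

The same decomposition carries over to 2D convolution layers: each complex feature map and kernel is stored as a real/imaginary pair, and the four real convolutions are combined exactly as above.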
Binary Ensemble Neural Network: More Bits per Network or More Networks per Bit?
  • Shilin Zhu, Xin Dong, Hao Su
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
The Binary Ensemble Neural Network (BENN) is proposed, which leverages ensemble methods to improve the performance of BNNs with limited efficiency cost and can even surpass the accuracy of the full-precision floating number network with the same architecture.
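The aggregation step of such an ensemble can be as simple as a majority vote over the ±1 predictions of independently trained BNNs; the sketch below assumes unweighted voting, whereas BENN also explores bagging- and boosting-style weightings.

```python
import numpy as np

def ensemble_predict(binary_outputs):
    # binary_outputs: (num_models, num_samples) array of +/-1
    # predictions from independently trained BNNs; the ensemble
    # prediction is the sign of the summed votes.
    return np.sign(np.sum(binary_outputs, axis=0))
```

With an odd number of models the vote can never tie, which keeps the output strictly in {−1, +1}.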
...