XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

@inproceedings{Rastegari2016XNORNetIC,
  title={XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks},
  author={Mohammad Rastegari and Vicente Ordonez and Joseph Redmon and Ali Farhadi},
  booktitle={ECCV},
  year={2016}
}
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. [...] This results in 58× faster convolutional operations (in terms of the number of high-precision operations) and 32× memory savings. XNOR-Nets offer the possibility of running state-of-the-art networks on CPUs (rather than GPUs) in real time. Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our…
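The abstract describes approximating convolutions with binary weights and computing binary dot products with XNOR and bit-counting. A minimal NumPy sketch of both ideas follows (illustrative only, not the authors' implementation; all function and variable names are ours):

import numpy as np

def binarize_weights(W):
    # Binary-Weight-Network style approximation: W ~= alpha * B,
    # with B = sign(W) and alpha the mean absolute value of W.
    B = np.sign(W)
    B[B == 0] = 1                        # map exact zeros to +1
    alpha = np.abs(W).mean()
    return alpha, B

def xnor_dot(x_bits, w_bits):
    # Dot product of two {-1, +1} vectors via XNOR + popcount:
    # dot(x, w) = 2 * popcount(XNOR(x, w)) - n.
    n = x_bits.size
    agree = ~np.logical_xor(x_bits > 0, w_bits > 0)
    return 2 * np.count_nonzero(agree) - n

W = np.random.randn(64)                  # a full-precision filter
x = np.sign(np.random.randn(64))         # a binarized input patch
alpha, B = binarize_weights(W)
print(alpha * xnor_dot(x, B), np.dot(x, W))   # the two values should be close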
Towards Accurate Binary Convolutional Neural Network
The resulting binary CNN, denoted ABC-Net, is shown to achieve performance much closer to its full-precision counterpart, and even to reach comparable prediction accuracy on the ImageNet and forest-trail datasets, given adequate binary weight bases and activations.
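The entry above refers to approximating full-precision weights with several binary weight bases. A rough NumPy sketch of that idea, using sign bases shifted by the weight statistics and least-squares scaling factors (an illustrative scheme, not necessarily the paper's exact construction):

import numpy as np

def multi_binary_approx(W, shifts=(-1.0, 0.0, 1.0)):
    # Approximate W as sum_i alpha_i * B_i with binary bases
    # B_i = sign(W - mean(W) + u_i * std(W)); alphas fitted by least squares.
    w = W.ravel()
    bases = []
    for u in shifts:
        B = np.sign(w - w.mean() + u * w.std())
        B[B == 0] = 1
        bases.append(B)
    A = np.stack(bases, axis=1)                    # (n, num_bases)
    alphas, *_ = np.linalg.lstsq(A, w, rcond=None)
    return alphas, (A @ alphas).reshape(W.shape)

W = np.random.randn(3, 3, 64)
alphas, W_hat = multi_binary_approx(W)
print(np.abs(W - W_hat).mean())                    # error shrinks as bases are added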
FracBNN: Accurate and FPGA-Efficient Binary Neural Networks with Fractional Activations
The proposed FracBNN exploits fractional activations to substantially improve the accuracy of BNNs, and implements the entire optimized network architecture on an embedded FPGA (Xilinx Ultra96 v2) capable of real-time image classification.
Binary neural networks
A survey of state-of-the-art research on the design and hardware implementation of BNN models.
Capacity Limits of Fully Binary CNN
The paper further explores the effect of binarization on model capacity and shows that while accuracy on MNIST stays very close to the full-precision counterpart, on the more complex CIFAR-10 dataset binarization strongly affects the representational power of CNNs.
Projection Convolutional Neural Networks for 1-bit CNNs via Discrete Back Propagation
This paper introduces projection convolutional neural networks with discrete back propagation via projection (DBPP) to improve the performance of binarized neural networks (BNNs), learning a set of diverse quantized kernels that compress the full-precision kernels more efficiently than previously proposed methods.
SATB-Nets: Training Deep Neural Networks with Segmented Asymmetric Ternary and Binary Weights
SATB-Nets, a method that trains CNNs with segmented asymmetric ternary weights for the convolutional layers and binary weights for the fully-connected layers, outperforms the full-precision VGG16 model on the CIFAR-10 and ImageNet datasets.
A Convolutional Result Sharing Approach for Binarized Neural Network Inference
The binary-weight, binary-input binarized neural network (BNN) allows a much more efficient implementation of convolutional neural networks (CNNs) on mobile platforms, and the proposed result-sharing approach effectively reduces the number of operations in the convolution layers of BNNs.
Radius Adaptive Convolutional Neural Network
The proposed radius-adaptive convolutional neural network (RACNN) uses an adaptive convolution that adopts different kernel sizes based on the content; it has a similar number of weights to a standard CNN, yet results show it can reach higher speeds.
Quantization and Training of Low Bit-Width Convolutional Neural Networks for Object Detection
LBW-Net, an efficient optimization-based method for quantization and training of low bit-width convolutional neural networks (CNNs), is nearly lossless in object detection tasks and can even do better in some real-world visual scenes.
Real Full Binary Neural Network for Image Classification and Object Detection
The proposed Real Full Binary Neural Network achieves performance similar to other BNNs in image classification and object detection while reducing computational cost and memory size, and can be efficiently implemented on CPU, FPGA, and GPU.

References

Showing 1-10 of 48 references
Fixed point optimization of deep convolutional neural networks for object recognition
The results indicate that quantization induces sparsity in the network, which reduces the effective number of network parameters and improves generalization; the method reduces the required memory storage to roughly 1/10 and achieves better classification results than the high-precision networks.
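A small NumPy sketch of the general fixed-point weight quantization being described (the step-size choice below is only illustrative, not the paper's optimization procedure):

import numpy as np

def quantize_fixed_point(W, num_bits=4):
    # Uniform symmetric quantization: weights become integer multiples of delta.
    levels = 2 ** (num_bits - 1) - 1              # e.g. 7 positive levels for 4 bits
    delta = np.abs(W).max() / levels              # illustrative step-size choice
    q = np.clip(np.round(W / delta), -levels, levels)
    return q.astype(np.int8), delta               # store small integers plus one scale

W = np.random.randn(256, 256).astype(np.float32)
q, delta = quantize_fixed_point(W)
W_hat = q.astype(np.float32) * delta              # dequantized weights for inference
print(np.abs(W - W_hat).mean())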
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes, employing a recently developed regularization method called "dropout" that proved to be very effective.
BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
BinaryNet, a method that trains DNNs with binary weights and activations when computing the parameters' gradients, is introduced; it drastically reduces memory usage and replaces most multiplications with 1-bit exclusive-not-or (XNOR) operations, which could have a big impact on both general-purpose and dedicated deep learning hardware.
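A toy NumPy sketch of the two ingredients this summary mentions: deterministic binarization in the forward pass, and a straight-through style gradient estimator in the backward pass (the function names and the clipping rule are illustrative assumptions):

import numpy as np

def binarize(x):
    # Forward pass: deterministic binarization to {-1, +1}.
    b = np.sign(x)
    b[b == 0] = 1
    return b

def straight_through_grad(grad_output, x):
    # Backward pass: pass gradients through where |x| <= 1 and zero them elsewhere,
    # since sign() itself has zero gradient almost everywhere.
    return grad_output * (np.abs(x) <= 1.0)

x = np.random.randn(5)
print(binarize(x))
print(straight_through_grad(np.ones_like(x), x))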
BinaryConnect: Training Deep Neural Networks with binary weights during propagations
BinaryConnect, a method that trains a DNN with binary weights during the forward and backward propagations while retaining the precision of the stored weights in which gradients are accumulated, is introduced; near state-of-the-art results are obtained on permutation-invariant MNIST, CIFAR-10, and SVHN.
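A minimal NumPy sketch of that training scheme on a single linear layer: binary weights are used during the forward and backward passes, while updates accumulate in a full-precision copy (the toy data, loss, and learning rate are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
W_real = rng.normal(scale=0.1, size=(10, 2))      # full-precision stored weights
lr = 0.01

for step in range(100):
    x = rng.normal(size=(32, 10))                 # toy batch
    y = x[:, :2]                                  # toy regression target
    W_bin = np.where(W_real >= 0, 1.0, -1.0)      # binarize for propagation
    pred = x @ W_bin                              # forward pass with binary weights
    grad_pred = 2.0 * (pred - y) / len(x)         # d(MSE)/d(pred)
    grad_W = x.T @ grad_pred                      # gradient w.r.t. the binary weights
    W_real -= lr * grad_W                         # update the real-valued copy
    W_real = np.clip(W_real, -1.0, 1.0)           # keep stored weights bounded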
Compressing Deep Convolutional Networks using Vector Quantization
This paper achieves 16-24 times compression of a state-of-the-art CNN with only a 1% loss of classification accuracy, and finds that, for compressing the most storage-demanding densely connected layers, vector quantization methods have a clear gain over existing matrix factorization methods.
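A short sketch of the vector-quantization idea applied to a dense weight matrix: rows are split into sub-vectors, clustered with k-means, and replaced by a small codebook plus per-sub-vector indices (the sub-vector size and codebook size below are illustrative, not the paper's settings):

import numpy as np
from sklearn.cluster import KMeans

def vector_quantize(W, sub_dim=4, codebook_size=16):
    rows, cols = W.shape
    assert cols % sub_dim == 0
    subvecs = W.reshape(-1, sub_dim)                         # all sub-vectors
    km = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(subvecs)
    codes = km.labels_.astype(np.uint8)                      # one small index per sub-vector
    codebook = km.cluster_centers_                           # (codebook_size, sub_dim)
    W_hat = codebook[codes].reshape(rows, cols)              # reconstructed weights
    return codebook, codes, W_hat

W = np.random.randn(128, 64).astype(np.float32)
codebook, codes, W_hat = vector_quantize(W)
print(np.abs(W - W_hat).mean())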
Bitwise Neural Networks
The proposed Bitwise Neural Network (BNN) is especially suitable for resource-constrained environments, since it replaces floating-point or fixed-point arithmetic with significantly more efficient bitwise operations.
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Very Deep Convolutional Networks for Large-Scale Image Recognition
This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small convolution filters and showing that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Speeding up Convolutional Neural Networks with Low Rank Expansions
Two simple schemes for drastically speeding up convolutional neural networks are presented, achieved by exploiting cross-channel or filter redundancy to construct a low-rank basis of filters that are rank-1 in the spatial domain.
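A toy NumPy sketch of the spatial rank-1 factorization mentioned above: a k x k kernel is approximated by a k x 1 filter followed by a 1 x k filter obtained from the SVD (the paper's full scheme also exploits cross-channel redundancy, which this example omits):

import numpy as np

def rank1_spatial_approx(kernel):
    # Best rank-1 approximation of a k x k kernel via SVD:
    # convolving with v (k x 1) then h (1 x k) approximates convolving with the kernel.
    U, S, Vt = np.linalg.svd(kernel)
    v = U[:, 0] * np.sqrt(S[0])            # vertical filter
    h = Vt[0, :] * np.sqrt(S[0])           # horizontal filter
    return v, h

K = np.random.randn(5, 5)                  # a single spatial kernel
v, h = rank1_spatial_approx(K)
K_hat = np.outer(v, h)                     # rank-1 reconstruction
print(np.linalg.norm(K - K_hat) / np.linalg.norm(K))
# Separable filtering costs 2k multiplies per output instead of k*k.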
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.