Corpus ID: 26853479

Transfer Learning with Binary Neural Networks

@article{Leroux2017TransferLW,
  title={Transfer Learning with Binary Neural Networks},
  author={Sam Leroux and Steven Bohez and Tim Verbelen and Bert Vankeirsbilck and Pieter Simoens and Bart Dhoedt},
  journal={ArXiv},
  year={2017},
  volume={abs/1711.10761}
}
Previous work has shown that it is possible to train deep neural networks with low-precision weights and activations. In the extreme case it is even possible to constrain the network to binary values. The costly floating-point multiplications are then reduced to fast logical operations. High-end smartphones such as Google's Pixel 2 and Apple's iPhone X are already equipped with specialised hardware for image processing, and it is very likely that other future consumer hardware will also have… 
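To make the abstract's claim concrete: when both weights and activations are constrained to ±1, a dot product reduces to an XNOR followed by a population count. A minimal sketch in plain Python (illustrative only, not code from the paper), with each ±1 vector packed into an integer bitmask:

```python
def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two {-1, +1} vectors of length n, each packed into an
    integer bitmask (bit = 1 encodes +1, bit = 0 encodes -1)."""
    agree = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # XNOR: positions where the signs match
    matches = bin(agree).count("1")              # population count
    return 2 * matches - n                       # each match contributes +1, each mismatch -1

# a = [+1, +1, -1, +1] and b = [+1, -1, +1, +1] (least significant bit first)
print(binary_dot(0b1011, 0b1101, 4))  # 0, the same value as the floating-point dot product
```

On hardware with wide registers, the XNOR and popcount operate on 32 or 64 weights at once, which is where the speedup over floating-point multiply-accumulate comes from.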

Citations

Training and Meta-Training Binary Neural Networks with Quantum Computing

It is shown that the complete loss function landscape of a neural network can be represented as the quantum state output by a quantum computer and, further, that with minor adaptation, this method can also represent the meta-loss landscapes of a number of neural network architectures simultaneously.

A Cloud-Edge-Smart IoT Architecture for Speeding Up the Deployment of Neural Network Models with Transfer Learning Techniques

A new model deployment and update mechanism based on the shared-weight characteristic of transfer learning is proposed to address the model deployment issues associated with the large number of IoT devices.
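As an illustration of the mechanism named in this summary (the structure and names below are hypothetical, not taken from the paper): a transfer-learned model can be split into shared backbone weights that are deployed to every device once, and a small task-specific head that is the only part pushed when a model is updated.

```python
from typing import Dict, Tuple
import numpy as np

Params = Dict[str, np.ndarray]

def split_model(params: Params) -> Tuple[Params, Params]:
    """Separate transfer-learning weights: 'backbone.*' is shared across tasks,
    'head.*' is task-specific and small enough to push to edge devices."""
    shared = {k: v for k, v in params.items() if k.startswith("backbone.")}
    head = {k: v for k, v in params.items() if k.startswith("head.")}
    return shared, head

model = {"backbone.conv1": np.zeros((64, 3, 3, 3), dtype=np.float32),
         "head.fc": np.zeros((10, 64), dtype=np.float32)}
shared, head = split_model(model)
# Only `head` (a few kilobytes) needs to be transmitted when a device's task
# changes; `shared` stays cached on the device from the initial deployment.
```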

Macroscopic and Microscopic Analysis of Chinese Typical Driving Behavior from UAV View

Investigating the macroscopic and microscopic characteristics of typical Chinese driving behavior, captured and extracted using a UAV, showed that the Chinese traffic state was more stable than in Germany, but with more aggressive behavior compared to the HighD dataset.

HyBNN and FedHyBNN: (Federated) Hybrid Binary Neural Networks

This paper introduces a novel hybrid neural network architecture, the Hybrid Binary Neural Network (HyBNN), consisting of a task-independent, general, full-precision variational autoencoder with a binary latent space and a task-specific binary neural network, which greatly limits the accuracy loss due to input binarization by using the full-precision variational autoencoder as a feature extractor.
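A rough sketch of the architecture described above, under assumed details (the layer sizes and the sign() binarization of the latent code are illustrative, not taken from the paper): a full-precision encoder produces a binary latent code, which feeds a network with binary weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.standard_normal((784, 64)).astype(np.float32)  # full-precision encoder (feature extractor)
W_bin = np.sign(rng.standard_normal((64, 10)))             # task-specific network, weights in {-1, +1}

def hybnn_forward(x: np.ndarray) -> np.ndarray:
    z = np.sign(x @ W_enc)   # binary latent space produced by the full-precision part
    return z @ W_bin         # binary-weight classifier operating on the binary code

logits = hybnn_forward(rng.standard_normal(784).astype(np.float32))
```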

References

Showing 1-10 of 16 references

Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1

A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
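A minimal numpy sketch of the scheme this reference describes: real-valued latent weights are kept for the optimizer, sign() binarization is used in the forward pass, and the gradient is passed straight through the sign function, clipped where |w| > 1.

```python
import numpy as np

def binarize(w: np.ndarray) -> np.ndarray:
    """Deterministic binarization: sign(w) mapped into {-1, +1}."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(w: np.ndarray, grad_wrt_binary: np.ndarray) -> np.ndarray:
    """Straight-through estimator: pass the gradient through sign(), but zero it
    where |w| > 1 (the hard-tanh clipping used in this line of work)."""
    return grad_wrt_binary * (np.abs(w) <= 1.0)

w_real = np.array([0.3, -1.7, 0.05, -0.4])   # latent full-precision weights
w_bin = binarize(w_real)                     # [ 1., -1.,  1., -1.] used in the forward pass
g = np.array([0.1, 0.2, -0.3, 0.4])          # gradient w.r.t. the binary weights
print(w_bin, ste_grad(w_real, g))            # second gradient entry is zeroed (|-1.7| > 1)
```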

Improving the speed of neural networks on CPUs

This paper uses speech recognition as an example task, and shows that a real-time hybrid hidden Markov model / neural network (HMM/NN) large-vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy.

Learning both Weights and Connections for Efficient Neural Network

A method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy, by learning only the important connections and pruning redundant connections using a three-step method.
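The three-step recipe summarized here (train, prune low-magnitude connections, retrain) can be sketched as magnitude pruning with a binary mask; the thresholding below is illustrative, not the paper's exact implementation.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude connections so that `sparsity` fraction
    of the weights is removed; returns a boolean mask of surviving connections."""
    k = int(weights.size * sparsity)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(np.abs(weights).ravel(), k - 1)[k - 1]
    return np.abs(weights) > threshold

w = np.random.randn(4, 4)
mask = magnitude_prune(w, sparsity=0.75)
w_pruned = w * mask
# Retraining (step three) keeps the mask fixed and updates only the surviving weights.
```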

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
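For reference, the "dropout" regularization mentioned here randomly zeroes activations during training; the sketch below uses the inverted-dropout variant (rescaling at training time), which differs slightly from the test-time rescaling in the original paper.

```python
import numpy as np

def dropout(x: np.ndarray, p: float = 0.5, training: bool = True) -> np.ndarray:
    """Inverted dropout: drop each unit with probability p during training and
    rescale the survivors by 1/(1-p) so no adjustment is needed at test time."""
    if not training or p == 0.0:
        return x
    mask = (np.random.rand(*x.shape) >= p) / (1.0 - p)
    return x * mask
```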

Mixed Precision Training

This work introduces a technique to train deep neural networks using half-precision floating point numbers, and demonstrates that this approach works for a wide variety of models including convolutional neural networks, recurrent neural networks and generative adversarial networks.
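A toy numpy sketch of the recipe summarized above: run the forward/backward pass in FP16, keep an FP32 master copy of the weights for the update, and scale the loss so that small gradients do not underflow in half precision. The constant and shapes are illustrative.

```python
import numpy as np

loss_scale = 1024.0                               # illustrative static loss scale
master_w = np.random.randn(8).astype(np.float32)  # FP32 master copy of the weights

def sgd_step(grad_fp16: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """Unscale the FP16 gradient in FP32, update the master weights,
    and return the FP16 copy used for the next forward pass."""
    global master_w
    grad_fp32 = grad_fp16.astype(np.float32) / loss_scale
    master_w = (master_w - lr * grad_fp32).astype(np.float32)
    return master_w.astype(np.float16)

fp16_grads = (np.random.randn(8) * loss_scale).astype(np.float16)  # gradients of the scaled loss
w_fp16 = sgd_step(fp16_grads)
```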

Pruning Convolutional Neural Networks for Resource Efficient Inference

It is shown that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier.

Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification

This work proposes a Parametric Rectified Linear Unit (PReLU) that generalizes the traditional rectified linear unit, and derives a robust initialization method that particularly considers the rectifier nonlinearities.
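Both ideas are compact enough to sketch directly; a small numpy illustration of PReLU and of the rectifier-aware ("He") initialization it motivates:

```python
import numpy as np

def prelu(x: np.ndarray, a: float = 0.25) -> np.ndarray:
    """Parametric ReLU: identity for x > 0, learned slope `a` for x <= 0
    (with a = 0 it reduces to the ordinary ReLU; the paper learns `a`)."""
    return np.where(x > 0, x, a * x)

def he_init(fan_in: int, fan_out: int) -> np.ndarray:
    """Initialization derived for rectifier nonlinearities: std = sqrt(2 / fan_in)."""
    return np.random.randn(fan_in, fan_out) * np.sqrt(2.0 / fan_in)
```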

Deep Learning

Deep learning is making major advances in solving problems that have resisted the best attempts of the artificial intelligence community for many years, and will have many more successes in the near future because it requires very little engineering by hand and can easily take advantage of increases in the amount of available computation and data.

Pruning Convolutional Neural Networks for Resource Efficient Transfer Learning

A new criterion based on an efficient first-order Taylor expansion to approximate the absolute change in training cost induced by pruning a network component is proposed, demonstrating superior performance compared to other criteria, such as the norm of kernel weights or average feature map activation.
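A small sketch of the criterion described here: the importance of a feature map is approximated by the absolute value of the mean of activation × gradient over that map, a first-order Taylor estimate of the change in loss if the map were removed. Shapes below are illustrative.

```python
import numpy as np

def taylor_importance(activation: np.ndarray, grad: np.ndarray) -> float:
    """First-order Taylor criterion: |mean(activation * dLoss/dactivation)|
    approximates the loss change caused by zeroing out this feature map."""
    return float(np.abs(np.mean(activation * grad)))

acts = np.random.randn(16, 32, 32)   # 16 feature maps of one layer
grads = np.random.randn(16, 32, 32)  # gradients of the loss w.r.t. those maps
scores = [taylor_importance(acts[i], grads[i]) for i in range(16)]
prune_order = np.argsort(scores)     # smallest score = first candidate to prune
```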

MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications

This work introduces two simple global hyper-parameters that efficiently trade off between latency and accuracy and demonstrates the effectiveness of MobileNets across a wide range of applications and use cases including object detection, fine-grained classification, face attributes and large-scale geo-localization.
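For context, the two global hyper-parameters referred to here are MobileNet's width multiplier (which thins the channels of every layer) and resolution multiplier (which shrinks the feature maps); a small sketch of how they scale the multiply-add cost of one depthwise-separable layer, following the paper's cost model:

```python
def separable_layer_cost(dk: int, m: int, n: int, df: int,
                         alpha: float = 1.0, rho: float = 1.0) -> int:
    """Multiply-add count of a depthwise-separable layer with a dk x dk kernel,
    m input / n output channels and a df x df feature map, after applying the
    width multiplier `alpha` and the resolution multiplier `rho`."""
    m, n, df = int(alpha * m), int(alpha * n), int(rho * df)
    depthwise = dk * dk * m * df * df   # one dk x dk filter per input channel
    pointwise = m * n * df * df         # 1 x 1 convolution combining the channels
    return depthwise + pointwise

full = separable_layer_cost(3, 512, 512, 14)
thin = separable_layer_cost(3, 512, 512, 14, alpha=0.5, rho=160 / 224)
print(full, thin, thin / full)          # the thinner, lower-resolution layer is ~8x cheaper
```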