BinaryConnect: Training Deep Neural Networks with binary weights during propagations
TLDR
BinaryConnect is introduced: a method that trains a DNN with binary weights during the forward and backward propagations while retaining full precision in the stored weights in which gradients are accumulated; near state-of-the-art results are obtained with BinaryConnect on permutation-invariant MNIST, CIFAR-10, and SVHN.
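As a rough illustration of the idea summarized above, here is a minimal numpy sketch of one training step for a single linear layer: binary weights are used for propagation, while the update is accumulated in the real-valued weights and clipped. The function names, shapes, and learning rate are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the BinaryConnect idea for one linear layer, assuming
# deterministic sign() binarization; names and shapes are illustrative only.
import numpy as np

def binarize(W_real):
    """Binarize real-valued weights to {-1, +1} for propagation."""
    return np.where(W_real >= 0, 1.0, -1.0)

def binary_connect_step(W_real, x, grad_y, lr=0.01):
    """One training step: propagate with binary weights, update the real ones."""
    W_bin = binarize(W_real)        # binary weights used in forward/backward passes
    y = x @ W_bin                   # forward propagation
    grad_x = W_bin @ grad_y         # backward propagation also uses binary weights
    grad_W = np.outer(x, grad_y)    # weight gradient for this layer
    W_real = np.clip(W_real - lr * grad_W, -1.0, 1.0)  # accumulate in full precision
    return W_real, y, grad_x
```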
Binarized Neural Networks: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TLDR
A binary matrix multiplication GPU kernel is written with which the MNIST BNN runs 7 times faster than with an unoptimized GPU kernel, without any loss in classification accuracy.
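The kernel itself is CUDA code, but the arithmetic it builds on can be illustrated in a few lines: a dot product of two ±1 vectors reduces to XNOR plus a population count over packed bits. The following numpy sketch shows that equivalence; it is not the GPU kernel from the paper.

```python
# XNOR + popcount equivalence behind binary matmul kernels (plain-Python sketch).
import numpy as np

def pack_bits(v):
    """Pack a +/-1 vector into bits (1 bit per element: +1 -> 1, -1 -> 0)."""
    return np.packbits((v > 0).astype(np.uint8))

def binary_dot(a_packed, b_packed, n):
    """Dot product of two +/-1 vectors of length n from their packed bits."""
    xnor = np.bitwise_not(np.bitwise_xor(a_packed, b_packed))  # 1 where bits agree
    matches = int(np.unpackbits(xnor)[:n].sum())               # popcount of agreements
    return 2 * matches - n                                     # agreements minus disagreements

a = np.random.choice([-1, 1], size=37)
b = np.random.choice([-1, 1], size=37)
assert binary_dot(pack_bits(a), pack_bits(b), len(a)) == int(a @ b)
```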
Binarized Neural Networks
TLDR
A binary matrix multiplication GPU kernel is written with which the MNIST BNN runs 7 times faster than with an unoptimized GPU kernel, without any loss in classification accuracy.
Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations
TLDR
A binary matrix multiplication GPU kernel is programmed with which the MNIST QNN runs 7 times faster than with an unoptimized GPU kernel, without any loss in classification accuracy.
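Beyond the 1-bit case, the title refers to general low-precision weights and activations. As a rough, generic illustration (the exact quantization functions in the paper may differ), a k-bit uniform quantizer can be sketched as follows.

```python
# Generic k-bit uniform quantizer; the range, rounding, and level placement
# here are illustrative assumptions, not the paper's exact scheme.
import numpy as np

def quantize_uniform(x, bits, x_min=-1.0, x_max=1.0):
    """Quantize x onto 2**bits evenly spaced levels in [x_min, x_max]."""
    levels = 2 ** bits - 1
    scale = (x_max - x_min) / levels
    q = np.round((np.clip(x, x_min, x_max) - x_min) / scale)
    return q * scale + x_min  # dequantized value on the low-precision grid
```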
BinaryNet: Training Deep Neural Networks with Weights and Activations Constrained to +1 or -1
TLDR
BinaryNet is introduced: a method that trains DNNs with binary weights and activations when computing the parameters' gradient. It drastically reduces memory usage and replaces most multiplications by 1-bit exclusive-NOR (XNOR) operations, which could have a major impact on both general-purpose and dedicated deep learning hardware.
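To backpropagate through the non-differentiable sign activation, this line of work typically uses a straight-through estimator: sign in the forward pass, a clipped identity in the backward pass. A minimal framework-agnostic numpy sketch of that estimator follows; the function names are illustrative.

```python
# Straight-through estimator for the sign activation (sketch, not a GPU kernel).
import numpy as np

def sign_forward(x):
    """Forward: binarize activations to {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def sign_backward(x, grad_out):
    """Backward: pass the gradient through where |x| <= 1, zero elsewhere."""
    return grad_out * (np.abs(x) <= 1.0)
```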
Training deep neural networks with low precision multiplications
TLDR
It is found that very low precision is sufficient not only for running trained networks but also for training them, and that Maxout networks can be trained with 10-bit multiplications.
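As a rough sketch of what low-precision multiplication means here, both operands can be rounded to a narrow fixed-point format before multiplying. The bit widths, rounding mode, and clipping below are illustrative assumptions, not the paper's exact format.

```python
# Multiply after rounding operands to a ~10-bit signed fixed-point grid (sketch).
import numpy as np

def to_fixed_point(x, frac_bits, total_bits=10):
    """Round x to a signed fixed-point grid with the given fractional bits."""
    scale = 2.0 ** frac_bits
    max_int = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(x * scale), -max_int - 1, max_int) / scale

def low_precision_mul(a, b, frac_bits=7):
    """Multiply after quantizing both operands to low-precision fixed point."""
    return to_fixed_point(a, frac_bits) * to_fixed_point(b, frac_bits)
```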
Neural Networks with Few Multiplications
TLDR
Experimental results show that this training approach, which eliminates the need for floating-point multiplications, can yield even better performance than standard stochastic gradient descent, paving the way for fast, hardware-friendly training of neural networks.
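One ingredient used to avoid multiplications is sampling binary weights stochastically from the real-valued ones. The sketch below shows that sampling step only; the full scheme in the paper (e.g. ternary weights and quantized backpropagation) is more involved.

```python
# Stochastic binarization of weights in [-1, 1] (sketch of one ingredient).
import numpy as np

def stochastic_binarize(W, rng=None):
    """Sample binary weights with P(w_b = +1) = (w + 1) / 2 for w in [-1, 1]."""
    rng = np.random.default_rng() if rng is None else rng
    p = np.clip((W + 1.0) / 2.0, 0.0, 1.0)
    return np.where(rng.random(W.shape) < p, 1.0, -1.0)
```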
Low precision arithmetic for deep learning
TLDR
It is found that very low precision computation is sufficient not only for running trained networks but also for training them.
BNN+: Improved Binary Network Training
TLDR
An improved binary training method is proposed, introducing a new regularization function that encourages training weights around binary values, together with an improved approximation of the derivative of the sign activation function in the backward computation.
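As a hedged sketch of both ingredients: a regularizer can pull weights toward ±α, and a smooth surrogate derivative can stand in for the zero-almost-everywhere derivative of sign. The functional forms below (an absolute-value penalty and a tanh-based surrogate) are illustrative assumptions and may differ from the exact functions proposed in the paper.

```python
# Illustrative regularizer toward +/-alpha and a smooth sign-derivative surrogate.
import numpy as np

def binary_regularizer(W, alpha=1.0):
    """Penalty that is zero when |w| == alpha and grows as w drifts away."""
    return np.sum(np.abs(alpha - np.abs(W)))

def smooth_sign_grad(x, beta=5.0):
    """Surrogate derivative for sign(x), here d/dx tanh(beta * x)."""
    return beta * (1.0 - np.tanh(beta * x) ** 2)
```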
Low precision storage for deep learning
TLDR
It is found that very low precision storage is sufficient not only for running trained networks but also for training them.