Learning in the Machine: Random Backpropagation and the Learning Channel
@article{Baldi2016LearningIT,
  title   = {Learning in the Machine: Random Backpropagation and the Learning Channel},
  author  = {P. Baldi and Peter Sadowski and Z. Lu},
  journal = {ArXiv},
  year    = {2016},
  volume  = {abs/1612.02734}
}
Abstract: Random backpropagation (RBP) is a variant of the backpropagation algorithm for training neural networks, in which the transposes of the forward matrices are replaced by fixed random matrices in the calculation of the weight updates. It is remarkable both for its effectiveness, in spite of using random matrices to communicate error information, and because it completely removes the taxing requirement of maintaining symmetric weights in a physical neural system. To better understand…
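The idea described in the abstract can be illustrated with a minimal sketch: in the backward pass of a two-layer network, the output delta is propagated to the hidden layer through a fixed random matrix `B` instead of the transpose of the forward matrix `W2`. This is an illustrative NumPy toy (sigmoid units, squared error, single training pattern), not the authors' implementation; all sizes and names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes
n_in, n_hid, n_out = 4, 8, 2

# Forward weights (trained)
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))

# Fixed random feedback matrix: replaces W2.T in the backward pass
B = rng.normal(0, 0.5, (n_hid, n_out))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbp_step(x, y, lr=0.1):
    """One weight update using random backpropagation."""
    global W1, W2
    h = sigmoid(W1 @ x)                   # hidden activity
    o = sigmoid(W2 @ h)                   # output
    d_out = (o - y) * o * (1 - o)         # output delta (squared error)
    d_hid = (B @ d_out) * h * (1 - h)     # RBP: fixed B instead of W2.T
    W2 -= lr * np.outer(d_out, h)
    W1 -= lr * np.outer(d_hid, x)
    return float(np.sum((o - y) ** 2))

# Toy usage: repeated updates on one pattern
x = rng.normal(size=n_in)
y = np.array([1.0, 0.0])
errs = [rbp_step(x, y) for _ in range(200)]
```

Standard backpropagation would use `(W2.T @ d_out)` in place of `(B @ d_out)`; that single substitution is the entire difference, which is why RBP removes the weight-symmetry requirement.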
15 Citations
- Event-Driven Random Back-Propagation: Enabling Neuromorphic Deep Learning Machines. Front. Neurosci., 2017.
- Deep Supervised Learning Using Local Errors. Front. Neurosci., 2018.
- Event-driven random backpropagation: Enabling neuromorphic deep learning machines. 2017 IEEE International Symposium on Circuits and Systems (ISCAS), 2017.
- A Learning Framework for Winner-Take-All Networks with Stochastic Synapses. Neural Computation, 2018.
- Understanding Synthetic Gradients and Decoupled Neural Interfaces. ICML, 2017.
- Biologically plausible deep learning - but how far can we go with shallow networks? Neural Networks, 2019.
- SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks. Neural Computation, 2018.
- Neural and Synaptic Array Transceiver: A Brain-Inspired Computing Framework for Embedded Learning. Front. Neurosci., 2018.