Corpus ID: 8007850

Random feedback weights support learning in deep neural networks

@article{Lillicrap2014RandomFW,
  title={Random feedback weights support learning in deep neural networks},
  author={T. Lillicrap and D. Cownden and D. Tweed and C. Akerman},
  journal={ArXiv},
  year={2014},
  volume={abs/1411.0247}
}
The brain processes information through many layers of neurons. This deep architecture is representationally powerful, but it complicates learning by making it hard to identify the responsible neurons when a mistake is made. In machine learning, the backpropagation algorithm assigns blame to a neuron by computing exactly how it contributed to an error. To do this, it multiplies error signals by matrices consisting of all the synaptic weights on the neuron's axon and farther downstream. This…
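
To make the contrast concrete, below is a minimal NumPy sketch of the mechanism described above: a two-layer network trained on a toy regression task in which the hidden-layer error signal is computed with a fixed random feedback matrix B rather than with the transpose of the forward weights, as exact backpropagation would require. The layer sizes, learning rate, nonlinearity, and toy target mapping are illustrative assumptions, not values taken from the paper.

import numpy as np

# Minimal sketch (assumed values throughout): a 30-20-10 network trained on a
# random linear teacher with squared error. Backpropagation would send the
# output error back through W2.T; feedback alignment uses the fixed random
# matrix B instead.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 30, 20, 10

W1 = rng.normal(scale=0.1, size=(n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(scale=0.1, size=(n_out, n_hid))  # forward weights, layer 2
B  = rng.normal(scale=0.1, size=(n_hid, n_out))  # fixed random feedback weights

T = rng.normal(size=(n_out, n_in))               # toy linear teacher
lr = 0.01

for step in range(5000):
    x = rng.normal(size=(n_in, 1))
    y_target = T @ x

    h = np.tanh(W1 @ x)                  # hidden activity
    y = W2 @ h                           # linear readout
    e = y - y_target                     # output error

    # Hidden error signal: the random feedback matrix B stands in for the
    # W2.T that backpropagation would use.
    delta_h = (B @ e) * (1.0 - h ** 2)   # tanh derivative

    W2 -= lr * e @ h.T
    W1 -= lr * delta_h @ x.T

In the paper, this works because over training the forward weights come to align with the fixed feedback weights, so the random error signals carry useful teaching information.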
Random synaptic feedback weights support error backpropagation for deep learning
TLDR: A surprisingly simple mechanism that assigns blame by multiplying errors by even random synaptic weights is presented, which can transmit teaching signals across multiple layers of neurons and performs as effectively as backpropagation on a variety of tasks.
Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights
TLDR: These problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses.
Unsupervised learning by competing hidden units
TLDR: A learning algorithm is designed that utilizes global inhibition in the hidden layer and is capable of learning early feature detectors in a completely unsupervised way, and which is motivated by Hebb’s idea that change of the synapse strength should be local.
Backpropagation and the brain
TLDR: It is argued that the key principles underlying backprop may indeed have a role in brain function and induce neural activities whose differences can be used to locally approximate these signals and hence drive effective learning in deep networks in the brain.
Continual Learning with Deep Artificial Neurons
TLDR: Deep Artificial Neurons (DANs) are introduced, and it is shown that a suitable neuronal phenotype can endow a single network with an innate ability to update its synapses with minimal forgetting, using standard backpropagation, without experience replay or separate wake/sleep phases.
Training the Hopfield Neural Network for Classification Using a STDP-Like Rule
TLDR: It is shown that the well-known Hopfield neural network (HNN) can be trained in a biologically plausible way; several HNNs with one or two hidden layers are trained on the MNIST dataset and all of them converge to low training errors.
Learning to solve the credit assignment problem
TLDR: A hybrid learning approach that learns to approximate the gradient, and can match or exceed the performance of exact gradient-based learning in both feedforward and convolutional networks.
Biologically feasible deep learning with segregated dendrites
TLDR: A spiking, continuous-time neural network model that learns to categorize images from the MNIST data-set with local synaptic weight updates and demonstrates that deep learning can be achieved within a biologically feasible framework using segregated dendritic compartments.
Training a Network of Spiking Neurons with Equilibrium Propagation
TLDR: It is shown that with appropriate step-size annealing, the Equilibrium Propagation model can converge to the same fixed-point as a real-valued neural network, and that with predictive coding, it can make this convergence much faster.

References

Showing 1-10 of 34 references
A Fast Learning Algorithm for Deep Belief Nets
TLDR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Learning in Spiking Neural Networks by Reinforcement of Stochastic Synaptic Transmission
TLDR: The hypothesis that the randomness of synaptic transmission is harnessed by the brain for learning is considered, in analogy to the way that genetic mutation is utilized by Darwinian evolution.
Supervised and Unsupervised Learning with Two Sites of Synaptic Integration
TLDR: Compared to standard neurons with a single site of integration, it is possible to incorporate interesting, physiologically inspired properties into neural networks with only a modest increase in complexity, thanks to recent research on the properties of cortical pyramidal neurons.
Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network
TLDR: A special case in which the two are identical, a multilayer perceptron with linear output units to which weak feedback connections have been added, suggests that the functionality of backpropagation can be realized alternatively by a Hebbian-type learning algorithm, which is suitable for implementation in biological networks.
Backpropagation without weight transport
  • J. Kolen, J. Pollack
  • Proceedings of 1994 IEEE International Conference on Neural Networks (ICNN'94), 1994
TLDR: The feasibility of an architecture equivalent to backpropagation, but without the assumption of weight transport, is formally and empirically demonstrated.
A more biologically plausible learning rule for neural networks
TLDR: A more biologically plausible learning rule is described, using reinforcement learning, which is applied to the problem of how area 7a in the posterior parietal cortex of monkeys might represent visual space in head-centered coordinates, and shows that a neural network does not require backpropagation to acquire biologically interesting properties.
Learning representations by back-propagating errors
TLDR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
Learning Representations by Recirculation
TLDR: Simulations in simple networks show that the learning procedure usually converges rapidly on a good set of codes, and analysis shows that in certain restricted cases it performs gradient descent in the squared reconstruction error.
Is backpropagation biologically plausible?
  • D. Stork
  • International Joint Conference on Neural Networks, 1989
TLDR: The author finds that in several posited implementations these design considerations imply that a finely structured neural connectivity is needed, as well as a number of neurons and synapses beyond those inferred from the algorithmic network presentations of backpropagation.
Simple Statistical Gradient-Following Algorithms for Connectionist Reinforcement Learning
This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown…