Corpus ID: 58820035

A learning rule for asynchronous perceptrons with feedback in a combinatorial environment

@inproceedings{Almeida1990ALR,
  title={A learning rule for asynchronous perceptrons with feedback in a combinatorial environment},
  author={Lu{\'i}s B. Almeida},
  year={1990}
}

Dual-mode dynamics neural network for combinatorial optimization

Supervised Models, C1.2: Multilayer perceptrons

This section introduces multilayer perceptrons, which are the most commonly used type of neural network. The popular backpropagation training algorithm is studied in detail, together with the momentum and adaptive step-size techniques used to accelerate training.
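
As a concrete illustration of the algorithm the chapter describes, here is a minimal sketch of backpropagation with momentum on a one-hidden-layer perceptron. The network sizes, learning rate, momentum coefficient, and the XOR toy data are illustrative assumptions, not taken from the chapter itself.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions: 2 inputs, 4 hidden units, 1 output.
W1 = rng.normal(0, 0.5, (4, 2)); b1 = np.zeros(4)
W2 = rng.normal(0, 0.5, (1, 4)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vW2 = np.zeros_like(W2)   # momentum buffers
lr, mu = 0.5, 0.9   # learning rate and momentum coefficient (assumed)

# XOR as a toy dataset.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0.0], [1.0], [1.0], [0.0]])

for epoch in range(2000):
    for x, t in zip(X, T):
        # Forward pass.
        h = sigmoid(W1 @ x + b1)
        y = sigmoid(W2 @ h + b2)
        # Backward pass: delta rules for squared error with sigmoid units.
        dy = (y - t) * y * (1 - y)        # output-layer delta
        dh = (W2.T @ dy) * h * (1 - h)    # hidden-layer delta
        # Momentum update: velocity = mu * velocity - lr * gradient.
        vW2 = mu * vW2 - lr * np.outer(dy, h); W2 += vW2; b2 -= lr * dy
        vW1 = mu * vW1 - lr * np.outer(dh, x); W1 += vW1; b1 -= lr * dh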

Neurons learn by predicting future activity

It is demonstrated that a single neuron can learn by predicting its own future activity, and that this predictive learning rule can be derived from a metabolic principle whereby neurons minimize their own synaptic activity (cost) while maximizing their impact on the local blood supply by recruiting other neurons.
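
A loose, illustrative reading of such a predictive rule, sketched below: a linear neuron nudges its weights so that its present output better predicts its own activity a few steps ahead. The autocorrelated input stream, the learning rate eta, the lag, and the normalization safeguard are all assumptions made for the sketch, not the paper's published equations.

import numpy as np

rng = np.random.default_rng(1)
n_inputs, T, lag = 10, 5000, 2
w = rng.normal(0, 0.1, n_inputs)
eta = 0.001   # learning rate (assumed)

# Synthetic autocorrelated input stream, so the future is partly
# predictable from the present.
x = np.zeros((T, n_inputs))
x[0] = rng.normal(size=n_inputs)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.1 * rng.normal(size=n_inputs)

for t in range(T - lag):
    y_now = w @ x[t]            # present activity, read as a prediction
    y_future = w @ x[t + lag]   # activity `lag` steps ahead, read as the target
    # Nudge weights so present activity better predicts future activity.
    w += eta * (y_future - y_now) * x[t]
    # Normalization to prevent trivial collapse (illustrative safeguard).
    w /= np.linalg.norm(w) + 1e-12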

Equivalence of Equilibrium Propagation and Recurrent Backpropagation

This work shows that a separate side network is not required to compute error derivatives, and supports the hypothesis that, in biological neural networks, temporal derivatives of neural activity may code for error signals.
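
The claim can be made concrete with a sketch of recurrent backpropagation in the Almeida/Pineda style: error derivatives come from relaxing a second, adjoint system to its fixed point rather than from a separate unrolled side network. The network size, external input, target, and step counts below are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n = 5
W = rng.normal(0, 0.3, (n, n))                  # recurrent weights (illustrative size)
x_ext = rng.normal(0, 1.0, n)                   # constant external input
target = np.array([0.2, 0.8, 0.5, 0.1, 0.9])    # hypothetical target state

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

for step in range(200):
    # Phase 1: relax activities to a fixed point s* = sigmoid(W s* + x_ext).
    s = np.zeros(n)
    for _ in range(100):
        s = sigmoid(W @ s + x_ext)
    # Phase 2: relax the adjoint system z = D (W^T z + e), with
    # D = diag(s*(1 - s*)) and e the output error; its fixed point yields
    # the same derivatives as unrolled backpropagation through time.
    e = s - target
    d = s * (1 - s)
    z = np.zeros(n)
    for _ in range(100):
        z = d * (W.T @ z + e)
    # Gradient of the squared error with respect to W is outer(z, s*).
    W -= 0.5 * np.outer(z, s)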

Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation

It is shown that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task, which makes it more plausible that a mechanism similar to Backpropagation could be implemented by brains.
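
A minimal sketch of the two-phase procedure on a tiny Hopfield-style network, assuming a hard-sigmoid activation, a quadratic output cost, and hand-picked hyperparameters (hidden size, beta, learning rate); it follows the general recipe of a free phase, a weakly nudged phase, and a contrastive weight update, not the paper's exact experimental setup.

import numpy as np

rng = np.random.default_rng(3)
n_in, n_out = 4, 2
n = n_in + 8 + n_out               # 8 hidden units (assumed)
W = rng.normal(0, 0.1, (n, n))
W = (W + W.T) / 2                  # symmetric weights, as the energy requires
np.fill_diagonal(W, 0)

def rho(s):                        # hard-sigmoid activation
    return np.clip(s, 0.0, 1.0)

def rho_prime(s):
    return ((s >= 0.0) & (s <= 1.0)).astype(float)

def settle(x, target=None, beta=0.0, steps=200, dt=0.2, s0=None):
    """Gradient-descend the energy; beta > 0 weakly nudges outputs."""
    s = np.zeros(n) if s0 is None else s0.copy()
    s[:n_in] = x                   # inputs stay clamped throughout
    for _ in range(steps):
        ds = -s + rho_prime(s) * (W @ rho(s))
        if target is not None:
            ds[-n_out:] += beta * (target - s[-n_out:])
        s += dt * ds
        s[:n_in] = x
    return s

x, target = rng.random(n_in), np.array([1.0, 0.0])
lr, beta = 0.1, 0.5                              # assumed hyperparameters
s_free = settle(x)                               # free phase
s_nudge = settle(x, target, beta, s0=s_free)     # weakly clamped phase
# Contrastive, Hebbian-style update from the two equilibria.
dW = (np.outer(rho(s_nudge), rho(s_nudge))
      - np.outer(rho(s_free), rho(s_free))) / beta
W += lr * dW
W = (W + W.T) / 2; np.fill_diagonal(W, 0)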

Towards a Biologically Plausible Backprop

This work contributes several new elements to the quest for a biologically plausible implementation of backprop in brains, introducing a very general and abstract framework for machine learning.

The Time Dimension of Neural Network Models

This review attempts to provide an insightful perspective on the role of time within neural network models and the use of neural networks for problems involving time.

Convergence Result in Recurrent Neural Networks

Long Short-Term Memory

A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
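
For reference, a minimal sketch of a single LSTM step: gates control what enters, persists in, and leaves the additive cell state that carries error over long lags. The weight shapes and the forget gate (a later addition to the 1997 architecture) are conventional choices, not the paper's exact formulation.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, params):
    """One step: gates decide what enters, stays in, and leaves the cell."""
    Wi, Wf, Wo, Wg = params   # each maps concat(h, x) to the hidden size
    z = np.concatenate([h, x])
    i = sigmoid(Wi @ z)       # input gate
    f = sigmoid(Wf @ z)       # forget gate
    o = sigmoid(Wo @ z)       # output gate
    g = np.tanh(Wg @ z)       # candidate cell input
    c = f * c + i * g         # cell state: additive path carries error far back
    h = o * np.tanh(c)        # hidden state exposed to the rest of the network
    return h, c

rng = np.random.default_rng(4)
n_x, n_h = 3, 5               # hypothetical input and hidden sizes
params = [rng.normal(0, 0.1, (n_h, n_h + n_x)) for _ in range(4)]
h, c = np.zeros(n_h), np.zeros(n_h)
for t in range(10):
    h, c = lstm_step(rng.normal(size=n_x), h, c, params)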

Parameter optimization in models of the olfactory neural system

...