Corpus ID: 58820035

A learning rule for asynchronous perceptrons with feedback in a combinatorial environment

@inproceedings{Almeida1990ALR,
  title={A learning rule for asynchronous perceptrons with feedback in a combinatorial environment},
  author={Lu{\'i}s B. Almeida},
  year={1990}
}
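For orientation, the learning rule introduced here (derived independently by Pineda in 1987) trains a recurrent network by relaxing it to a fixed point, relaxing a linearized "error network" to a second fixed point, and then applying a purely local weight update. Below is a minimal NumPy sketch of that recipe; the network size, task, and learning rate are hypothetical, and this illustrates the general Almeida–Pineda scheme rather than the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda u: 1.0 / (1.0 + np.exp(-u))

n, n_out = 6, 2                          # last n_out units are outputs (hypothetical sizes)
W = 0.1 * rng.standard_normal((n, n))    # recurrent weights
x = rng.standard_normal(n)               # constant external input
t = np.array([1.0, 0.0])                 # target for the output units

lr = 0.2
for epoch in range(300):
    # Forward relaxation: y settles to a fixed point of y = sig(W y + x).
    y = np.zeros(n)
    for _ in range(100):
        y = sig(W @ y + x)
    d = y * (1.0 - y)                    # sigma'(u) at the fixed point

    g = np.zeros(n)
    g[-n_out:] = y[-n_out:] - t          # error, nonzero only at output units

    # Backward relaxation: v settles to the adjoint fixed point
    # v = W^T (d * v) + g -- no unrolling through time is needed.
    v = np.zeros(n)
    for _ in range(100):
        v = W.T @ (d * v) + g

    # Local gradient step: dE/dW_ij = v_i * sigma'(u_i) * y_j.
    W -= lr * np.outer(d * v, y)

y = np.zeros(n)
for _ in range(100):
    y = sig(W @ y + x)
print("outputs:", y[-n_out:], "target:", t)
```

The key point is that the backward pass is itself a relaxation in a network with the transposed weights, which is what makes the rule attractive as a local, biologically motivated alternative to unrolled backpropagation.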

Dual-mode dynamics neural network for combinatorial optimization
TLDR
A new approach to solving combinatorial optimization problems based on a novel dynamic neural network featuring two modes of network dynamics, the state dynamics and the weight dynamics, referred to here as the dual-mode dynamics neural network (D2NN).
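The D2NN's weight dynamics are not detailed in this summary, so the sketch below only illustrates the generic "state dynamics" half of such approaches: a standard Hopfield-style energy descent on a toy max-cut instance. All sizes and constants are hypothetical, and this is not the D2NN algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Adjacency matrix of a small undirected graph (hypothetical example data).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)

# For max-cut, driving s_i and s_j to opposite signs on every edge (i, j)
# minimizes the Hopfield energy E = tanh(s)^T A tanh(s).
s = rng.uniform(-0.1, 0.1, size=4)       # relaxed (continuous) states

for step in range(200):
    u = -A @ np.tanh(s)                  # local field pushing neighbors apart
    s += 0.1 * (u - s)                   # leaky integration toward the field

cut = np.sign(np.tanh(s))
print("partition:", cut)
print("edges cut:", int(np.sum(A[cut[:, None] != cut[None, :]]) / 2))
```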
Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
TLDR
It is shown that multi-layer recurrently connected networks with 1, 2, and 3 hidden layers can be trained by Equilibrium Propagation on the permutation-invariant MNIST task, which makes it more plausible that a mechanism similar to backpropagation could be implemented by brains.
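The two-phase recipe behind Equilibrium Propagation is simple enough to sketch: relax an energy-based network to a free equilibrium, relax it again with the output weakly nudged toward the target, and update each weight from the difference of local co-activations between the two phases. The sketch below assumes a small Hopfield-style network with a hard-sigmoid nonlinearity; the sizes, rates, and toy task are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
rho  = lambda s: np.clip(s, 0.0, 1.0)            # hard-sigmoid activation
drho = lambda s: ((s > 0) & (s < 1)).astype(float)

n_in, n_hid, n_out = 4, 8, 2
n = n_in + n_hid + n_out
out = slice(n - n_out, n)

W = 0.05 * rng.standard_normal((n, n))
W = (W + W.T) / 2                                # symmetric weights -> an energy exists
np.fill_diagonal(W, 0.0)

def relax(x, y=None, beta=0.0, steps=100, dt=0.1):
    """Descend the (optionally nudged) energy over the non-input units."""
    s = np.zeros(n)
    s[:n_in] = x                                 # inputs stay clamped
    for _ in range(steps):
        grad = s - drho(s) * (W @ rho(s))        # dE/ds for the Hopfield energy
        if beta != 0.0:                          # weakly nudge output toward target
            grad[out] += beta * (s[out] - y)
        s[n_in:] -= dt * grad[n_in:]
    return s

x, y = rng.uniform(size=n_in), np.array([1.0, 0.0])
beta, lr = 0.5, 0.1
for epoch in range(100):
    s_free  = relax(x)                           # phase 1: free equilibrium
    s_nudge = relax(x, y, beta)                  # phase 2: weakly clamped equilibrium
    # Contrastive, local update: difference of co-activations across phases.
    dW = (np.outer(rho(s_nudge), rho(s_nudge))
          - np.outer(rho(s_free), rho(s_free))) / beta
    W += lr * dW
    np.fill_diagonal(W, 0.0)

print("output after training:", relax(x)[out], "target:", y)
```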
Towards a Biologically Plausible Backprop
This work contributes several new elements to the quest for a biologically plausible implementation of backprop in brains. We introduce a very general and abstract framework for machine learning, in …
Biologically Plausible Error-Driven Learning Using Local Activation Differences: The Generalized Recirculation Algorithm
TLDR
All known fully general error-driven learning algorithms that use local activation-based variables in deterministic networks can be considered variations of the GeneRec algorithm (and indirectly, of the backpropagation algorithm).
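GeneRec's update is local and phase-based: the network settles once with only the input clamped (minus phase) and once with the target also clamped (plus phase), and each weight changes in proportion to the sender's activity times the receiver's cross-phase activation difference. A minimal sketch under those assumptions follows; the sizes, task, and learning rate are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

n_in, n_hid, n_out = 3, 6, 2
W_xh = 0.1 * rng.standard_normal((n_in, n_hid))   # input -> hidden
W_hy = 0.1 * rng.standard_normal((n_hid, n_out))  # hidden <-> output (used symmetrically)

def settle(x, y_clamped=None, steps=30):
    """Iterate to a fixed point; hidden gets bottom-up and top-down input."""
    y = np.zeros(n_out) if y_clamped is None else y_clamped
    h = np.zeros(n_hid)
    for _ in range(steps):
        h = sig(x @ W_xh + y @ W_hy.T)            # top-down uses the transpose
        if y_clamped is None:
            y = sig(h @ W_hy)                     # output is free in the minus phase
    return h, y

x, t = rng.uniform(size=n_in), np.array([1.0, 0.0])
lr = 0.2
for epoch in range(200):
    h_minus, y_minus = settle(x)                  # minus phase: free prediction
    h_plus,  _       = settle(x, y_clamped=t)     # plus phase: target clamped
    W_hy += lr * np.outer(h_minus, t - y_minus)   # receiver difference at output
    W_xh += lr * np.outer(x, h_plus - h_minus)    # receiver difference at hidden

print("prediction:", settle(x)[1], "target:", t)
```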
Parameter optimization in models of the olfactory neural system
A Convergence Result for Learning in Recurrent Neural Networks
TLDR
A rigorous analysis is given of the convergence properties of a backpropagation algorithm for recurrent networks containing either output or hidden-layer recurrence, and restrictions are offered that may help assure convergence of the network parameters to a local optimum.
Automated Reasoning
  • R. Boyer
  • Computer Science
    Automated Reasoning Series
  • 1991
TLDR
A version of the theorem prover IMPLY is presented for proving theorems in the theory of non-standard analysis; these theorems are statements of the equivalence between the standard and non-standard definitions of concepts from analysis.
Differentiable Forward and Backward Fixed-Point Iteration Layers
TLDR
Experiments show that the fixed-point iteration (FPI) layer can be successfully applied to real-world problems such as image denoising, optical flow, and multi-label classification.
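An FPI layer iterates z ← f(z, x) to an equilibrium in the forward pass and backpropagates through that equilibrium with the implicit function theorem rather than by unrolling the iterations. Below is a minimal NumPy sketch with a hypothetical contraction f and toy loss; real FPI/DEQ layers would rely on autodiff rather than hand-written Jacobians.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = 0.15 * rng.standard_normal((n, n))       # small weights keep f a contraction
x = rng.standard_normal(n)

f = lambda z: np.tanh(W @ z + x)

# Forward pass: Picard iteration to the fixed point z* = f(z*, x).
z = np.zeros(n)
for _ in range(100):
    z = f(z)

# Backward pass: for a loss L(z*) with gradient g = dL/dz*, the implicit
# function theorem gives dL/dx = g^T (I - J)^{-1} D, where J = df/dz and
# D = df/dx, both evaluated at the equilibrium.
g = 2 * z                                    # gradient of the toy loss L = ||z*||^2
D = np.diag(1.0 - z ** 2)                    # tanh' = 1 - tanh^2; tanh(Wz* + x) = z*
J = D @ W                                    # df/dz; here df/dx = D
v = np.linalg.solve((np.eye(n) - J).T, g)
dL_dx = v @ D

# Finite-difference check: the implicit gradient should match brute force.
eps, i = 1e-5, 0
xp = x.copy(); xp[i] += eps
zp = np.zeros(n)
for _ in range(100):
    zp = np.tanh(W @ zp + xp)
print(dL_dx[i], (zp @ zp - z @ z) / eps)     # these should agree closely
```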
Equilibrium Propagation with Continual Weight Updates
TLDR
It is proved theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT.
On the Iteration Complexity of Hypergradient Computation
TLDR
A unified analysis is presented which for the first time allows these methods to be compared quantitatively, providing explicit bounds on their iteration complexity and suggesting a hierarchy among them in terms of computational efficiency.
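The trade-off being bounded here can be seen on a toy bilevel problem: approximate implicit differentiation (AID) estimates the hypergradient by running an inner linear solver for K iterations, and the approximation error shrinks as K grows. The quadratic inner problem and all constants below are hypothetical, chosen so the exact hypergradient is available in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # well-conditioned inner Hessian
B = rng.standard_normal((n, n))
lam = rng.standard_normal(n)
w_tgt = rng.standard_normal(n)

# Inner problem: w*(lam) = argmin_w 0.5 w^T A w - w^T B lam  =>  A w* = B lam.
# Outer loss:    f(w) = 0.5 ||w - w_tgt||^2.
w_star = np.linalg.solve(A, B @ lam)
grad_f = w_star - w_tgt

# Exact hypergradient by the implicit function theorem:
#   df/dlam = B^T A^{-T} grad_f   (since dw*/dlam = A^{-1} B).
exact = B.T @ np.linalg.solve(A.T, grad_f)

# AID: approximate v = A^{-T} grad_f with K steps of Richardson iteration,
# v <- v - eta (A^T v - grad_f). More inner iterations -> smaller error.
eta = 1.0 / np.linalg.norm(A, 2)
for K in (1, 10, 100):
    v = np.zeros(n)
    for _ in range(K):
        v = v - eta * (A.T @ v - grad_f)
    err = np.linalg.norm(B.T @ v - exact)
    print(f"K={K:4d}  hypergradient error = {err:.2e}")
```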