Corpus ID: 244117039

Neural optimal feedback control with local learning rules

@inproceedings{Friedrich2021NeuralOF,
  title={Neural optimal feedback control with local learning rules},
  author={Johannes Friedrich and Siavash Golkar and Shiva Farashahi and Alexander Genkin and Anirvan M. Sengupta and Dmitri B. Chklovskii},
  booktitle={NeurIPS},
  year={2021}
}
A major problem in motor control is understanding how the brain plans and executes proper movements in the face of delayed and noisy stimuli. A prominent framework for addressing such control problems is Optimal Feedback Control (OFC). OFC generates control actions that optimize behaviorally relevant criteria by integrating noisy sensory stimuli and the predictions of an internal model using the Kalman filter or its extensions. However, a satisfactory neural model of Kalman filtering and… 
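The integration the abstract describes, fusing noisy sensory stimuli with an internal model's predictions, follows the standard Kalman predict/update cycle. The sketch below is a plain textbook Kalman step, not the paper's neural implementation; the matrices `A`, `C`, `Q`, `R` are illustrative assumptions.

```python
import numpy as np

def kalman_step(x_hat, P, y, A, C, Q, R):
    """One predict/update cycle: fuse the internal model's
    prediction with a noisy sensory observation y."""
    # Predict: propagate the estimate through the internal model.
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update: weight the sensory prediction error by the Kalman gain.
    S = C @ P_pred @ C.T + R             # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x_hat)) - K @ C) @ P_pred
    return x_new, P_new
```

Run over a sequence of observations, the filtered estimate tracks the latent state with lower error than the raw observations, which is the behavioral advantage OFC exploits.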

Citations

Kalman filters as the steady-state solution of gradient descent on variational free energy
TLDR: This work presents a straightforward derivation of Kalman filters consistent with active inference via a variational treatment of free energy minimisation in terms of gradient descent, offering a more direct link between models of neural dynamics as gradient descent and standard accounts of perception and decision making based on probabilistic inference.
Beyond accuracy: generalization properties of bio-plausible temporal credit assignment rules
TLDR: This analysis is the first to identify the reason for the generalization gap between artificial and biologically plausible learning rules, and can help guide future investigations into how the brain learns solutions that generalize.

References

Showing 1-10 of 51 references
Optimal Sensorimotor Integration in Recurrent Cortical Networks: A Neural Implementation of Kalman Filters
TLDR: It is proposed that the neural implementation of this Kalman filter involves recurrent basis function networks with attractor dynamics, a kind of architecture that can be readily mapped onto cortical circuits.
Neural network learning of optimal Kalman prediction and control
Neural Kalman Filtering
TLDR: It is shown that a gradient-descent approximation to the Kalman filter requires only local computations with variance-weighted prediction errors, and that it is possible under the same scheme to adaptively learn the dynamics model with a learning rule that corresponds directly to Hebbian plasticity.
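The variance-weighted gradient-descent idea summarized above can be illustrated concretely: rather than computing the Kalman gain in closed form, the estimate relaxes toward the posterior mean by descending a sum of locally computable prediction errors. This is a minimal sketch under assumed linear-Gaussian terms, not the paper's circuit; `eta`, the iteration count, and the matrices are illustrative.

```python
import numpy as np

def grad_kalman_update(x_pred, y, C, R_inv, P_inv, eta=0.1, n_steps=200):
    """Relax the state estimate by gradient descent on the sum of
    variance-weighted sensory and model prediction errors; at
    convergence this equals the Kalman-filter posterior mean."""
    x = x_pred.copy()
    for _ in range(n_steps):
        sensory_err = C.T @ R_inv @ (y - C @ x)  # bottom-up, precision-weighted
        model_err = P_inv @ (x - x_pred)         # top-down, precision-weighted
        x += eta * (sensory_err - model_err)     # local update rule
    return x
```

Each term depends only on a locally available prediction error, which is what makes this scheme attractive as a neural approximation to exact Kalman filtering.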
Optimal feedback control as a theory of motor coordination
TLDR: This work proposes an alternative theory of motor coordination based on stochastic optimal feedback control, and shows that the optimal strategy in the face of uncertainty, which emerges naturally from this framework, is to allow variability in redundant (task-irrelevant) dimensions.
Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception
TLDR: The Neural Particle Filter is proposed, a sampling-based nonlinear Bayesian filter, which can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons, and holds the promise to avoid the 'curse of dimensionality'.
The computational and neural basis of voluntary motor control and planning
  • S. Scott, Trends in Cognitive Sciences, 2012
The interplay between cerebellum and basal ganglia in motor adaptation: A modeling study
TLDR: This work uses mathematical modeling to simulate control of planar reaching movements that relies on both error-based and non-error-based learning mechanisms, and suggests that for learning to be efficient only one of these mechanisms should be active at a time.
A Tour of Reinforcement Learning: The View from Continuous Control
  • B. Recht, Annual Review of Control, Robotics, and Autonomous Systems, 2019
This article surveys reinforcement learning from the perspective of optimization and control, with a focus on continuous control applications. It reviews the general formulation, terminology, and…
A Neural Implementation of the Kalman Filter
TLDR: This paper focuses on the Bayesian filtering of stochastic time series and introduces a novel neural network, derived from a line attractor architecture, whose dynamics map directly onto those of the Kalman filter in the limit of small prediction error.
Adaptive representation of dynamics during learning of a motor task
TLDR: The investigation of how the CNS learns to control movements in different dynamical conditions, and how this learned behavior is represented, suggests that the elements of the adaptive process represent the dynamics of a motor task in terms of the intrinsic coordinate system of the sensors and actuators.
...