# The least-control principle for learning at equilibrium

@article{Meulemans2022TheLP, title={The least-control principle for learning at equilibrium}, author={Alexander Meulemans and Nicolas Zucchet and Seijin Kobayashi and Johannes von Oswald and Jo{\~a}o Sacramento}, journal={ArXiv}, year={2022}, volume={abs/2207.01332} }

Equilibrium systems are a powerful way to express neural computations. As special cases, they include models of great current interest in both neuroscience and machine learning, such as equilibrium recurrent neural networks, deep equilibrium models, or meta-learning. Here, we present a new principle for learning such systems with a temporally and spatially local rule. Our principle casts learning as a least-control problem, where we first introduce an optimal controller to lead the system…
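The equilibrium systems the abstract refers to compute by settling into a fixed point of their dynamics rather than by running a fixed-depth forward pass. A minimal sketch of this idea, assuming a simple `tanh` recurrent update with contractive weights (the specific update rule and scaling here are illustrative, not the paper's model):

```python
import numpy as np

def equilibrium(W, x, b, tol=1e-8, max_iter=1000):
    """Find a fixed point h* = tanh(W h* + x + b) by naive iteration.

    Assumes W is contractive (spectral norm < 1) so the iteration converges.
    """
    h = np.zeros_like(b)
    for _ in range(max_iter):
        h_new = np.tanh(W @ h + x + b)
        if np.linalg.norm(h_new - h) < tol:
            return h_new
        h = h_new
    return h

rng = np.random.default_rng(0)
n = 8
W = rng.normal(size=(n, n)) * 0.1  # small weights keep the map contractive
x = rng.normal(size=n)
b = np.zeros(n)
h_star = equilibrium(W, x, b)
# At convergence, h_star satisfies the fixed-point equation up to tol
residual = np.linalg.norm(h_star - np.tanh(W @ h_star + x + b))
```

The least-control principle then asks how to adjust `W` so that the equilibrium itself moves toward a target, using a controller's correction signal rather than backpropagation through the iteration.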

## References

Showing 1–10 of 107 references

### Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network

- Computer Science, eLife
- 2017

A supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons; FOLLOW learning is shown to be uniformly stable, with the error going to zero asymptotically.

### Credit Assignment in Neural Networks through Deep Feedback Control

- Computer Science, NeurIPS
- 2021

Deep Feedback Control is introduced: a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target, whose control signal can be used for credit assignment, and which approximates Gauss-Newton optimization for a wide range of feedback connectivity patterns.

### Deep Equilibrium Models

- Computer Science, NeurIPS
- 2019

It is shown that DEQs often improve performance over these state-of-the-art models (for similar parameter counts), have similar computational requirements to existing models, and vastly reduce memory consumption (often the bottleneck for training large sequence models), demonstrating an up to 88% memory reduction in the authors' experiments.

### A deep learning theory for neural networks grounded in physics

- Computer Science, ArXiv
- 2021

It is argued that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them, and an alternative mathematical framework is presented, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics.

### A Theoretical Framework for Target Propagation

- Computer Science, NeurIPS
- 2020

This work analyzes target propagation (TP), a popular but not yet fully understood alternative to BP, from the standpoint of mathematical optimization and shows that TP is closely related to Gauss-Newton optimization and thus substantially differs from BP.

### Minimizing Control for Credit Assignment with Strong Feedback

- Computer Science, ICML
- 2022

This work presents a fundamentally novel view of learning as control minimization, while sidestepping biologically unrealistic assumptions in gradient-based learning for deep neural networks.

### A deep learning framework for neuroscience

- Computer Science, Nature Neuroscience
- 2019

It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.

### Biological credit assignment through dynamic inversion of feedforward networks

- Computer Science, NeurIPS
- 2020

This work shows that feedforward network transformations can be effectively inverted through dynamics, and derives this dynamic inversion from the perspective of feedback control, where the forward transformation is reused and dynamically interacts with fixed or random feedback to propagate error signals during the backward pass.

### Stable and expressive recurrent vision models

- Computer Science, NeurIPS
- 2020

It is demonstrated that recurrent vision models trained with C-RBP can detect long-range spatial dependencies in a synthetic contour tracing task that BPTT-trained models cannot and outperform the leading feedforward approach.