Demonstration of Decentralized Physics-Driven Learning
@article{Dillavou2021DemonstrationOD,
  title   = {Demonstration of Decentralized Physics-Driven Learning},
  author  = {Sam Dillavou and Menachem Stern and Andrea J. Liu and Douglas J. Durian},
  journal = {Physical Review Applied},
  year    = {2021}
}
In typical artificial neural networks, neurons adjust according to global calculations of a central processor, but in the brain neurons and synapses self-adjust based on local information. Contrastive learning algorithms have recently been proposed to train physical systems, such as fluidic, mechanical, or electrical networks, to perform machine learning tasks from local evolution rules. However, to date such systems have only been implemented in silico due to the engineering challenge of…
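The contrastive scheme the abstract alludes to can be sketched in simulation. The toy resistor network, parameter values, and helper names below are illustrative assumptions, not taken from the paper: each edge conductance is nudged using only its own voltage drops in a "free" state and a "clamped" state (outputs pulled slightly toward their targets), so no central processor computes a global gradient.

```python
import numpy as np

def solve_network(k, edges, n_nodes, fixed):
    """Node voltages minimizing dissipated power sum_e k_e (V_i - V_j)^2,
    with the node voltages in `fixed` (node -> volts) held constant."""
    L = np.zeros((n_nodes, n_nodes))  # conductance-weighted graph Laplacian
    for (i, j), ke in zip(edges, k):
        L[i, i] += ke; L[j, j] += ke
        L[i, j] -= ke; L[j, i] -= ke
    fixed_nodes = list(fixed)
    free_nodes = [n for n in range(n_nodes) if n not in fixed]
    V = np.zeros(n_nodes)
    V[fixed_nodes] = list(fixed.values())
    # Kirchhoff's current law at the free nodes: L_ff V_f = -L_fc V_c
    rhs = -L[np.ix_(free_nodes, fixed_nodes)] @ V[fixed_nodes]
    V[free_nodes] = np.linalg.solve(L[np.ix_(free_nodes, free_nodes)], rhs)
    return V

def edge_drops(V, edges):
    return np.array([V[i] - V[j] for i, j in edges])

# Toy 4-node chain: node 0 is the source (1 V), node 3 is ground,
# and node 2 is the output node trained toward `target`.
edges = [(0, 1), (1, 2), (2, 3)]
k = np.ones(3)                      # edge conductances: the learning degrees of freedom
source, target = {0: 1.0, 3: 0.0}, 0.5
eta, alpha = 0.5, 0.1               # nudge amplitude, learning rate

initial_error = abs(solve_network(k, edges, 4, source)[2] - target)
for _ in range(300):
    V_free = solve_network(k, edges, 4, source)
    # Clamped phase: pull the output a fraction eta toward the target.
    v_clamp = V_free[2] + eta * (target - V_free[2])
    V_clamp = solve_network(k, edges, 4, {**source, 2: v_clamp})
    # Local contrastive rule: each edge compares only its own two voltage drops.
    k += (alpha / eta) * (edge_drops(V_free, edges) ** 2
                          - edge_drops(V_clamp, edges) ** 2)
    k = np.clip(k, 1e-3, None)      # keep conductances physical (positive)

final_error = abs(solve_network(k, edges, 4, source)[2] - target)
```

In this sketch the update for each edge uses purely local information, which is the property that makes such rules candidates for physical (rather than in-silico) implementation.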
15 Citations
Learning Without a Global Clock: Asynchronous Learning in a Physics-Driven Learning Network
- Computer Science, arXiv
- 2022
It is shown that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation, and actually improves performance by allowing the system to better explore the discretized state space of solutions.
Agnostic Physics-Driven Deep Learning
- Computer Science, arXiv
- 2022
This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (Æqprop) procedure that combines energy…
Learning by non-interfering feedback chemical signaling in physical networks
- Computer Science, arXiv
- 2022
This work proposes a new learning algorithm rooted in chemical signaling that does not require storage of two different states and proves that the algorithm performs gradient descent.
Deep physical neural networks trained with backpropagation
- Computer Science, Nature
- 2022
A hybrid in situ–in silico algorithm that applies backpropagation is used to train layers of controllable physical systems to carry out calculations like deep neural networks, but accounting for real-world noise and imperfections.
Out of equilibrium learning dynamics in physical allosteric resistor networks
- Physics
- 2021
Physical networks can learn desirable functions using local learning rules in space and time. Real learning systems, like natural neural networks, can learn out of equilibrium, on timescales…
Model architecture can transform catastrophic forgetting into positive transfer
- Computer Science, Scientific Reports
- 2022
This work introduces a neural network that learns an algorithm, improving its predictive power on unseen pairs of numbers as training progresses, and emphasizes the importance of network architecture for the emergence of catastrophic forgetting.
Network architecture determines vein fate during spontaneous reorganization, with a time delay
- Computer Science
- 2021
Network-wide vein dynamics and shear during spontaneous reorganization in the prototypical vascular networks of Physarum polycephalum are resolved, and a model for vascular adaptation based on force balance at the vein walls is derived, which reproduces the diversity of experimentally observed vein dynamics and confirms the role of network architecture.
Using binary-stiffness beams within mechanical neural-network metamaterials to learn
- Engineering, Smart Materials and Structures
- 2023
This work introduces the concept of applying binary-stiffness beams within a lattice to achieve a mechanical neural-network (MNN) metamaterial that learns its behaviors and properties with prolonged…
Photonic online learning: a perspective
- Computer Science, Nanophotonics
- 2023
It is argued that some form of online learning will be necessary if photonic neuromorphic hardware is to achieve its true potential, and the online learning paradigm is examined.
Vein fate determined by flow-based but time-delayed integration of network architecture
- Engineering, bioRxiv
- 2023
Veins in vascular networks, such as in blood vasculature or leaf networks, continuously reorganize, grow or shrink, to minimize energy dissipation. Flow shear stress on vein walls has been set forth…
References
SHOWING 1-10 OF 62 REFERENCES
A deep learning theory for neural networks grounded in physics
- Computer Science, arXiv
- 2021
It is argued that building large, fast and efficient neural networks on neuromorphic architectures requires rethinking the algorithms to implement and train them, and an alternative mathematical framework is presented, also compatible with SGD, which offers the possibility to design neural networks in substrates that directly exploit the laws of physics.
Reinforcement learning with analogue memristor arrays
- Computer Science, Nature Electronics
- 2019
An experimental demonstration of reinforcement learning on a three-layer 1-transistor 1-memristor (1T1R) network using a modified learning algorithm tailored for the authors' hybrid analogue–digital platform, which has the potential to achieve a significant boost in speed and energy efficiency.
Supervised Learning in Physical Networks: From Machine Learning to Learning Machines
- Biology
- 2020
By applying and adapting advances of statistical learning theory to the physical world, the plausibility of new classes of smart metamaterials capable of adapting to users' needs in-situ is demonstrated.
EqSpike: spike-driven equilibrium propagation for neuromorphic implementations
- Computer Science, iScience
- 2021
Memristive neural network for on-line learning and tracking with brain-inspired spike timing dependent plasticity
- Computer Science, Biology, Scientific Reports
- 2017
Unsupervised learning of a static pattern and tracking of a dynamic pattern of up to 4 × 4 pixels are demonstrated, paving the way for intelligent hardware technology with up-scaled memristive neural networks.
A deep learning framework for neuroscience
- Computer Science, Nature Neuroscience
- 2019
It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.
BP-STDP: Approximating Backpropagation using Spike Timing Dependent Plasticity
- Computer Science, Neurocomputing
- 2019
Towards Biologically Plausible Deep Learning
- Computer Science, arXiv
- 2015
The theory about the probabilistic interpretation of auto-encoders is extended to justify improved sampling schemes based on the generative interpretation of denoising auto-encoders, and these ideas are validated on generative learning tasks.
Using Memristors for Robust Local Learning of Hardware Restricted Boltzmann Machines
- Computer Science, Scientific Reports
- 2019
This work proposes a pulse width selection scheme based on the sign of two successive weight updates, and shows that it removes the constraint to precisely tune the initial programming pulse width as a hyperparameter, and brings a partial immunity against the most severe memristive device imperfections.
Learning Without a Global Clock: Asynchronous Learning in a Physics-Driven Learning Network
- Computer Science, arXiv
- 2022
It is shown that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation, and actually improves performance by allowing the system to better explore the discretized state space of solutions.