Demonstration of Decentralized Physics-Driven Learning

@article{Dillavou2021DemonstrationOD,
  title={Demonstration of Decentralized Physics-Driven Learning},
  author={Sam Dillavou and Menachem Stern and Andrea J. Liu and Douglas J. Durian},
  journal={Physical Review Applied},
  year={2021}
}
In typical artificial neural networks, neurons adjust according to global calculations of a central processor, but in the brain neurons and synapses self-adjust based on local information. Contrastive learning algorithms have recently been proposed to train physical systems, such as fluidic, mechanical, or electrical networks, to perform machine learning tasks from local evolution rules. However, to date such systems have only been implemented in silico due to the engineering challenge of… 
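The hardware in this paper implements coupled learning, the contrastive scheme the abstract alludes to: each edge of a resistor network compares its own voltage drop in a "free" state (inputs applied) and a "clamped" state (outputs nudged toward their targets), and adjusts its conductance from that purely local comparison. A minimal simulation sketch follows; the network topology, node roles, targets, and hyperparameters are illustrative assumptions rather than values from the paper, and the continuous update idealizes the discrete resistor steps used in the experiment.

```python
import numpy as np

# Coupled-learning sketch on a toy linear resistor network (hypothetical
# topology, not the authors' circuit). Node voltages obey Kirchhoff's laws,
# obtained by solving the weighted graph-Laplacian system with source and
# clamped nodes held at fixed voltages.

nodes = 6
edges = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 3),
         (2, 4), (2, 5), (3, 4), (3, 5), (4, 5)]
k = np.ones(len(edges))            # edge conductances: the learning degrees of freedom

inputs, outputs = [0, 1], [4, 5]   # assumed input/output node indices
V_in = np.array([1.0, -1.0])       # applied input voltages
V_target = np.array([0.3, -0.2])   # desired output voltages (toy regression task)
eta, gamma = 0.1, 0.05             # nudge amplitude and learning rate (assumed)

def solve(k, fixed):
    """Node voltages given conductances and {node: voltage} boundary conditions."""
    L = np.zeros((nodes, nodes))   # weighted graph Laplacian
    for (i, j), ke in zip(edges, k):
        L[i, i] += ke; L[j, j] += ke
        L[i, j] -= ke; L[j, i] -= ke
    bdry = list(fixed)
    free = [n for n in range(nodes) if n not in fixed]
    V = np.zeros(nodes)
    V[bdry] = list(fixed.values())
    # Kirchhoff's current law at the unconstrained nodes: L_ff V_f = -L_fb V_b
    V[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, bdry)] @ V[bdry])
    return V

for step in range(2000):
    # Free state: only the inputs are imposed.
    VF = solve(k, dict(zip(inputs, V_in)))
    # Clamped state: outputs are nudged a fraction eta toward the targets.
    V_clamp = VF[outputs] + eta * (V_target - VF[outputs])
    VC = solve(k, {**dict(zip(inputs, V_in)), **dict(zip(outputs, V_clamp))})
    # Local rule: each edge sees only its own voltage drops in the two states.
    dVF = np.array([VF[i] - VF[j] for i, j in edges])
    dVC = np.array([VC[i] - VC[j] for i, j in edges])
    k += (gamma / eta) * (dVF**2 - dVC**2)   # descends the free/clamped contrast
    k = np.clip(k, 1e-3, None)               # keep conductances physical (positive)

print("outputs:", solve(k, dict(zip(inputs, V_in)))[outputs], "targets:", V_target)
```

Because the update for each edge depends only on that edge's own voltage drops, no central processor computes gradients, which is what makes the scheme implementable in analog hardware.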

Citations

Learning Without a Global Clock: Asynchronous Learning in a Physics-Driven Learning Network

It is shown that desynchronizing the learning process does not degrade performance for a variety of tasks in an idealized simulation, and actually improves performance by allowing the system to better explore the discretized state space of solutions.
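To make the desynchronization concrete: in a simulation like the coupled-learning sketch above, an asynchronous variant lets each edge fire its local rule on its own schedule rather than in lockstep. A minimal, purely illustrative model, assuming a random per-edge schedule (not necessarily the paper's exact protocol), is:

```python
import numpy as np

def async_update(k, dVF, dVC, gamma=0.05, eta=0.1, p=0.5, rng=None):
    """One desynchronized coupled-learning step: each edge applies its local
    rule independently, modeled here as firing with probability p per cycle."""
    if rng is None:
        rng = np.random.default_rng()
    active = rng.random(k.shape) < p          # independent per-edge schedules
    k = k.copy()
    k[active] += (gamma / eta) * (dVF[active]**2 - dVC[active]**2)
    return np.clip(k, 1e-3, None)             # keep conductances positive
```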

Agnostic Physics-Driven Deep Learning

This work establishes that a physical system can perform statistical learning without gradient computations, via an Agnostic Equilibrium Propagation (Æqprop) procedure that combines energy…

Learning by non-interfering feedback chemical signaling in physical networks

This work proposes a new learning algorithm rooted in chemical signaling that does not require storage of two different states and proves that the algorithm performs gradient descent.

Deep physical neural networks trained with backpropagation

A hybrid in situ–in silico algorithm applies backpropagation to train layers of controllable physical systems to carry out calculations in the manner of deep neural networks, while accounting for real-world noise and imperfections.

Out of equilibrium learning dynamics in physical allosteric resistor networks

Physical networks can learn desirable functions using learning rules that are local in space and time. Real learning systems, like natural neural networks, can learn out of equilibrium, on timescales…

Model architecture can transform catastrophic forgetting into positive transfer

This work introduces a neural network that learns an algorithm, improving its predictive power on unseen pairs of numbers as training progresses, and emphasizes the importance of network architecture for the emergence of catastrophic forgetting.

Network architecture determines vein fate during spontaneous reorganization, with a time delay

Network-wide vein dynamics and shear during spontaneous reorganization in the prototypical vascular networks of Physarum polycephalum are resolved, and a model for vascular adaptation based on force balance at the vein walls is derived; the model reproduces the diversity of experimentally observed vein dynamics and confirms the role of network architecture.

Using binary-stiffness beams within mechanical neural-network metamaterials to learn

This work introduces the concept of applying binary-stiffness beams within a lattice to achieve a mechanical neural-network (MNN) metamaterial that learns its behaviors and properties with prolonged…

Photonic online learning: a perspective

It is argued that some form of online learning will be necessary if photonic neuromorphic hardware is to achieve its true potential, and the online learning paradigm is examined.

Vein fate determined by flow-based but time-delayed integration of network architecture

Veins in vascular networks, such as in blood vasculature or leaf networks, continuously reorganize, growing or shrinking to minimize energy dissipation. Flow shear stress on vein walls has been set forth…
