# Experimentally realized in situ backpropagation for deep learning in nanophotonic neural networks

@article{Pai2022ExperimentallyRI,
  title={Experimentally realized in situ backpropagation for deep learning in nanophotonic neural networks},
  author={Sunil Pai and Zhanghao Sun and Tyler W. Hughes and Taewon Park and Ben Bartlett and Ian A. D. Williamson and Momchil Minkov and Maziyar Milanizadeh and Nathnael Abebe and Francesco Morichetti and Andrea Melloni and Shanhui Fan and Olav Solgaard and David A. B. Miller},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.08501}
}

Neural networks are widely deployed models across many scientific disciplines and commercial endeavors ranging from edge computing and sensing to large-scale signal processing in data centers. The most efficient and well-entrenched method to train such networks is backpropagation, or reverse-mode automatic differentiation. To counter an exponentially increasing energy budget in the artificial intelligence computing sector, there has been recent interest in analog implementations of neural networks…
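For readers unfamiliar with the training method the abstract refers to, the following is a minimal illustrative sketch of backpropagation (reverse-mode automatic differentiation) on a single-weight "network" — it is not the paper's photonic implementation, and all names here are invented for illustration:

```python
import math

def forward(w, x, t):
    """Forward pass: one neuron y = tanh(w*x) with squared-error loss."""
    z = w * x            # pre-activation
    y = math.tanh(z)     # nonlinear activation
    loss = (y - t) ** 2  # squared error against target t
    return z, y, loss

def backward(w, x, t):
    """Reverse-mode pass: propagate d(loss)/d(node) from the loss back to w."""
    z, y, loss = forward(w, x, t)
    dloss_dy = 2.0 * (y - t)      # loss node
    dy_dz = 1.0 - y * y           # derivative of tanh at z
    dloss_dz = dloss_dy * dy_dz   # chain rule through the activation
    dloss_dw = dloss_dz * x       # chain rule back to the weight
    return loss, dloss_dw

# One gradient-descent step using the backpropagated gradient.
w, x, t, lr = 0.5, 1.0, 0.9, 0.1
loss_before, grad = backward(w, x, t)
w -= lr * grad
loss_after, _ = backward(w, x, t)
```

In reverse mode, every intermediate node's sensitivity is computed in a single backward sweep, which is why it scales well to deep networks; the in situ scheme in the paper realizes this sweep physically with light propagating through the chip.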

## 2 Citations

Netcast: Low-Power Edge Computing with WDM-defined Optical Neural Networks

- Computer Science · ArXiv
- 2022

This paper analyzes the performance and energy limits of Netcast, a recently proposed optical neural-network architecture designed for edge computing, deriving analytic expressions for these limits and verifying the bounds with numerical simulations.

Experimental evaluation of digitally-verifiable photonic computing for blockchain and cryptocurrency

- Computer Science · ArXiv
- 2022

This paper theoretically and experimentally demonstrates that a more general family of robust discrete analog cryptographic hash functions, which it introduces as LightHash, leverages integer matrix-vector operations on photonic mesh networks of interferometers to preserve inherent security guarantees present in the Bitcoin protocol.

## References

Showing 1–10 of 66 references

Efficient On-Chip Training of Optical Neural Networks Using Genetic Algorithm

- Computer Science · ACS Photonics
- 2021

This work experimentally demonstrates an efficient, physics-agnostic, and closed-loop protocol for training optical neural networks on chip that works for various types of chip structures and is especially helpful to those that cannot be analytically decomposed and characterized.

Parallel Programming of an Arbitrary Feedforward Photonic Network

- Computer Science · IEEE Journal of Selected Topics in Quantum Electronics
- 2020

A graph-topological approach is introduced that defines the general class of feedforward networks and identifies columns of non-interacting nodes that can be adjusted in parallel by nullifying the power in one output of each node via optoelectronic feedback onto adjustable phase shifters or couplers.

Deep physical neural networks trained with backpropagation

- Computer Science · Nature
- 2022

This work introduces a hybrid in situ–in silico algorithm, called physics-aware training, that applies backpropagation to controllable physical systems, enabling deep physical neural networks to be trained even when the physical layers lack any mathematical isomorphism to conventional artificial neural network layers.

Parallel fault-tolerant programming of an arbitrary feedforward photonic network

- Computer Science · ArXiv
- 2019

A graph-topological approach is introduced that defines the general class of feedforward networks commonly used in such applications and identifies columns of non-interacting nodes that can be adjusted simultaneously and can reduce the programming time by a factor of order $N$ to being proportional to the optical depth (or number of node columns in the device).

Hybrid training of optical neural networks

- Computer Science · Optica
- 2022

This work demonstrates hybrid training of optical neural networks where the weight matrix is trained with neuron activation functions computed optically via forward propagation through the network, and shows that hybrid training is robust against different kinds of static noise.

Photonic Multiply-Accumulate Operations for Neural Networks

- Computer Science · IEEE Journal of Selected Topics in Quantum Electronics
- 2020

This work describes the performance of photonic and electronic hardware underlying neural network models using multiply-accumulate operations, and investigates the limits of analog electronic crossbar arrays and on-chip photonic linear computing systems.

Reprogrammable Electro-Optic Nonlinear Activation Functions for Optical Neural Networks

- Computer Science, Physics · IEEE Journal of Selected Topics in Quantum Electronics
- 2020

An electro-optic hardware platform for nonlinear activation functions in optical neural networks is introduced; it achieves complete nonlinear on–off contrast in transmission at relatively low optical power thresholds and eliminates the need for additional optical sources between the layers of the network.

Hardware error correction for programmable photonics

- Physics · ArXiv
- 2021

A deterministic approach to correcting circuit errors by locally correcting hardware errors within individual optical gates is presented and applied to simulations of large-scale optical neural networks and infinite impulse response filters implemented in programmable photonics, finding that they remain resilient to component error well beyond modern-day process tolerances.

Reinforcement and backpropagation training for an optical neural network using self-lensing effects

- Physics · IEEE Trans. Neural Networks Learn. Syst.
- 2000

The obtained results lay the groundwork for the implementation of multilayer neural networks that are trained using optical error backpropagation and are able to solve more complex problems.

An optical neural chip for implementing complex-valued neural network

- Computer Science · Nature Communications
- 2021

Strong learning capabilities (i.e., high accuracy, fast convergence and the capability to construct nonlinear decision boundaries) are achieved by the complex-valued ONC compared to its real-valued counterpart.