Corpus ID: 209376179

The Recurrent Processing Unit: Hardware for High Speed Machine Learning

@article{Komkov2019TheRP,
  title={The Recurrent Processing Unit: Hardware for High Speed Machine Learning},
  author={Heidi Komkov and Alessandro Restelli and Brian R. Hunt and Liam Shaughnessy and Itamar Shani and Daniel P. Lathrop},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.07363}
}
Machine learning applications are computationally demanding and power intensive. Hardware acceleration of these software tools is a natural step being explored with various technologies. The recurrent processing unit (RPU) is fast, power-efficient hardware for machine learning under development at the University of Maryland. It comprises a recurrent neural network and a trainable output vector, forming a hardware implementation of a reservoir computer. The reservoir is currently realized on…
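As a rough software sketch of the architecture the abstract describes (a fixed random recurrent network whose state is read by a single trained output vector), the following Python example trains an echo state network readout by ridge regression. The reservoir size, toy input signal, and regularization strength are illustrative assumptions, not details of the RPU, which realizes its reservoir in hardware.

import numpy as np

# Minimal reservoir computer (echo state network) in software.
# The recurrent weights W and input weights W_in stay fixed and random;
# only the linear readout W_out is trained, here by ridge regression.
# All sizes and the toy signal are illustrative assumptions.

rng = np.random.default_rng(seed=0)
N, steps = 200, 1000                       # reservoir size, time steps

W_in = rng.uniform(-0.5, 0.5, size=(N,))   # input weights (scalar input)
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

u = np.sin(0.1 * np.arange(steps + 1))     # toy input signal
x = np.zeros(N)
states = np.empty((steps, N))
for t in range(steps):
    x = np.tanh(W @ x + W_in * u[t])       # reservoir state update
    states[t] = x

# Train the readout to predict the next input sample (one step ahead).
target = u[1:steps + 1]
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(N),
                        states.T @ target)
print("training MSE:", np.mean((states @ W_out - target) ** 2))

Because only W_out is adjusted, training reduces to a single linear solve; this is what makes reservoir computing attractive for fast hardware implementations.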

References

Showing 1-10 of 14 references
Rapid Time Series Prediction with a Hardware-Based Reservoir Computer
A reservoir computing scheme with rapid processing speed in both the reservoir and the output layer; its utility is demonstrated by training a reservoir to learn the short- and long-term behavior of a chaotic system.
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It is based on principles of high-dimensional dynamical systems combined with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Reservoir Computing Trends
A brief introduction to the basic concepts, methods, insights, current developments, and some applications of reservoir computing (RC) is given.
Dynamics of analog logic-gate networks for machine learning.
Describes the continuous-time dynamics of networks implemented on Field Programmable Gate Arrays (FPGAs) and their use as ultrafast machine-learning processors via the technique of reservoir computing.
Experimental demonstration of reservoir computing on a silicon photonics chip.
Proposes the first integrated passive silicon photonics reservoir and demonstrates that this generic chip can perform arbitrary Boolean logic operations with memory, as well as 5-bit header recognition up to 12.5 Gbit/s, without power consumption in the reservoir.
Attractor reconstruction by machine learning.
Presents a theoretical framework describing conditions under which reservoir computing can create an empirical model capable of skillful short-term forecasts and accurate long-term ergodic behavior, and argues that the theory applies to certain other machine learning methods for time series prediction.
Phoneme Recognition with Large Hierarchical Reservoirs
Shows that the recently introduced concept of reservoir computing can form the basis of a methodology for speech recognition: in a limited amount of time, a reservoir system that recognizes the elementary sounds of continuous speech was built.
Ultrafast physical generation of random numbers using hybrid Boolean networks.
A high-speed physical random number generator based on a hybrid Boolean network with autonomous and clocked logic gates, realized on a reconfigurable chip that allows for parallel generation of random numbers.
Epileptic seizure detection using Reservoir Computing
Shows that reservoir computing can be successfully applied to real-time detection of epileptic seizures in electroencephalograms (EEGs), and that all conditions are met for a fast and reliable real-time detection system.
The "echo state" approach to analysing and training recurrent neural networks
The report introduces a constructive learning algorithm for recurrent neural networks, which modifies only the weights to output units in order to achieve the learning task.