Corpus ID: 209376179

The Recurrent Processing Unit: Hardware for High Speed Machine Learning

Heidi Komkov, Alessandro Restelli, Brian R. Hunt, Liam Shaughnessy, Itamar Shani, Daniel P. Lathrop
Machine learning applications are computationally demanding and power-intensive. Hardware acceleration of these software tools is a natural step and is being explored with various technologies. The recurrent processing unit (RPU), under development at the University of Maryland, is fast, power-efficient hardware for machine learning. It comprises a recurrent neural network and a trainable output vector, forming a hardware implementation of a reservoir computer. The reservoir is currently realized on… 
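The abstract describes the core reservoir-computing structure: a fixed recurrent network driven by the input, with only a linear output vector trained. A minimal pure-Python echo state network sketch of that idea follows; all sizes, weight scales, the toy sine-prediction task, and the ridge-regression readout are illustrative assumptions, not details of the RPU hardware.

```python
# Sketch of a reservoir computer (echo state network) in pure Python.
# The recurrent reservoir weights are random and fixed; only the linear
# readout is trained. Every parameter here is an illustrative assumption.
import random
from math import tanh, sin

random.seed(0)
N = 20          # reservoir size (illustrative)
LEAK = 0.5      # leak rate of the reservoir nodes
STEPS = 200     # training sequence length

# Fixed random reservoir and input weights -- never trained.
W = [[random.uniform(-0.1, 0.1) for _ in range(N)] for _ in range(N)]
w_in = [random.uniform(-1.0, 1.0) for _ in range(N)]

def run_reservoir(inputs):
    """Drive the reservoir with the input sequence; collect states."""
    x = [0.0] * N
    states = []
    for u in inputs:
        pre = [w_in[i] * u + sum(W[i][j] * x[j] for j in range(N))
               for i in range(N)]
        x = [(1 - LEAK) * x[i] + LEAK * tanh(pre[i]) for i in range(N)]
        states.append(list(x))
    return states

def ridge_readout(X, y, reg=1e-4):
    """Train only the readout: solve (X^T X + reg*I) w = X^T y."""
    n = len(X[0])
    A = [[reg if i == j else 0.0 for j in range(n)] for i in range(n)]
    b = [0.0] * n
    for xs, target in zip(X, y):
        for i in range(n):
            b[i] += xs[i] * target
            for j in range(n):
                A[i][j] += xs[i] * xs[j]
    # Gaussian elimination with partial pivoting (fine at this size).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w

# Toy task: one-step-ahead prediction of a sinusoid.
u_seq = [sin(0.3 * t) for t in range(STEPS + 1)]
X = run_reservoir(u_seq[:-1])
y = u_seq[1:]
w_out = ridge_readout(X, y)
pred = [sum(w_out[i] * xs[i] for i in range(N)) for xs in X]
err = sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)
```

Because the reservoir is never trained, a hardware version only needs fast fixed dynamics plus a programmable linear readout, which is what makes the approach attractive for FPGA or photonic implementations discussed in the papers below.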


Rapid Time Series Prediction with a Hardware-Based Reservoir Computer
A reservoir computing scheme is presented that achieves rapid processing in both the reservoir and the output layer; the utility of the technique is demonstrated by training a reservoir to learn the short- and long-term behavior of a chaotic system.
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
A new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks, based on principles of high-dimensional dynamical systems in combination with statistical learning theory and can be implemented on generic evolved or found recurrent circuitry.
Reservoir Computing Trends
A brief introduction to the basic concepts, methods, insights, current developments, and some applications of RC is given.
Dynamics of analog logic-gate networks for machine learning.
The continuous-time dynamics of networks implemented on Field-Programmable Gate Arrays are described, with the aim of using these FPGA networks as ultrafast machine-learning processors via the technique of reservoir computing.
Experimental demonstration of reservoir computing on a silicon photonics chip.
This work proposes the first integrated passive silicon photonics reservoir and demonstrates that this generic chip can be used to perform arbitrary Boolean logic operations with memory as well as 5-bit header recognition up to 12.5 Gbit/s, without power consumption in the reservoir.
Attractor reconstruction by machine learning.
A theoretical framework is presented that describes conditions under which reservoir computing can create an empirical model capable of skillful short-term forecasts and accurate long-term ergodic behavior; it is argued that the theory applies to certain other machine learning methods for time series prediction.
Phoneme Recognition with Large Hierarchical Reservoirs
It is shown that the recently introduced concept of Reservoir Computing might form the basis of a methodology for phoneme recognition, and that in a limited amount of time a reservoir system that can recognize the elementary sounds of continuous speech has been built.
Ultrafast physical generation of random numbers using hybrid Boolean networks.
A high-speed physical random number generator is demonstrated, based on a hybrid Boolean network with autonomous and clocked logic gates, realized on a reconfigurable chip that allows for parallel generation of random numbers.
Epileptic seizure detection using Reservoir Computing
It is shown that Reservoir Computing can be successfully applied to perform real-time detection of epileptic seizures in Electroencephalograms (EEGs), and that all conditions are met for a fast and reliable real-time detection system.
The "echo state" approach to analysing and training recurrent neural networks
The report introduces a constructive learning algorithm for recurrent neural networks that modifies only the weights to the output units in order to achieve the learning task.