The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed to leverage the computational properties of the analog, low-power data processing observed in biological systems.
The steady-state solution of filamentary memristive switching may be derived directly from the heat equation, modelling vertical and radial heat flow. This solution is shown to provide a continuous and accurate description of the evolution of the filament radius, composition, heat flow, and temperature during switching, and is shown to apply to a large …
Nonvolatile redox transistors (NVRTs) based upon Li-ion battery materials are demonstrated as memory elements for neuromorphic computer architectures with multi-level analog states, "write" linearity, low-voltage switching, and low power dissipation. Simulations of backpropagation using the device properties reach ideal classification accuracy.
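To illustrate what "multi-level analog states" implies for weight storage, here is a minimal sketch (the level counts, weight range, and function names are illustrative assumptions, not from the paper): continuous network weights are mapped onto a fixed number of evenly spaced device conductance states, as an ideally linear multi-level element would provide.

```python
import numpy as np

def quantize(w, n_levels, w_min=-1.0, w_max=1.0):
    """Map continuous weights onto n_levels evenly spaced analog states,
    emulating a write-linear multi-level nonvolatile device (illustrative)."""
    levels = np.linspace(w_min, w_max, n_levels)
    # For each weight, pick the nearest available device state.
    idx = np.abs(w[..., None] - levels).argmin(axis=-1)
    return levels[idx]

rng = np.random.default_rng(1)
w = rng.uniform(-1, 1, size=1000)

# More levels -> smaller worst-case mapping error (bounded by half the level spacing).
for n in [4, 16, 64]:
    err = np.max(np.abs(quantize(w, n) - w))
    print(f"{n:2d} levels -> max mapping error {err:.3f}")
```

With evenly spaced levels the worst-case mapping error is half the level spacing, i.e. `(w_max - w_min) / (2 * (n_levels - 1))`, which is why more analog states per device directly translate into higher weight precision.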
Neuromemristive systems (NMSs) are gaining traction as an alternative to conventional CMOS-based von Neumann systems because of their greater energy and area efficiency. A proposed NMS accelerator for machine-learning tasks reduced power dissipation by five orders of magnitude relative to a multicore reduced-instruction-set computing (RISC) processor.
Resistive memories enable dramatic energy reductions for neural algorithms. We propose a general-purpose neural architecture that can accelerate many different algorithms and determine the device properties that will be needed to run backpropagation on the neural architecture. To maintain high accuracy, the read noise standard deviation should be less than …
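The sensitivity of accuracy to read noise can be sketched with a toy experiment (the data, model, and noise scales below are illustrative assumptions, not the paper's simulation): each read of an analog weight is perturbed by zero-mean Gaussian noise of standard deviation sigma, and classification accuracy is measured as sigma grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: the label is the sign of the first feature.
n, d = 1000, 16
X = rng.normal(size=(n, d))
y = (X[:, 0] > 0).astype(int)

# Ideal linear readout: unit weight on the informative feature.
w = np.zeros(d)
w[0] = 1.0

def accuracy(weights):
    """Classification accuracy of a linear threshold readout."""
    return np.mean((X @ weights > 0) == y)

# Emulate analog read noise: every weight read is independently perturbed.
for sigma in [0.0, 0.1, 0.5, 2.0]:
    accs = [accuracy(w + rng.normal(scale=sigma, size=d)) for _ in range(100)]
    print(f"sigma={sigma:.1f}  mean accuracy={np.mean(accs):.3f}")
```

The qualitative trend matches the abstract's point: accuracy is robust while the read noise is small relative to the weight magnitudes, then collapses toward chance once the noise dominates, which is what motivates a quantitative bound on the read noise standard deviation.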
We show that, in tantalum oxide resistive memories, activation power provides a multi-level variable for information storage that can be set and read separately from the resistance. These two state variables (resistance and activation power) can be precisely controlled in two steps: (1) the possible activation power states are selected by partially reducing …
The brain is capable of massively parallel information processing while consuming only ∼1-100 fJ per synaptic event. Inspired by the efficiency of the brain, CMOS-based neural architectures and memristors are being developed for pattern recognition and machine learning. However, the volatility, design complexity and high supply voltages for CMOS …
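A back-of-the-envelope check puts the ∼1-100 fJ per synaptic event figure in context (the synapse count and firing rate below are common rough estimates, not taken from the abstract):

```python
# Rough estimate of total synaptic power in the brain (all inputs are
# order-of-magnitude assumptions for illustration).
n_synapses = 1e15            # ~10^15 synapses (rough estimate)
mean_rate_hz = 1.0           # ~1 Hz average firing rate (rough estimate)
energy_per_event_j = 10e-15  # 10 fJ, mid-range of the quoted 1-100 fJ

total_power_w = n_synapses * mean_rate_hz * energy_per_event_j
print(f"Estimated synaptic power: {total_power_w:.0f} W")
```

Ten femtojoules per event over ∼10^15 synapses firing at ∼1 Hz works out to roughly 10 W, consistent with the brain's total power budget of a few tens of watts and far below the energy cost of equivalent digital computation.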
Neural networks are an increasingly attractive algorithm for natural language processing and pattern recognition applications. Deep networks with >50M parameters are made possible by modern GPU clusters operating at <50 pJ per op and, more recently, by production accelerators capable of <5 pJ per operation at the board level. However, with the slowing of CMOS …