Kailash Gopalakrishnan

Storage-class memory (SCM) combines the benefits of a solid-state memory, such as high performance and robustness, with the archival capabilities and low cost of conventional hard-disk magnetic storage. Such a device would require a solid-state nonvolatile memory technology that could be manufactured at an extremely high effective areal density using some(More)
Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's(More)
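The abstract above notes that the rounding scheme matters in low-precision fixed-point training. As an illustrative sketch (not the paper's implementation), the snippet below contrasts two common schemes for quantizing to a fixed-point grid: deterministic round-to-nearest and stochastic rounding, which rounds up with probability proportional to the distance from the lower grid point and is unbiased in expectation. The function name and interface are hypothetical.

```python
import numpy as np

def quantize_fixed_point(x, frac_bits, mode="nearest", rng=None):
    """Quantize x to a fixed-point grid with `frac_bits` fractional bits.

    mode="nearest":    deterministic round-to-nearest.
    mode="stochastic": round up with probability equal to the fractional
                       remainder, so the result is unbiased in expectation.
    Illustrative only; assumed interface, not the paper's exact scheme.
    """
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    if mode == "nearest":
        q = np.floor(scaled + 0.5)
    elif mode == "stochastic":
        rng = rng if rng is not None else np.random.default_rng(0)
        lower = np.floor(scaled)
        # Round up where a uniform draw falls below the remainder.
        q = lower + (rng.random(scaled.shape) < (scaled - lower))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return q / scale
```

For example, with 3 fractional bits the grid step is 1/8: round-to-nearest maps 0.30 to 0.25 every time, while averaging many stochastic quantizations of 0.30 recovers a value close to 0.30, which is the property that makes stochastic rounding attractive for accumulating small gradient updates at low precision.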
The memory capacity, computational power, communication bandwidth, energy consumption, and physical size of the brain all tend to scale with the number of synapses, which outnumber neurons by a factor of 10,000. Although progress in cortical simulations using modern digital computers has been rapid, the essential disparity between the classical von Neumann(More)
BEOL-friendly Access Devices (AD) based on Cu-containing MIEC materials[1-4] are integrated in large (512×1024) arrays at 100% yield, and are successfully co-integrated together with Phase Change Memory (PCM). Numerous desirable attributes are demonstrated: the large currents (>200µA) needed for PCM, the bipolar operation required for high-performance RRAM,(More)
This paper highlights new opportunities for designing large-scale machine learning systems as a consequence of blurring traditional boundaries that have allowed algorithm designers and application-level practitioners to stay – for the most part – oblivious to the details of the underlying hardware-level implementations. The hardware/software co-design(More)