RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars

@article{Jain2021RxNNAF,
  title={RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars},
  author={Shubham Jain and Abhronil Sengupta and Kaushik Roy and Anand Raghunathan},
  journal={IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems},
  year={2021},
  volume={40},
  pages={326-338}
}
Resistive crossbars have emerged as promising building blocks for realizing DNNs due to their ability to compactly and efficiently realize the dominant DNN computational kernel, viz., vector-matrix multiplication. […] Finally, we develop RxNN, a software framework to evaluate and re-train DNNs on resistive crossbar systems. RxNN is based on the popular Caffe machine learning framework, and we use it to evaluate a suite of large-scale DNNs developed for the ImageNet Challenge (ILSVRC). Our…
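The kernel the abstract refers to maps onto a crossbar through Ohm's and Kirchhoff's laws: input voltages applied along the rows of a programmed conductance matrix produce column currents equal to a vector-matrix product. A minimal NumPy sketch of this ideal (nonideality-free) mapping, with illustrative values only:

```python
import numpy as np

def crossbar_mvm(voltages, conductances):
    """Ideal crossbar vector-matrix multiply.

    voltages:     (rows,) input vector applied on word lines (V)
    conductances: (rows, cols) programmed cell conductances (S)
    Returns the bit-line currents I_j = sum_i V_i * G_ij (A),
    i.e. the vector-matrix product V @ G.
    """
    return voltages @ conductances

# Example: 3 inputs, 2 output columns (values are arbitrary)
V = np.array([1.0, 0.5, 0.2])
G = np.array([[1e-4, 2e-4],
              [3e-4, 1e-4],
              [2e-4, 2e-4]])
I = crossbar_mvm(V, G)  # column currents in amperes
```

Frameworks like RxNN start from this ideal model and then inject device and circuit nonidealities into the computation.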

TxSim: Modeling Training of Deep Neural Networks on Resistive Crossbar Systems

TxSim is proposed, a fast and customizable modeling framework to functionally evaluate DNN training on crossbar-based hardware, considering the impact of nonidealities. It achieves computational efficiency by mapping crossbar evaluations to well-optimized Basic Linear Algebra Subprograms (BLAS) routines, and incorporates speedup techniques to further reduce simulation time with minimal impact on accuracy.
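The BLAS-mapping idea can be illustrated in a few lines: fold the nonidealities (here a hypothetical multiplicative conductance variation, not TxSim's actual models) into an effective weight matrix once, so the crossbar evaluation itself is a single dense matrix multiply that NumPy dispatches to an optimized GEMM:

```python
import numpy as np

def nonideal_forward(x, w, sigma=0.05, rng=None):
    """Sketch of the BLAS-mapping trick: absorb crossbar nonidealities
    (assumed here to be i.i.d. multiplicative conductance variation of
    relative magnitude sigma) into an effective weight matrix, then
    evaluate the whole batch with one GEMM call (the `@` operator)."""
    rng = rng or np.random.default_rng(0)
    w_eff = w * (1.0 + sigma * rng.standard_normal(w.shape))
    return x @ w_eff

x = np.ones((8, 16))          # batch of 8 inputs
w = np.full((16, 4), 0.1)     # ideal weights; ideal output would be 1.6
y = nonideal_forward(x, w)
```

The point is that the perturbation is computed once per weight matrix, while the expensive per-sample work stays inside the optimized matrix-multiply routine.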

X-CHANGR: Changing Memristive Crossbar Mapping for Mitigating Line-Resistance Induced Accuracy Degradation in Deep Neural Networks

This work proposes crossbar re-mapping strategies to mitigate line-resistance induced accuracy degradation in DNNs, without having to re-train the learned weights, unlike most prior works.

Examining the Robustness of Spiking Neural Networks on Non-ideal Memristive Crossbars

This paper conducts a comprehensive analysis of the robustness of SNNs on non-ideal crossbars and shows that repetitive crossbar computations across multiple time-steps induce error accumulation, resulting in a huge performance drop during SNN inference.

Modeling and Mitigating the Interconnect Resistance Issue in Analog RRAM Matrix Computing Circuits

This work develops a physics-based iterative algorithm to quickly model the matrix-vector multiplication (MVM) operation of a crosspoint resistive array with interconnect resistances, thus quadratically reducing the time complexity of circuit simulation.

Analysis and mitigation of parasitic resistance effects for analog in-memory neural network acceleration

This work analyzes how parasitic resistance affects the end-to-end inference accuracy of state-of-the-art convolutional neural networks, and comprehensively studies how various design decisions at the device, circuit, architecture, and algorithm levels affect the system’s sensitivity to parasitic resistance effects.
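A first-order way to see why parasitic resistance degrades accuracy, as the two summaries above describe, is a lumped model in which each cell's current path includes a series parasitic resistance. This is a simplification of the papers' full grid analyses; `r_par` is a hypothetical lumped value, not a figure from either work:

```python
import numpy as np

def effective_conductance(g, r_par):
    """Lumped first-order model: a parasitic series resistance r_par (ohms)
    in the cell's current path reduces the effective conductance from g
    to 1 / (1/g + r_par).  Low-resistance (high-conductance) states are
    distorted the most, which skews the MVM result nonuniformly."""
    return 1.0 / (1.0 / g + r_par)

g = np.array([1e-5, 1e-4, 1e-3])            # programmed conductances (S)
g_eff = effective_conductance(g, r_par=100.0)
relative_error = 1.0 - g_eff / g            # fractional conductance loss
```

Because the error grows with conductance, it cannot be removed by a single global scale factor, which is why the papers above resort to remapping, calibration, or training-time compensation.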

Magnetoresistive Circuits and Systems: Embedded Non-Volatile Memory to Crossbar Arrays

Various tradeoffs and design challenges of MRAM are discussed in three broad application areas: 1) embedded non-volatile memory (eNVMs), 2) crossbar-based analog in-memory computing, and 3) stochastic computing.

Resistive Crossbars as Approximate Hardware Building Blocks for Machine Learning: Opportunities and Challenges

This work describes the design principles of resistive crossbars, including the devices and associated circuits that constitute them, and discusses intrinsic approximations arising from the device and circuit characteristics and study their functional impact on the MVM operation.

NEAT: Non-linearity Aware Training for Accurate and Energy-Efficient Implementation of Neural Networks on 1T-1R Memristive Crossbars

A novel Non-linearity Aware Training (NEAT) method is proposed to address the non-idealities of the 1T-1R crossbar; it finds that each layer has a different weight distribution and in turn requires a different transistor gate voltage to guarantee linear operation.

SEMULATOR: Emulating the Dynamics of Crossbar Array-based Analog Neural System with Regression Neural Networks

This work proposes a methodology, SEMULATOR (SiMULATOR by Emulating the analog computing block), which uses a deep neural network to emulate the behavior of a crossbar-based analog computing system, and shows experimentally and theoretically that it emulates a MAC unit for neural computation.

References


Technology Aware Training in Memristive Neuromorphic Systems for Nonideal Synaptic Crossbars

This paper builds mathematical models of various nonidealities that occur in crossbar implementations such as source resistance, neuron resistance, and chip-to-chip device variations and analyzes their impact on the classification accuracy of a fully connected network (FCN) and convolutional neural network (CNN) trained with Backpropagation algorithm.

Acceleration of Deep Neural Network Training with Resistive Cross-Point Devices: Design Considerations

A concept of resistive processing unit (RPU) devices that can potentially accelerate DNN training by orders of magnitude while using much less power is proposed, which would make it possible to tackle Big Data problems with trillions of parameters that are impossible to address today.

Neural network accelerator design with resistive crossbars: Opportunities and challenges

The prospects for designing hardware accelerators for neural networks using resistive crossbars are highlighted, and the key open challenges and some possible approaches to address them are underscored.

Accelerator-friendly neural-network training: Learning variations and defects in RRAM crossbar

This paper proposes an accelerator-friendly neural-network training method, by leveraging the inherent self-healing capability of the neural network, to prevent the large-weight synapses from being mapped to the abnormal memristors based on the fault/variation distribution in the RRAM crossbar.

Training Deep Convolutional Neural Networks with Resistive Cross-Point Devices

This work shows how to map the convolutional layers to fully connected RPU arrays such that the parallelism of the hardware can be fully utilized in all three cycles of the backpropagation algorithm.

Dot-product engine for neuromorphic computing: Programming 1T1M crossbar to accelerate matrix-vector multiplication

The Dot-Product Engine (DPE) is developed as a high-density, high-power-efficiency accelerator for approximate matrix-vector multiplication; a conversion algorithm is invented to map arbitrary matrix values appropriately to memristor conductances in a realistic crossbar array.
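The conversion step above has to fit arbitrary real-valued matrix entries into a device's limited programmable window. The sketch below shows one common linear scheme for illustration only (it is not the DPE's actual conversion algorithm, and the `G_MIN`/`G_MAX` window is an assumed device range):

```python
import numpy as np

G_MIN, G_MAX = 1e-6, 1e-4   # assumed programmable conductance window (S)

def weights_to_conductances(w):
    """Illustrative linear mapping (not DPE's algorithm): shift and scale
    arbitrary matrix values into [G_MIN, G_MAX].  Returns the conductance
    matrix plus the (scale, offset) needed to undo the mapping after
    readout and recover the original matrix values."""
    w_min, w_max = w.min(), w.max()
    scale = (G_MAX - G_MIN) / (w_max - w_min)
    g = G_MIN + (w - w_min) * scale
    return g, scale, w_min

w = np.array([[-0.5, 0.25],
              [ 1.0, 0.0 ]])
g, scale, offset = weights_to_conductances(w)
```

Realistic conversion algorithms, like the DPE's, additionally account for crossbar nonidealities when choosing the target conductances, rather than applying a purely affine map.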

X-MANN: A Crossbar based Architecture for Memory Augmented Neural Networks

This work proposes X-MANN, a memory-centric crossbar-based architecture that is specialized to match the compute characteristics observed in MANNs, and designs a transposable crossbar processing unit that can efficiently perform the different computational kernels of MANNs.

Sneak-path based test and diagnosis for 1R RRAM crossbar using voltage bias technique

Voltage bias is used to manipulate various distributions of sneak-paths that can screen one or multiple faults out of a 4 × 4 region of memristors at once, and consequently diagnose the exact location of each faulty memristor within three write-read operations.

Analog CMOS-based resistive processing unit for deep neural network training

An analog CMOS-based RPU design (CMOS RPU) is proposed which can store and process data locally and can be operated in a massively parallel manner, and its functionality and feasibility for accelerating DNN training are evaluated.

Mitigating effects of non-ideal synaptic device characteristics for on-chip learning

This study shows that the recognition accuracy of MNIST handwritten digits degrades from ~97% to ~65%, and proposes mitigation strategies, which include smart programming schemes for achieving linear weight update, a dummy column to eliminate the off-state current, and the use of multiple cells for each weight element to alleviate the impact of device variations.
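The last mitigation above, using multiple cells per weight, works because averaging independent device variations shrinks the effective weight error roughly as 1/sqrt(n). A small Monte Carlo sketch, assuming Gaussian multiplicative variation (the variation model and `sigma` are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def programmed_weight(target, n_cells, sigma=0.1):
    """Program the same target value onto n_cells devices, each with
    independent multiplicative variation (assumed Gaussian, relative
    std sigma), and read out their mean as the effective weight."""
    cells = target * (1.0 + sigma * rng.standard_normal(n_cells))
    return cells.mean()

target = 0.5
err_1 = np.std([programmed_weight(target, 1) - target for _ in range(20000)])
err_4 = np.std([programmed_weight(target, 4) - target for _ in range(20000)])
# err_4 should come out close to err_1 / 2 (i.e. 1/sqrt(4))
```

The cost is area and programming time, which is why the study pairs this with programming-scheme and circuit-level fixes rather than relying on redundancy alone.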
...