• Corpus ID: 195766914

X-CHANGR: Changing Memristive Crossbar Mapping for Mitigating Line-Resistance Induced Accuracy Degradation in Deep Neural Networks

  • Amogh Agrawal, Chankyu Lee, Kaushik Roy
There is widespread interest in emerging technologies, especially resistive crossbars, for accelerating Deep Neural Networks (DNNs). Resistive crossbars offer a highly parallel and efficient matrix-vector-multiplication (MVM) operation. Since MVM is the dominant operation in DNNs, crossbars are ideally suited to accelerate them. However, various sources of device and circuit non-idealities introduce errors in the MVM output, thereby reducing DNN accuracy. Towards that end, we propose crossbar re-mapping…
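The ideal crossbar MVM described above can be sketched in a few lines. This is a minimal illustration, not code from the paper: weights are stored as conductances, inputs are applied as row voltages, and each column current is the dot product given by Ohm's and Kirchhoff's laws. All numeric values are hypothetical.

```python
import numpy as np

# Ideal (non-ideality-free) memristive crossbar: each column current is
# I_j = sum_i V_i * G[i, j], i.e. one analog matrix-vector multiplication.
rng = np.random.default_rng(0)
G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # 4x3 array of conductances (siemens)
V = np.array([0.1, 0.2, 0.0, 0.3])        # input voltages on the 4 rows (volts)

I = V @ G  # column currents: the analog MVM output

# In the ideal case this matches the digital dot product exactly.
assert np.allclose(I, [sum(V[i] * G[i, j] for i in range(4)) for j in range(3)])
print(I)
```

The non-idealities discussed throughout this list (line resistance, device variations, sneak paths) all perturb this clean `I = V @ G` relationship.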
GENIEx: A Generalized Approach to Emulating Non-Ideality in Memristive Xbars using Neural Networks
A Generalized Approach to Emulating Non-Ideality in Memristive Crossbars using Neural Networks (GENIEx), which accurately captures the data-dependent nature of non-idealities.
SWIPE: Enhancing Robustness of ReRAM Crossbars for In-memory Computing
The SWIPE method is proposed, which achieves high-accuracy writes for crossbar-based in-memory architectures at 5× to 10× lower cost than standard program-verify methods, and can be augmented with injection-based training methods to achieve even greater enhancements in robustness.
Design of High Robustness BNN Inference Accelerator Based on Binary Memristors
A select-column scheme for a BNN inference accelerator is demonstrated that shows high robustness and achieves a high recognition accuracy of 98.31% on the MNIST data set; a bit-error model is also proposed to investigate the impact of device errors on recognition accuracy.
Magnetoresistive Circuits and Systems: Embedded Non-Volatile Memory to Crossbar Arrays
Various tradeoffs and design challenges of MRAM are discussed in three broad application areas: 1) embedded non-volatile memory (eNVMs), 2) crossbar-based analog in-memory computing, and 3) stochastic computing.
WOx-Based Synapse Device With Excellent Conductance Uniformity for Hardware Neural Networks
Hardware neural networks (HNNs), which use synapse device (SD) arrays, show promise as an approach to energy-efficient parallel computation of massive vector-matrix multiplications. To maximize the…
Resistive Crossbars as Approximate Hardware Building Blocks for Machine Learning: Opportunities and Challenges
This work describes the design principles of resistive crossbars, including the devices and associated circuits that constitute them, and discusses intrinsic approximations arising from the device and circuit characteristics and study their functional impact on the MVM operation.
In-Memory Computing in Emerging Memory Technologies for Machine Learning: An Overview
An overview of in-memory computing in NVM crossbars for ML workloads is presented, along with a discussion of how the high storage density of NVM crossbars can enable spatially distributed architectures.
Efficiency-driven Hardware Optimization for Adversarially Robust Neural Networks
This paper shows how the bit-errors in the 6T cells of hybrid 6T-8T memories minimize the adversarial perturbations in a DNN, and finds that for different configurations of 8T-6T ratios and scaled Vdd operation, noise incurred in the hybrid memory architectures is bound within specific limits.
CASH-RAM: Enabling In-Memory Computations for Edge Inference Using Charge Accumulation and Sharing in Standard 8T-SRAM Arrays
This paper proposes an in-memory computing primitive for accelerating dot-products within standard 8T-SRAM caches using charge-sharing, and shows that with the proposed compensation approaches, the accuracy degradation is within 1% and 5% of the baseline accuracy for the MNIST and CIFAR-10 datasets.
DetectX—Adversarial Input Detection Using Current Signatures in Memristive XBar Arrays
The experiments show that DetectX is 10x-25x more energy efficient and immune to dynamic adversarial attacks compared to previous state-of-the-art works, and achieves high detection performance for strong white-box and black-box attacks.


RxNN: A Framework for Evaluating Deep Neural Networks on Resistive Crossbars
This article presents RxNN, a fast and accurate simulation framework to evaluate large-scale DNNs on resistive crossbar systems. RxNN is implemented by extending the Caffe machine learning framework, and it enables fast model-in-the-loop retraining of DNNs to partially mitigate the accuracy degradation.
Accelerator-friendly neural-network training: Learning variations and defects in RRAM crossbar
  • Lerong Chen, Jiawen Li, +4 authors Li Jiang
  • Engineering, Computer Science
    Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017
  • 2017
This paper proposes an accelerator-friendly neural-network training method that leverages the inherent self-healing capability of the neural network to prevent large-weight synapses from being mapped to abnormal memristors, based on the fault/variation distribution in the RRAM crossbar.
Technology Aware Training in Memristive Neuromorphic Systems for Nonideal Synaptic Crossbars
This paper builds mathematical models of various non-idealities that occur in crossbar implementations, such as source resistance, neuron resistance, and chip-to-chip device variations, and analyzes their impact on the classification accuracy of a fully connected network (FCN) and a convolutional neural network (CNN) trained with the backpropagation algorithm.
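A deliberately simplified first-order sketch (not the paper's actual models) conveys why source resistance degrades accuracy: the driver of each row sees the row's total conductance as a load, so the delivered voltage sags and the column currents fall below their ideal values. The resistance value here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
G = rng.uniform(1e-6, 1e-4, size=(8, 8))  # conductance matrix (siemens)
V = rng.uniform(0.01, 0.3, size=8)        # ideal input voltages (volts)

I_ideal = V @ G  # error-free MVM output

# First-order source-resistance model: the voltage actually delivered to
# row i is attenuated by the total conductance loading that row's driver.
R_source = 1e3  # ohms; hypothetical value for illustration
V_eff = V / (1.0 + R_source * G.sum(axis=1))
I_degraded = V_eff @ G

rel_err = (I_ideal - I_degraded) / I_ideal
print(rel_err)  # data-dependent error: this is what technology-aware training absorbs
```

Because the error depends on the stored conductances and the inputs, retraining with such models in the loop (as in this paper and RxNN below) lets the network compensate for it.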
TraNNsformer: Neural network transformation for memristive crossbar based neuromorphic system design
The proposed TraNNsformer is an integrated training framework that transforms DNNs to enable their efficient realization on MCA-based systems and is a technology-aware framework that allows mapping a given DNN to any MCA size permissible by the memristive technology for reliable operations.
Rx-Caffe: Framework for evaluating and training Deep Neural Networks on Resistive Crossbars
This work presents a fast and accurate simulation framework to enable training and evaluation of large-scale DNNs on resistive crossbar based hardware fabrics and proposes a Fast Crossbar Model (FCM) that accurately captures the errors arising due to non-idealities while being four-to-five orders of magnitude faster than circuit simulation.
Memristive Crossbar Mapping for Neuromorphic Computing Systems on 3D IC
E3D-FNC, an enhanced three-dimensional (3D) floorplanning framework for neuromorphic computing systems, is proposed, in which neuron clustering and layer assignment are considered interactively; it achieves highly hardware-efficient designs compared to the state of the art.
Overview of Selector Devices for 3-D Stackable Cross Point RRAM Arrays
Cross-point RRAM arrays are an emerging area for future memory devices due to their high density and excellent scalability. The sneak-path problem is the main disadvantage of cross-point structures, which…
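The sneak-path problem mentioned above can be shown with a toy resistor calculation (hypothetical values, not from the paper): in a selector-less 2×2 cross-point array, reading one cell also drives current through the series path formed by the three unselected cells, which sits electrically in parallel with the target cell.

```python
# Toy 2x2 selector-less cross-point array: reading cell (0,0) with the
# unselected lines floating also conducts through the series sneak path
# (0,1) -> (1,1) -> (1,0), in parallel with the target cell.

R_LRS, R_HRS = 1e4, 1e6            # low / high resistance states (ohms)

R_target = R_HRS                   # the cell we want to read stores HRS
R_sneak = R_LRS + R_LRS + R_LRS    # three LRS neighbours in series

# Parallel combination seen at the read terminals.
R_measured = 1.0 / (1.0 / R_target + 1.0 / R_sneak)
print(f"target: {R_target:.0f} ohm, measured: {R_measured:.0f} ohm")
# The measured resistance collapses toward R_sneak (30 kohm), so the HRS
# cell can be misread as LRS -- hence the per-cell selector devices that
# this survey reviews.
```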
A Memristor Crossbar Based Computing Engine Optimized for High Speed and Accuracy
  • Chenchen Liu, Qing Yang, +7 authors Hai Helen Li
  • Computer Science
    2016 IEEE Computer Society Annual Symposium on VLSI (ISVLSI)
  • 2016
This work proposes a new memristor-crossbar-based computing engine design that leverages a current-sensing scheme and increases recognition accuracy by 8.1% (to 94.6%); its performance and effectiveness were examined through the implementation of a neural network for pattern recognition on the MNIST database.
Memristor Crossbar-Based Neuromorphic Computing System: A Case Study
The results show that the hardware-based training scheme proposed in the paper, applied to brain-state-in-a-box (BSB) neural networks, can alleviate and even cancel out the majority of the noise issue.
ISAAC: A Convolutional Neural Network Accelerator with In-Situ Analog Arithmetic in Crossbars
This work explores an in-situ processing approach, where memristor crossbar arrays not only store input weights, but are also used to perform dot-product operations in an analog manner.