• Corpus ID: 231749524

Bit Error Tolerance Metrics for Binarized Neural Networks

@article{Buschjger2021BitET,
  title={Bit Error Tolerance Metrics for Binarized Neural Networks},
  author={Sebastian Buschj{\"a}ger and Jian-Jia Chen and Kuan-Hsun Chen and Mario G{\"u}nzel and Katharina Morik and Rodion Novkin and Lukas Pfahler and Mikail Yayla},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.01344}
}
To reduce the resource demand of neural network (NN) inference systems, it has been proposed to use approximate memory, in which the supply voltage and the timing parameters are tuned to trade accuracy for energy consumption and performance. Tuning these parameters aggressively leads to bit errors, which NNs can tolerate when bit flips are injected during training. However, bit flip training, which is the state of the art for achieving bit error tolerance, does not scale well; it leads…
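To make the training-time countermeasure concrete, here is a minimal sketch of bit flip injection for binarized weights, assuming {-1, +1} weights stored in a NumPy array; the error rate, the layer shape, and the surrounding training loop are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def inject_bit_flips(w_bin, p):
    """Flip each binarized weight independently with probability p."""
    flips = rng.random(w_bin.shape) < p    # True where a bit error occurs
    return np.where(flips, -w_bin, w_bin)  # flipping a bit negates a {-1,+1} weight

# Applied to the binarized weights on every forward pass during training,
# this exposes the network to the error distribution it will later see on
# approximate memory at inference time.
w = np.sign(rng.standard_normal((128, 64)))  # toy binarized weight matrix
w_noisy = inject_bit_flips(w, p=0.01)        # e.g. a 1% bit error rate
```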
1 Citation

FeFET-Based Binarized Neural Networks Under Temperature-Dependent Bit Errors

TLDR
The temperature-dependent bit error model of FeFET memories is revealed, its effect on BNN accuracy is evaluated, and two countermeasures are proposed.

References

Showing 1–10 of 14 references

Outstanding Bit Error Tolerance of Resistive RAM-Based Binarized Neural Networks

TLDR
This work shows, through simulations of networks on the MNIST and CIFAR10 tasks, that BNNs can tolerate bit errors to an outstanding level, up to a bit error rate of 4 × 10⁻², which can allow reducing RRAM programming energy by a factor of 30.
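As a hedged illustration of what such a bit error sweep looks like, the sketch below injects independent bit flips into a toy single-layer binarized model on random data and reports how many predictions change; the model, data, and error rates are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sign(rng.standard_normal((1000, 256)))  # binarized inputs
w = np.sign(rng.standard_normal((256, 10)))    # binarized weights
clean_pred = np.argmax(x @ w, axis=1)          # error-free predictions

for p in [1e-3, 1e-2, 4e-2, 1e-1]:
    flips = rng.random(w.shape) < p            # independent bit errors at rate p
    w_err = np.where(flips, -w, w)
    pred = np.argmax(x @ w_err, axis=1)
    agreement = np.mean(pred == clean_pred)    # fraction of unchanged decisions
    print(f"bit error rate {p:.0e}: {agreement:.1%} of predictions unchanged")
```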

SRAM voltage scaling for energy-efficient convolutional neural networks

  • Lita Yang, B. Murmann
  • Computer Science
    2017 18th International Symposium on Quality Electronic Design (ISQED)
  • 2017
TLDR
This paper extensively studies the effectiveness of exploiting the error resilience of ConvNets by accepting bit errors under reduced supply voltages, and shows that further savings are possible by injecting bit errors during ConvNet training.

Implementing Binarized Neural Networks with Magnetoresistive RAM without Error Correction

TLDR
For BNNs, ST-MRAMs can be programmed with weak (low-energy) programming conditions and without error correcting codes; it is shown that this allows the use of low-energy, low-area ST-MRAM cells, and that the energy savings at the system level can reach a factor of two.

Binarized Neural Networks

TLDR
A binary matrix multiplication GPU kernel is written with which it is possible to run the MNIST BNN 7 times faster than with an unoptimized GPU kernel, without suffering any loss in classification accuracy.
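The speedup comes from replacing floating-point multiply-accumulate with XNOR and popcount on packed bits. The sketch below shows the underlying identity for a {-1, +1} dot product, assuming bit-packed uint8 words in NumPy; the real kernel's GPU tiling and packing layout are beyond this illustration.

```python
import numpy as np

def pack_bits(v):
    """Map a {-1,+1} vector to packed bits (1 bit per element)."""
    return np.packbits((v > 0).astype(np.uint8))

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n via XNOR + popcount."""
    xnor = ~(a_bits ^ b_bits)                # bit is 1 where the signs agree
    matches = np.unpackbits(xnor)[:n].sum()  # popcount over the n valid bits
    return 2 * int(matches) - n              # agreements minus disagreements

rng = np.random.default_rng(0)
a = np.sign(rng.standard_normal(256))
b = np.sign(rng.standard_normal(256))
assert binary_dot(pack_bits(a), pack_bits(b), 256) == int(a @ b)
```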

Fault and Error Tolerance in Neural Networks: A Review

TLDR
A survey of fault tolerance in neural networks is presented, mainly focusing on well-established passive techniques that exploit and improve, by design, this potential but limited intrinsic property of neural models, particularly feedforward neural networks.

EIE: Efficient Inference Engine on Compressed Deep Neural Network

  • Song Han, Xingyu Liu, W. Dally
  • Computer Science
    2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA)
  • 2016
TLDR
An energy-efficient inference engine (EIE) performs inference directly on a compressed network model and accelerates the resulting sparse matrix-vector multiplication with weight sharing; it is 189x and 13x faster than CPU and GPU implementations, respectively, of the same DNN without compression.
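For intuition, the following is a minimal sketch of the core operation EIE accelerates: sparse matrix-vector multiplication where each nonzero stores only a small codebook index (weight sharing). The CSR-style layout, the 16-entry codebook, and the sizes here are illustrative assumptions, not EIE's exact data format.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal(16).astype(np.float32)  # shared weight values
# CSR-style storage: per-row extents, column indices, and codebook indices
indptr = np.array([0, 2, 5])        # 2 rows of nonzeros
cols   = np.array([1, 3, 0, 2, 3])  # column index of each nonzero
codes  = np.array([4, 9, 1, 4, 7])  # 4-bit codebook index of each nonzero

def sparse_shared_matvec(x):
    y = np.zeros(len(indptr) - 1, dtype=np.float32)
    for row in range(len(y)):
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += codebook[codes[k]] * x[cols[k]]  # decode weight, accumulate
    return y

print(sparse_shared_matvec(np.ones(4, dtype=np.float32)))
```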

EDEN: Enabling Energy-Efficient, High-Performance Deep Neural Network Inference Using Approximate DRAM

TLDR
EDEN is the first general framework that reduces DNN energy consumption and evaluation latency by using approximate DRAM devices while strictly meeting a user-specified target DNN accuracy; it reliably improves the error resiliency of the DNN by an order of magnitude.

STT-RAM Buffer Design for Precision-Tunable General-Purpose Neural Network Accelerator

TLDR
This work demonstrates the concept of capacity/precision-tunable STT-RAM memory for the emerging reconfigurable deep NNA, and elaborates on the data mapping and storage mode switching policy in STT-RAM memory to achieve the best energy efficiency of approximate computing.

Penalty terms for fault tolerance

  • P. J. Edwards, A. Murray
  • Computer Science
    Proceedings of International Conference on Neural Networks (ICNN'97)
  • 1997
TLDR
Results from MLPs trained on two problems, one artificial and the other a real-world task, show that fault tolerance can be achieved for a realistic fault model via the use of penalty terms.
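As a hedged sketch of the general idea, the snippet below adds a simple weight-magnitude penalty to a least-squares loss so the trained model spreads its reliance across weights; this penalty is an illustrative stand-in, not the exact fault-model-derived term from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))  # toy inputs
y = rng.standard_normal(100)       # toy targets
w = np.zeros(8)
lam, lr = 0.1, 0.01                # penalty strength, learning rate

for _ in range(200):
    err = X @ w - y                              # residuals of the task loss
    grad = 2 * X.T @ err / len(y) + 2 * lam * w  # gradient of loss + penalty
    w -= lr * grad

# Penalizing large weights caps each weight's contribution, so a stuck-at
# or flipped weight shifts the output by a smaller, bounded amount.
print(w)
```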

A novel learning algorithm which improves the partial fault tolerance of multilayer neural networks