Training Recurrent Neural Networks against Noisy Computations during Inference

@article{Qin2018TrainingRN,
  title={Training Recurrent Neural Networks against Noisy Computations during Inference},
  author={Minghai Qin and Dejan Vu{\v{c}}ini{\'c}},
  journal={2018 52nd Asilomar Conference on Signals, Systems, and Computers},
  year={2018},
  pages={71-75}
}
  • Minghai Qin, D. Vučinić
  • Published 17 July 2018
  • Computer Science
  • 2018 52nd Asilomar Conference on Signals, Systems, and Computers
We explore the robustness of recurrent neural networks when the computations within the network are noisy. One of the motivations for looking into this problem is to reduce the high power cost of conventional computing of neural network operations through the use of analog neuromorphic circuits. Traditional GPU/CPU-centered deep learning architectures exhibit bottlenecks in power-restricted applications, such as speech recognition in embedded systems. The use of specialized neuromorphic… 
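The title points to making RNN weights tolerant of noisy arithmetic by exposing the network to that noise during training. Purely as an illustration of that idea, the sketch below (PyTorch) trains a small RNN classifier while adding zero-mean Gaussian noise to the recurrent pre-activations at every time step; the additive noise model, the injection point, and all hyperparameters are assumptions made for this sketch, not the paper's exact setup.

# Hedged sketch: train a small RNN while injecting Gaussian noise into its
# recurrent computations, so the learned weights tolerate noisy inference.
# The noise model (additive, zero-mean, fixed sigma on pre-activations) is an
# assumption for illustration only.
import torch
import torch.nn as nn

class NoisyRNNCell(nn.Module):
    def __init__(self, input_size, hidden_size, noise_std=0.1):
        super().__init__()
        self.ih = nn.Linear(input_size, hidden_size)
        self.hh = nn.Linear(hidden_size, hidden_size)
        self.noise_std = noise_std

    def forward(self, x, h):
        pre = self.ih(x) + self.hh(h)
        # Simulate noisy analog computation by perturbing the pre-activation.
        pre = pre + self.noise_std * torch.randn_like(pre)
        return torch.tanh(pre)

class NoisyRNNClassifier(nn.Module):
    def __init__(self, input_size=8, hidden_size=32, num_classes=4, noise_std=0.1):
        super().__init__()
        self.cell = NoisyRNNCell(input_size, hidden_size, noise_std)
        self.readout = nn.Linear(hidden_size, num_classes)
        self.hidden_size = hidden_size

    def forward(self, seq):                      # seq: (batch, time, input_size)
        h = seq.new_zeros(seq.size(0), self.hidden_size)
        for t in range(seq.size(1)):
            h = self.cell(seq[:, t, :], h)
        return self.readout(h)

model = NoisyRNNClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for step in range(200):                          # toy data stands in for a real task
    x = torch.randn(64, 20, 8)
    y = torch.randint(0, 4, (64,))
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()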
Deep Learning for Compute in Memory
  • Computer Science
  • 2020
TLDR
This work shows how fundamental hardware design choices influence the predictive performance of neural networks and how training these models to be hardware-aware can make them more robust for CIM deployment.
Neural Network Training With Stochastic Hardware Models and Software Abstractions
TLDR
S-DDHR is demonstrated and evaluated for a bit-scalable MRAM-based in-memory computing architecture, whose energy/throughput trade-offs explicitly motivate statistical computations.
Stochastic Data-driven Hardware Resilience to Efficiently Train Inference Models for Stochastic Hardware Implementations
TLDR
S-DDHR successfully addresses different samples of stochastic hardware, which would otherwise suffer degraded performance due to hardware variability, for an in-memory-computing architecture based on magnetoresistive random-access memory (MRAM).
Impact of Medical Data Imprecision on Learning Results
TLDR
A model for data imprecision is formed using parameters to control the degree of imprecision; imprecise samples for comparison experiments can be generated from this model, and a group of measures is defined to evaluate the different impacts quantitatively.
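As a toy illustration of that summary only, the snippet below generates imprecise copies of a dataset with a single parameter controlling the perturbation magnitude; the paper's actual imprecision model and evaluation measures are not reproduced here.

# Toy sketch (not the paper's model): generate imprecise copies of a dataset,
# with one parameter controlling the relative perturbation of each measurement.
import numpy as np

def make_imprecise(samples, imprecision, seed=0):
    """Perturb each measurement multiplicatively; imprecision scales the jitter."""
    rng = np.random.default_rng(seed)
    jitter = rng.uniform(-1.0, 1.0, size=samples.shape)
    return samples * (1.0 + imprecision * jitter)

clean = np.random.default_rng(1).normal(size=(100, 5))   # stand-in for real measurements
for level in (0.0, 0.05, 0.2):
    noisy = make_imprecise(clean, level)
    print(level, float(np.abs(noisy - clean).mean()))     # one crude impact measure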
Effect of Batch Normalization on Noise Resistant Property of Deep Learning Models
TLDR
The results show that the presence of a batch normalization layer negatively impacts the noise-resistant property of deep learning models, and that the impact grows with the number of batch normalization layers.

References

SHOWING 1-10 OF 18 REFERENCES
Long short-term memory recurrent neural network architectures for large scale acoustic modeling
TLDR
The first distributed training of LSTM RNNs using asynchronous stochastic gradient descent optimization on a large cluster of machines is introduced, and it is shown that a two-layer deep LSTM RNN where each LSTM layer has a linear recurrent projection layer can exceed state-of-the-art speech recognition performance.
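The recurrent projection layer mentioned here can be sketched with PyTorch's proj_size option, which projects the hidden state to a lower dimension before it is fed back; this is a convenience for illustration, not the implementation used in the cited work, and the layer sizes below are made up.

# Two stacked LSTM layers, each with a linear recurrent projection from
# hidden_size down to proj_size, so the recurrent weight matrices stay small.
import torch
import torch.nn as nn

lstmp = nn.LSTM(input_size=40, hidden_size=1024, proj_size=256,
                num_layers=2, batch_first=True)
features = torch.randn(8, 100, 40)           # (batch, frames, acoustic features)
outputs, (h_n, c_n) = lstmp(features)
print(outputs.shape)                          # torch.Size([8, 100, 256]): the projected state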
Making Memristive Neural Network Accelerators Reliable
TLDR
A new error correction scheme for analog neural network accelerators based on arithmetic codes is proposed; it encodes the data through multiplication by an integer, which preserves addition operations through the distributive property, and reduces the respective misclassification rates by 1.5x and 1.1x.
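The encode-by-multiplication idea can be shown with a tiny AN-code example; the multiplier below is an arbitrary choice, and the cited scheme's actual parameters and decoder are not reproduced.

# Tiny AN-code illustration: encoding x as A*x preserves addition
# (A*x1 + A*x2 = A*(x1 + x2)), so a sum computed on encoded operands is still
# a valid codeword, and a result not divisible by A reveals a computation error.
A = 17                                  # assumed integer multiplier

def encode(x):
    return A * x

def check_and_decode(y):
    if y % A != 0:
        raise ValueError("computation error detected")
    return y // A

a, b = 5, 9
s = encode(a) + encode(b)               # addition carried out on encoded values
assert check_and_decode(s) == a + b     # distributivity keeps the code valid

corrupted = s + 3                       # a small additive fault
try:
    check_and_decode(corrupted)
except ValueError as e:
    print(e)                            # the fault is caught by the divisibility check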
Improving the Robustness of Deep Neural Networks via Stability Training
TLDR
This paper presents a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping.
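A minimal sketch of the stability-training idea follows, assuming an additive-noise perturbation and a KL penalty between outputs on the clean and perturbed inputs; the distortion types and loss weighting in the cited paper may differ.

# Minimal stability-training sketch: the total loss is the task loss on the
# clean input plus a penalty that keeps outputs on a perturbed copy close to
# outputs on the original. Perturbation and weighting here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
alpha, sigma = 0.1, 0.05                          # stability weight, noise level

for step in range(100):
    x = torch.rand(32, 1, 28, 28)                 # toy images stand in for real data
    y = torch.randint(0, 10, (32,))
    x_pert = (x + sigma * torch.randn_like(x)).clamp(0, 1)

    logits = model(x)
    logits_pert = model(x_pert)
    task_loss = F.cross_entropy(logits, y)
    # Stability term: output distributions on clean vs. perturbed inputs should match.
    stability = F.kl_div(F.log_softmax(logits_pert, dim=1),
                         F.softmax(logits, dim=1), reduction="batchmean")
    loss = task_loss + alpha * stability
    opt.zero_grad()
    loss.backward()
    opt.step()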
Deep learning in neural networks: An overview
LSTM: A Search Space Odyssey
TLDR
This paper presents the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling; it observes that the studied hyperparameters are virtually independent and derives guidelines for their efficient adjustment.
Rectifier Nonlinearities Improve Neural Network Acoustic Models
TLDR
This work explores the use of deep rectifier networks as acoustic models for the 300-hour Switchboard conversational speech recognition task, and analyzes hidden layer representations to quantify differences in how rectified linear (ReLU) units encode inputs as compared to sigmoidal units.
Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit
TLDR
The model of cortical processing is presented as an electronic circuit that emulates this hybrid operation, and so is able to perform computations that are similar to stimulus selection, gain modulation and spatiotemporal pattern generation in the neocortex.
Towards Evaluating the Robustness of Neural Networks
TLDR
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that are successful on both distilled and undistilled neural networks with 100% probability.
Theory of the backpropagation neural network
  • R. Hecht-Nielsen
  • Computer Science
    International 1989 Joint Conference on Neural Networks
  • 1989
A direct adaptive method for faster backpropagation learning: the RPROP algorithm
TLDR
A learning algorithm for multilayer feedforward networks, RPROP (resilient propagation), is proposed that performs a local adaptation of the weight updates according to the behavior of the error function, overcoming the inherent disadvantages of pure gradient descent.
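The core update rule is easy to sketch: only the sign of each partial derivative is used, and every weight keeps its own step size that grows while the sign is stable and shrinks when it flips. The snippet below implements a simplified variant without weight backtracking, with commonly quoted constants assumed rather than taken from the reference.

# Sketch of the core RPROP update for one weight vector: step sizes adapt
# per weight from the sign of successive gradients (grow on agreement, shrink
# on a sign flip); the gradient magnitude itself is never used.
import numpy as np

def rprop_step(w, grad, prev_grad, step,
               eta_plus=1.2, eta_minus=0.5, step_min=1e-6, step_max=50.0):
    agree = grad * prev_grad
    step = np.where(agree > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(agree < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(agree < 0, 0.0, grad)          # skip the update after a sign flip
    w = w - np.sign(grad) * step
    return w, grad, step

# Usage on a toy quadratic objective f(w) = sum(w**2):
w = np.array([3.0, -2.0])
prev_grad = np.zeros_like(w)
step = np.full_like(w, 0.1)
for _ in range(50):
    grad = 2.0 * w                                  # gradient of the toy objective
    w, prev_grad, step = rprop_step(w, grad, prev_grad, step)
print(w)                                            # close to the minimum at 0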