Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks

@article{Elsayed2017HardwareEfficientOL,
  title={Hardware-Efficient On-line Learning through Pipelined Truncated-Error Backpropagation in Binary-State Networks},
  author={Hesham Mostafa Elsayed and Bruno U. Pedroni and Sadique Sheik and Gert Cauwenberghs},
  journal={Frontiers in Neuroscience},
  year={2017}
}
Artificial neural networks (ANNs) trained using backpropagation are powerful learning architectures that have achieved state-of-the-art performance in various benchmarks. Significant effort has been devoted to developing custom silicon devices to accelerate inference in ANNs. Accelerating the training phase, however, has attracted relatively little attention. In this paper, we describe a hardware-efficient on-line learning technique for feedforward multi-layer ANNs that is based on pipelined…
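To make the technique named in the title concrete, below is a minimal, illustrative sketch of training a network with binary (+1/−1) hidden states using a sign-truncated error signal in the backward pass. This is an assumption-laden toy, not the paper's exact pipelined scheme: the choice to truncate the backpropagated error to its sign and to pass gradients straight through the binarization are illustrative stand-ins.

```python
# Illustrative sketch only: one-hidden-layer network with binary (+1/-1)
# hidden states and a sign-truncated backward error signal. NOT the
# paper's exact pipelined algorithm; truncation-to-sign and the
# straight-through backward pass are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def binarize(x):
    """Binary-state activation: maps pre-activations to +1 or -1."""
    return np.where(x >= 0.0, 1.0, -1.0)

def truncate_error(e):
    """Truncate the backpropagated error to its sign (hardware-cheap:
    no multipliers needed for the resulting weight update)."""
    return np.sign(e)

# Tiny network: 4 inputs -> 8 binary hidden units -> 1 linear output.
W1 = rng.normal(0.0, 0.5, (8, 4))
W2 = rng.normal(0.0, 0.5, (1, 8))
lr = 0.01

def train_step(x, target):
    global W1, W2
    # Forward pass with binary hidden states.
    h = binarize(W1 @ x)
    y = W2 @ h
    # Output error and its sign-truncated version for the hidden layer
    # (gradient of binarize() passed straight through).
    e2 = y - target
    e1 = truncate_error(W2.T @ e2)
    # Weight updates from (possibly truncated) errors.
    W2 -= lr * np.outer(e2, h)
    W1 -= lr * np.outer(e1, x)
    return float((e2 ** 2).sum())

x = rng.normal(size=4)
losses = [train_step(x, np.array([1.0])) for _ in range(50)]
print(losses[0], losses[-1])  # loss typically shrinks on this toy problem
```

Because the truncated error carries only sign information, the hidden-layer update reduces to adding or subtracting scaled copies of the input, which is the kind of multiplier-free operation that motivates hardware-efficient training schemes.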

Citations

Publications citing this paper (showing 1-2 of 2).

Recursive Binary Neural Network Training Model for Efficient Usage of On-Chip Memory

  • IEEE Transactions on Circuits and Systems I: Regular Papers
  • 2019
Cites background and methods; highly influenced.

FPGA-based Acceleration of Binary Neural Network Training with Minimized Off-Chip Memory Access

Pavan Kumar Chundi, Peiye Liu, Sangsu Park, Seho Lee, Mingoo Seok
  • 2019 IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED)
  • 2019
Cites background.

References

Publications referenced by this paper (showing 1-10 of 47).

Supervised Learning Based on Temporal Coding in Spiking Neural Networks

  • IEEE Transactions on Neural Networks and Learning Systems
  • 2018

Fast classification using sparsely active spiking networks

  • 2017 IEEE International Symposium on Circuits and Systems (ISCAS)
  • 2017