Diminished-1 Fermat Number Transform for Integer Convolutional Neural Networks

@inproceedings{Baozhou2019Diminished1FN,
  title={Diminished-1 Fermat Number Transform for Integer Convolutional Neural Networks},
  author={Zhu Baozhou and Nauman Ahmed and Johan Peltenburg and Koen Bertels and Zaid Al-Ars},
  booktitle={2019 IEEE 4th International Conference on Big Data Analytics (ICBDA)},
  year={2019},
  pages={47--52}
}
Convolutional Neural Networks (CNNs) are a class of widely used deep artificial neural networks. However, training large CNNs to produce state-of-the-art results can take a long time. In addition, the compute time of the inference stage for trained networks must be reduced to make them usable in real-time applications. To achieve this, reduced-precision integer formats such as INT8 and INT16 are used to create Integer Convolutional Neural Networks (ICNNs), allowing them to be…
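As background for the transform the title refers to, the sketch below shows an ordinary Fermat Number Transform used for exact integer circular convolution (the core operation in a convolution layer). It works over the Fermat prime F_4 = 2^16 + 1 with 3 as a primitive root, and uses plain residue arithmetic rather than the paper's diminished-1 encoding; the function names are illustrative, not from the paper.

```python
# Sketch (not the paper's implementation): exact integer circular
# convolution via a number-theoretic transform over the Fermat prime
# F_4 = 2^16 + 1. Uses ordinary residues, not diminished-1 encoding.
P = 2**16 + 1   # Fermat prime F_4 = 65537
G = 3           # a primitive root modulo F_4

def ntt(a, root):
    """Naive O(n^2) number-theoretic transform of a with the given root."""
    n = len(a)
    return [sum(a[j] * pow(root, i * j, P) for j in range(n)) % P
            for i in range(n)]

def fnt_circular_conv(a, b):
    """Circular convolution of equal-length integer sequences mod P."""
    n = len(a)
    assert (P - 1) % n == 0, "transform length must divide P - 1"
    w = pow(G, (P - 1) // n, P)      # primitive n-th root of unity mod P
    A, B = ntt(a, w), ntt(b, w)
    C = [(x * y) % P for x, y in zip(A, B)]     # pointwise product
    w_inv = pow(w, P - 2, P)         # inverses via Fermat's little theorem
    n_inv = pow(n, P - 2, P)
    return [(c * n_inv) % P for c in ntt(C, w_inv)]
```

Zero-padding both inputs to twice the filter length makes the circular convolution equal the linear one, and as long as every output stays below P the result is exact, which is what makes the transform attractive for INT8/INT16 inference.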

References

Publications referenced by this paper (4 of 16 references shown).

Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference

  • 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
  • 2017

Google takes unconventional route with homegrown machine learning chips

S. Higginbotham
  • Next Platform, May 2016

Nvidia pushes deep learning inference with new pascal gpus

T. Morgan
  • Next Platform, September 2016

Fast Algorithms for Convolutional Neural Networks

  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2015