Flexible Modularized Artificial Neural Network Implementation on FPGA

@article{Cosmas2018FlexibleMA,
  title={Flexible Modularized Artificial Neural Network Implementation on FPGA},
  author={Kiruki Cosmas and Ken'ichi Asami},
  journal={2018 5th International Conference on Soft Computing \& Machine Intelligence (ISCMI)},
  year={2018},
  pages={1-5}
}
This work presents a parameterized and modularized approach to implementing an artificial neural network (ANN) on a field-programmable gate array (FPGA). The design investigates how to efficiently model an ANN that is easily adaptable to various applications with minimal modifications to the hardware description language (HDL) code. Verilog HDL has been used to model the network. Fixed-point precision and activation function implementations have been investigated to monitor FPGA resource…
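The abstract does not reproduce the authors' HDL, but the sketch below illustrates what a parameterized, modularized neuron module of this kind might look like in Verilog; the module name, port layout, and fixed-point parameters (N_IN, DATA_W, FRAC_W) are illustrative assumptions, not the paper's code. Overriding the parameters at instantiation is what lets one source file serve different layer sizes and precisions.

// Hypothetical sketch of a parameterized neuron (not the authors' code).
// DATA_W and FRAC_W set the signed Qm.n fixed-point format; N_IN sets the
// fan-in, so the same source can be reused by overriding parameters.
module neuron #(
    parameter N_IN   = 4,    // number of inputs to this neuron
    parameter DATA_W = 16,   // total word width (signed)
    parameter FRAC_W = 8     // fractional bits
) (
    input  wire                          clk,
    input  wire                          rst,
    input  wire signed [N_IN*DATA_W-1:0] x_flat,  // packed input vector
    input  wire signed [N_IN*DATA_W-1:0] w_flat,  // packed weight vector
    input  wire signed [DATA_W-1:0]      bias,
    output reg  signed [DATA_W-1:0]      y        // pre-activation output
);
    integer i;
    reg signed [2*DATA_W-1:0] acc;

    always @(posedge clk) begin
        if (rst) begin
            y <= 0;
        end else begin
            // align the bias to the 2*FRAC_W fractional bits of the products
            acc = $signed({{DATA_W{bias[DATA_W-1]}}, bias}) <<< FRAC_W;
            for (i = 0; i < N_IN; i = i + 1) begin
                acc = acc + $signed(x_flat[i*DATA_W +: DATA_W]) *
                            $signed(w_flat[i*DATA_W +: DATA_W]);
            end
            y <= acc >>> FRAC_W;  // drop extra fractional bits; no saturation
        end
    end
endmodule

A layer can then be built by instantiating this module once per neuron, or by time-multiplexing a single instance, which is one plausible reading of the "modularized" structure the abstract describes.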


References

The Impact of Arithmetic Representation on Implementing MLP-BP on FPGAs: A Study
The results show that an MLP-BP network uses fewer clock cycles and consumes less FPGA area when implemented in a fixed-point (FXP) format than a larger and slower compilation in a floating-point (FLP) format of similar data-representation width in bits, or of similar precision and range.
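As background for the FXP-versus-FLP comparison, the sketch below shows why fixed-point arithmetic is cheap on an FPGA: a product of two Qm.n operands carries 2n fractional bits, and a single arithmetic shift realigns it, whereas floating point needs normalization and exponent handling. The Q8.8 layout and module name are assumptions for illustration, not taken from the cited study.

// Illustrative Q8.8 fixed-point multiply (not from the cited paper).
// Two signed Q8.8 operands give a Q16.16 product; shifting right by the
// number of fractional bits returns the result to Q8.8.
module fxp_mul #(
    parameter DATA_W = 16,
    parameter FRAC_W = 8
) (
    input  wire signed [DATA_W-1:0] a,
    input  wire signed [DATA_W-1:0] b,
    output wire signed [DATA_W-1:0] p
);
    wire signed [2*DATA_W-1:0] full = a * b;  // maps to a DSP slice
    assign p = full >>> FRAC_W;               // realign; truncates, no saturation
endmodule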
From high-level deep neural models to FPGAs
DnnWeaver is a framework that automatically generates a synthesizable accelerator for a given (DNN, FPGA) pair from a high-level specification in Caffe, matching the accelerator to the needs of the DNN while providing high performance and efficiency on the target FPGA.
Efficient digital implementation of the sigmoid function for reprogrammable logic
Four previously published piecewise linear approximations and one piecewise second-order approximation of the sigmoid function are compared with SIG-sigmoid, a purely combinational approximation; the best performance is achieved by SIG-sigmoid.
Design and Analysis of a Hardware CNN Accelerator
A systolic-array-based architecture called ConvAU is designed and implemented to efficiently accelerate dense matrix multiplication in CNNs; ConvAU gives a 200x improvement in TOPS/W compared with an NVIDIA K80 GPU and a 1.9x improvement compared with the TPU.
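ConvAU's internal design is not reproduced in this summary; the module below is a generic weight-stationary processing element of the kind used in systolic matrix-multiply arrays, with names and bit widths chosen as assumptions for illustration. A full array chains such PEs so that activations flow horizontally and partial sums flow vertically, one hop per clock.

// Generic weight-stationary systolic processing element (illustrative only;
// not ConvAU's actual design). Each cycle the PE adds w*a_in to the incoming
// partial sum and forwards the activation and new partial sum to neighbours.
module systolic_pe #(
    parameter DATA_W = 8,
    parameter ACC_W  = 32
) (
    input  wire                     clk,
    input  wire                     rst,
    input  wire                     load_w,   // latch a new stationary weight
    input  wire signed [DATA_W-1:0] w_in,
    input  wire signed [DATA_W-1:0] a_in,     // activation from left neighbour
    input  wire signed [ACC_W-1:0]  psum_in,  // partial sum from above
    output reg  signed [DATA_W-1:0] a_out,    // activation to right neighbour
    output reg  signed [ACC_W-1:0]  psum_out  // partial sum to PE below
);
    reg signed [DATA_W-1:0] w;

    always @(posedge clk) begin
        if (rst) begin
            w        <= 0;
            a_out    <= 0;
            psum_out <= 0;
        end else begin
            if (load_w)
                w <= w_in;
            a_out    <= a_in;
            psum_out <= psum_in + w * a_in;
        end
    end
endmodule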
Piecewise linear approximation applied to nonlinear function of a neural network
An efficient piecewise linear approximation of a nonlinear function (PLAN) is proposed. It uses a simple digital gate design to perform a direct transformation from X to Y, where X is the input and…
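As an illustration of the shift-and-add structure that makes PLAN attractive in hardware, the combinational module below implements a piecewise linear sigmoid. The breakpoints and slopes are the commonly cited PLAN segments and the Q4.12 format is an assumption, so both should be checked against the original paper before reuse.

// Piecewise linear sigmoid in the spirit of PLAN (combinational, shift-and-add
// only). Input and output are signed Q4.12 fixed point. Breakpoints/slopes are
// the commonly cited PLAN table, included here as an assumption.
module plan_sigmoid (
    input  wire signed [15:0] x,   // Q4.12
    output reg  signed [15:0] y    // Q4.12, in [0, 1]
);
    // Q4.12 constants
    localparam signed [15:0] ONE     = 16'sd4096;   // 1.0
    localparam signed [15:0] C_5_0   = 16'sd20480;  // 5.0
    localparam signed [15:0] C_2_375 = 16'sd9728;   // 2.375
    localparam signed [15:0] C_1_0   = 16'sd4096;   // 1.0

    reg signed [15:0] absx;
    reg signed [15:0] pos;  // sigmoid(|x|)

    always @* begin
        // note: the single most negative code (16'sh8000) is not special-cased
        absx = (x < 0) ? -x : x;
        if (absx >= C_5_0)
            pos = ONE;                             // saturate at 1.0
        else if (absx >= C_2_375)
            pos = (absx >>> 5) + 16'sd3456;        // |x|/32 + 0.84375
        else if (absx >= C_1_0)
            pos = (absx >>> 3) + 16'sd2560;        // |x|/8  + 0.625
        else
            pos = (absx >>> 2) + 16'sd2048;        // |x|/4  + 0.5
        y = (x < 0) ? (ONE - pos) : pos;           // sigmoid(-x) = 1 - sigmoid(x)
    end
endmodule

Because every slope is a power of two (1/4, 1/8, 1/32), each segment costs only a shift and an addition, with no multiplier.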
Efficient Processing of Deep Neural Networks: A Tutorial and Survey
Deep neural networks (DNNs) are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver…
Improving DC Power Supply Efficiency with Neural Network Controller
  • Weiming Li, Xiao-Hua Yu
  • Engineering, Computer Science
  • 2007 IEEE International Conference on Control and Automation
  • 2007
A multi-layer feedforward neural network based controller is proposed; it has the advantage of adaptive learning ability and can operate when the input voltage and load current fluctuate.
ImageNet classification with deep convolutional neural networks
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into 1000 different classes; it employed a recently developed regularization method called "dropout" that proved to be very effective.
Human-Like Hand Reaching by Motion Prediction Using Long Short-Term Memory
A motion generation system is presented that enables humanoid robots to perform interactions using human motion prediction; to learn human motion, a Long Short-Term Memory network is trained on a public dataset.