Kristian R. Nichols

Artificial Neural Networks (ANNs) implemented on Field-Programmable Gate Arrays (FPGAs) have traditionally used a minimum allowable precision of 16-bit fixed-point. This is considered an optimal precision vs. area tradeoff for FPGA-based ANNs because quality of performance is maintained while making efficient use of the limited hardware.
Artificial Neural Networks (ANNs) are inherently parallel architectures and a natural fit for custom implementation on FPGAs. One important implementation issue is determining the numerical precision format that allows an optimum tradeoff between precision and implementation area. Standard single- or double-precision floating-point…
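
Both abstracts turn on the same idea: representing weights and activations as 16-bit fixed-point values rather than floating-point, trading a small quantization error for much cheaper hardware. A minimal sketch of what that arithmetic looks like, assuming a hypothetical Q3.13 format (3 integer bits including sign, 13 fractional bits); the function names and example values are illustrative and not taken from either paper:

```python
# Sketch: a neuron's multiply-accumulate in 16-bit fixed-point (assumed
# Q3.13 format) compared against double-precision floating point.

FRAC_BITS = 13
SCALE = 1 << FRAC_BITS          # 2**13 = 8192

def to_fixed(x: float) -> int:
    """Quantize a real value to a signed 16-bit fixed-point integer."""
    v = int(round(x * SCALE))
    # Saturate to the representable 16-bit range, as a hardware datapath would.
    return max(-(1 << 15), min((1 << 15) - 1, v))

def fixed_mac(weights, inputs) -> float:
    """Multiply-accumulate entirely in integer arithmetic."""
    acc = 0
    for w, x in zip(weights, inputs):
        # The product of two Q3.13 values is Q6.26; shift back to Q3.13.
        acc += (to_fixed(w) * to_fixed(x)) >> FRAC_BITS
    return acc / SCALE          # Rescale to a real value for comparison

weights = [0.5, -0.25, 0.125]   # illustrative values only
inputs  = [0.9,  0.4,  -0.7]

float_result = sum(w * x for w, x in zip(weights, inputs))
print(f"float: {float_result:.6f}")       # 0.262500
print(f"fixed: {fixed_mac(weights, inputs):.6f}")  # ~0.262329, within quantization error
```

The 16-bit products and shifts here map directly onto the small hardware multipliers and wire routing an FPGA provides, which is the area saving both abstracts refer to; a floating-point unit implementing the same MAC would consume far more logic.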