Quantifying Effective Memory Bandwidth of Platform FPGAs

Abstract

The benefits of high-performance computing (HPC) can be seen in a wide range of applications. From science and medicine to industries as diverse as oil exploration, finance, and entertainment, access to cost-effective HPC is becoming a critical part of our national infrastructure. Although exponential semiconductor advances are giving computational scientists faster computing speeds, applications with large amounts of data may not necessarily solve problems faster. More specifically, technology trends are working against system designers: computation rates and memory capacity are both rising faster than the bandwidth between these two components. This so-called "Memory Wall" was predicted for general-purpose computing over a decade ago, but to date large on-chip caches have been able to compensate for the growing disparity. FPGA designers do not have the luxury of large caches, so to be successful, high-performance designs must include custom memory hierarchies and data paths as well as application-specific computations.

DOI: 10.1109/FCCM.2007.47


Cite this paper

@article{Schmidt2007QuantifyingEM,
  title   = {Quantifying Effective Memory Bandwidth of Platform FPGAs},
  author  = {Andrew G. Schmidt and Ron Sass},
  journal = {15th Annual IEEE Symposium on Field-Programmable Custom Computing Machines (FCCM 2007)},
  year    = {2007},
  pages   = {337-338}
}