Andrew G. Schmidt

As FPGA resources continue to increase, FPGAs present attractive features to the High Performance Computing community, including power-efficient computation, application-specific acceleration, and tighter integration between compute and I/O resources. This paper considers the ability of an FPGA to address another, increasingly…
The Bureau of Economic Analysis (BEA) research and development (R&D) satellite account provides detailed statistics designed to facilitate research into the effects of R&D on the economy. The account shows how gross domestic product (GDP) and other measures would be affected if R&D spending were "capitalized," that is, if R&D spending were treated…
While medium- and large-sized computing centers have increasingly relied on clusters of commodity PC hardware to provide cost-effective capacity and capability, it is not clear that this technology will scale to the PetaFLOP range. It is expected that semiconductor technology will continue its exponential advancements over the next fifteen years; however, new…
The Reconfigurable Computing Cluster Project at the University of North Carolina at Charlotte is investigating the feasibility of using FPGAs as compute nodes to scale to PetaFLOP computing. To date, the Spirit cluster, consisting of 64 FPGAs, has been assembled for the initial analysis. One important question is how to efficiently communicate among compute…
This short paper describes a remote laboratory facility for platform FPGA education. With the addition of an inexpensive piece of hardware, many commercial off-the-shelf FPGA development boards can be made suitable for use in a remote laboratory. The hardware and software required to implement a remote laboratory have been developed, and a remote laboratory…
Message passing is the dominant programming model for distributed-memory parallel computers, and the Message-Passing Interface (MPI) is the standard. Along with point-to-point send and receive primitives, MPI includes a set of collective communication operations used to synchronize and coordinate groups of tasks. MPI_Barrier, one of the…
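The barrier semantics mentioned above can be illustrated without an MPI installation. The sketch below is a conceptual analogy, not MPI itself: it emulates a group of tasks with Python threads and uses `threading.Barrier` to enforce the guarantee that MPI_Barrier provides, namely that no task proceeds past the barrier until every task in the group has reached it. The group size `NUM_TASKS` is an arbitrary illustrative choice, standing in for the size of an MPI communicator.

```python
# Conceptual sketch of barrier synchronization (analogy to MPI_Barrier,
# not actual MPI): no task continues past barrier.wait() until all
# NUM_TASKS tasks have reached it.
import threading

NUM_TASKS = 4  # illustrative group size, analogous to a communicator size
barrier = threading.Barrier(NUM_TASKS)
order = []            # records the interleaving of events across tasks
lock = threading.Lock()

def task(rank):
    with lock:
        order.append(("before", rank))  # work done before the barrier
    barrier.wait()                      # analogous to MPI_Barrier(...)
    with lock:
        order.append(("after", rank))   # work done after the barrier

threads = [threading.Thread(target=task, args=(r,)) for r in range(NUM_TASKS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The barrier guarantee: every "before" event precedes every "after" event,
# regardless of how the threads were scheduled.
first_after = min(i for i, (phase, _) in enumerate(order) if phase == "after")
assert all(phase == "before" for phase, _ in order[:first_after])
assert first_after == NUM_TASKS
```

In a real MPI program each task would be a separate process calling `MPI_Barrier` on a communicator; the ordering guarantee checked by the assertions is the same.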