Henry M. Monti

Designing cloud computing setups is a challenging task. It involves understanding the impact of a plethora of parameters, including cluster configuration, partitioning, networking characteristics, and the targeted applications' behavior. The size of the design space and the scale of the clusters make it cumbersome and error-prone to test different cluster …
Modern High Performance Computing (HPC) applications process very large amounts of data. A critical research challenge lies in transporting input data to the HPC center from a number of distributed sources, e.g., scientific experiments and web repositories, and offloading the result data to geographically distributed, intermittently available …
High Performance Computing is facing an exponential growth in job output dataset sizes. This implies a significant commitment of supercomputing center resources, most notably precious scratch space, to handling data staging and offloading. However, the scratch area is typically managed using simple "purge policies", without sophisticated "end-user data …
To sustain emerging data-intensive scientific applications, High Performance Computing (HPC) centers invest a notable fraction of their operating budget in a specialized fast storage system, scratch space, which is designed for storing the data of currently running and soon-to-run HPC jobs. Instead, it is often used as a standard file system, wherein users …
Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end computing systems. Processing such large input data typically involves copying (or staging) it onto the supercomputer's specialized high-speed storage, scratch space, for sustained high I/O throughput. The current practice of conservatively staging data …
Modern High-Performance Computing applications are consuming and producing an exponentially increasing amount of data. This increase has led to significant resources being dedicated to data staging in and out of supercomputing centers. The typical approach to staging is a direct transfer of application data between the center and the …
Innovative scientific applications and emerging dense data sources are creating a data deluge for high-end supercomputing systems. Modern applications are often collaborative in nature, with a distributed user base for input and output data sets. Processing such large input data typically involves copying (or staging) the data onto the supercomputer's …
Modern High-Performance Computing (HPC) centers are facing a data deluge from emerging scientific applications. Supporting large data entails a significant commitment of the high-throughput center storage system, scratch space. However, the scratch space is typically managed using simple "purge policies," without sophisticated end-user data services to …