ASCI Red - Experiences and Lessons Learned with a Massively Parallel

Abstract

The Accelerated Strategic Computing Initiative (ASCI) is focused on advancing three-dimensional, full-physics calculations to the point where "full-system" simulation may be applied to virtual testing. The ASCI program involves Sandia, Los Alamos and Lawrence Livermore National Laboratories. At Sandia National Laboratories, ASCI applications include large deformation transient dynamics, shock propagation, electromechanics, and abnormal thermal environments. In order to resolve important physical phenomena in these problems, it is estimated that meshes ranging from 10^6 to 10^9 grid points will be required. The ASCI program is relying on the use of massively parallel supercomputers initially capable of delivering over 1 TFLOPs to perform such demanding computations. The ASCI "Red" machine at Sandia National Laboratories consists of over 4500 computational nodes (over 9000 processors) with a peak computational rate of 1.8 TFLOPs, 567 GBytes of memory, and 2 TBytes of disk storage. Regardless of the peak FLOP rate, there are many issues surrounding the use of massively parallel supercomputers in a "production" environment. These issues include parallel I/O, mesh generation, visualization, archival storage, high-bandwidth networking and the development of parallel algorithms. In order to illustrate these issues and their solution with respect to ASCI Red, demonstration calculations of time-dependent buoyancy-dominated plumes, electromechanics, and shock propagation will be presented. The applications issues and lessons learned to-date on the ASCI Red machine will be discussed.

Cite this paper

@inproceedings{Christon1997ASCIR,
  title={ASCI Red - Experiences and Lessons Learned with a Massively Parallel},
  author={Mark A. Christon and David Andrew Crawford and Eugene S. Hertel and J. S. Peery and Allen C. Robinson},
  year={1997}
}