Performance evaluation of supercomputers using HPCC and IMB benchmarks
TLDR
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, and NEC SX-8.
A Primer on Global Internal Tide and Internal Gravity Wave Continuum Modeling in HYCOM and MITgcm
Brian K. Arbic1,2, Matthew H. Alford3, Joseph K. Ansong1,4, Maarten C. Buijsman5, Robert B. Ciotti6, J. Thomas Farrar7, Robert W. Hallberg8, Christopher E. Henze6, Christopher N. Hill9, Conrad A.
Performance Comparison of Cray X1 and Cray Opteron Cluster with Other Leading Platforms using HPCC and IMB Benchmarks
TLDR
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of six leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon cluster, NEC SX-8, and IBM Blue Gene/L.
Performance evaluation of supercomputers using HPCC and IMB Benchmarks
TLDR
The HPC Challenge (HPCC) benchmark suite and the Intel MPI Benchmark (IMB) are used to compare and evaluate the combined performance of the processor, memory subsystem, and interconnect fabric of five leading supercomputers: SGI Altix BX2, Cray X1, Cray Opteron Cluster, Dell Xeon Cluster, and NEC SX-8.
Parallel I/O Performance Characterization of Columbia and NEC SX-8 Superclusters
TLDR
We characterize the parallel I/O performance of two of today's leading parallel supercomputers: the Columbia system at NASA Ames Research Center and the NEC SX-8 supercluster at the University of Stuttgart, Germany.
Investigating solution convergence in a global ocean model using a 2048-processor cluster of distributed shared memory machines
TLDR
We describe technical aspects of global ocean model configurations with resolutions up to 1/16° (≈ 5 km) that exploit a testbed 2048-processor Itanium-2 SGI Altix system at the NASA Ames Research Center.
High Performance Multi-Node File Copies and Checksums for Clustered File Systems
TLDR
Mcp and msum are drop-in replacements for the standard cp and md5sum programs that utilize multiple types of parallelism and other optimizations to achieve maximum copy and checksum performance on clustered file systems.
Interconnect performance evaluation of SGI Altix 3700 BX2, Cray X1, Cray Opteron Cluster, and Dell PowerEdge
TLDR
We study the performance of inter-process communication on four state-of-the-art high-speed multiprocessor systems using a set of communication benchmarks.
A Scalability Study of SGI Clustered XFS Using HDF Based AMR Application
TLDR
We study the scalability of CXFS using an HDF-based Structured Adaptive Mesh Refinement (AMR) application for three different block sizes.
Impact of the Columbia Supercomputer on NASA Science and Engineering Applications
TLDR
Columbia is a 10,240-processor supercomputer consisting of 20 Altix nodes with 512 processors each, and currently ranked as one of the fastest in the world.