Seema Hiranandani

Parallel computing represents the only plausible way to continue to increase the computational power available to scientists and engineers. Parallel computers, however, are not likely to be widely successful until they are easy to program. A major component in the success of vector supercomputers is the ability of scientists to write Fortran programs in a …
The Fortran D compiler uses data decomposition specifications to automatically translate Fortran programs for execution on MIMD distributed-memory machines. This paper introduces and classifies a number of advanced optimizations needed to achieve acceptable performance; they are analyzed and empirically evaluated for stencil computations. Communication …
The Fortran D compiler uses data decomposition specifications to automatically translate Fortran programs for execution on MIMD distributed-memory machines. This paper introduces and classifies a number of advanced optimizations needed to achieve acceptable performance; they are analyzed and empirically evaluated for stencil computations. Profitability …
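To make the stencil setting concrete, the sketch below shows a Jacobi-style relaxation written in Fortran D. The array sizes and the column-blocked distribution are illustrative rather than taken from these papers; the directive syntax follows the published Fortran D language specification. With A and B block-distributed by columns, the references to A(I, J-1) and A(I, J+1) reach into neighboring processors' columns, and communication optimizations such as message vectorization would hoist that boundary exchange out of the inner loops so each processor sends whole boundary columns once per sweep rather than one element at a time.

      PROGRAM JACOBI
      REAL A(256, 256), B(256, 256)
      DECOMPOSITION D(256, 256)
      ALIGN A(I, J) WITH D(I, J)
      ALIGN B(I, J) WITH D(I, J)
C     Column-blocked distribution: each processor owns a slab of columns.
      DISTRIBUTE D(:, BLOCK)
      DO K = 1, 100
         DO J = 2, 255
            DO I = 2, 255
C              Four-point stencil: A(I, J-1) and A(I, J+1) may live on
C              neighboring processors, so the compiler must insert
C              communication for the boundary columns.
               B(I, J) = 0.25 * (A(I-1, J) + A(I+1, J)
     &                         + A(I, J-1) + A(I, J+1))
            ENDDO
         ENDDO
         DO J = 2, 255
            DO I = 2, 255
               A(I, J) = B(I, J)
            ENDDO
         ENDDO
      ENDDO
      END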
Scientific and engineering applications often involve structured meshes. These meshes may be nested (for multigrid or adaptive codes) and/or irregularly coupled (called Irregularly Coupled Regular Meshes). We have designed and implemented a runtime library for parallelizing this general class of applications on distributed memory parallel machines in an efficient …
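As a rough illustration of how an application might drive such a runtime library, the Fortran sketch below separates a one-time schedule-building step from a repeated boundary exchange between two coupled regular meshes. The routine names BUILD_SCHED, RELAX, and EXCHANGE, and their argument lists, are hypothetical placeholders invented for this sketch, not the library's actual interface.

      PROGRAM ICRM
C     Two regular meshes coupled along one face (sizes illustrative).
      REAL M1(64, 64), M2(64, 64)
      INTEGER SCHED, STEP
C     Hypothetical call: describe once which regular section of M1 is
C     coupled to which regular section of M2, and let the library build
C     a communication schedule for the current data distribution.
      CALL BUILD_SCHED(SCHED, M1, 64, 64, M2, 64, 64)
      DO STEP = 1, 100
C        Sweep each regular mesh independently.
         CALL RELAX(M1, 64, 64)
         CALL RELAX(M2, 64, 64)
C        Hypothetical call: reuse the precomputed schedule to move the
C        coupled boundary regions between the two meshes.
         CALL EXCHANGE(SCHED, M1, M2)
      ENDDO
      END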
Because of the complexity and variety of parallel architectures, an efficient machine-independent parallel programming model is needed to make parallel computing truly usable for scientific programmers. We believe that Fortran D, a version of Fortran enhanced with data decomposition specifications, can provide such a programming model. This paper presents the …
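The data decomposition specifications themselves are small. A minimal example, following the syntax of the published Fortran D language specification (the array name, its size, and the choice of a column-blocked distribution are illustrative):

      PROGRAM DECOMP
      REAL A(1024, 1024)
C     Declare an abstract decomposition with the same shape as A.
      DECOMPOSITION D(1024, 1024)
C     Map each element of A onto the corresponding element of D.
      ALIGN A(I, J) WITH D(I, J)
C     Keep rows local and divide the columns into contiguous blocks,
C     one block per processor; CYCLIC and BLOCK_CYCLIC are the other
C     distribution choices Fortran D provides.
      DISTRIBUTE D(:, BLOCK)
      END

From these few lines the compiler, rather than the programmer, determines data placement and the communication each node program needs, which is what keeps the source machine-independent.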
Massively parallel MIMD distributed-memory machines can provide enormous computation power. However, the difficulty of developing parallel programs for these machines has limited their accessibility. This paper presents compiler algorithms to automatically derive efficient message-passing programs based on data decompositions. Optimizations are presented to …
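As a rough sketch of what deriving message-passing programs from data decompositions produces, consider a single shifted-reference loop over a block-distributed array. The node program below is schematic: SEND and RECEIVE stand in for whatever message-passing primitives the target machine provides, and MYPROC/NPROCS for its notion of processor identity; none of it is a specific machine's interface.

C     Source loop, with A and B block-distributed across processors:
C         DO I = 2, N
C            B(I) = A(I - 1)
C         ENDDO
C     Sketch of the node program derived under the owner-computes rule.
      SUBROUTINE NODE(A, B, NLOCAL, MYPROC, NPROCS)
      INTEGER NLOCAL, MYPROC, NPROCS, I, ILO
C     Local block of A plus one overlap (ghost) cell at index 0.
      REAL A(0:NLOCAL), B(NLOCAL)
C     Send my last element right: it is the right neighbor's A(I - 1)
C     for that neighbor's first local iteration.
      IF (MYPROC .LT. NPROCS - 1) CALL SEND(MYPROC + 1, A(NLOCAL), 1)
C     Receive the left neighbor's last element into the overlap cell.
      IF (MYPROC .GT. 0) CALL RECEIVE(MYPROC - 1, A(0), 1)
C     Each processor executes only the iterations it owns; the first
C     processor starts at local index 2 because the source loop does.
      ILO = 1
      IF (MYPROC .EQ. 0) ILO = 2
      DO I = ILO, NLOCAL
         B(I) = A(I - 1)
      ENDDO
      END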
This paper addresses the problem of compiling concurrent loop nests in the presence of complicated array references and irregularly distributed arrays. Loops may contain array accesses whose reference patterns cannot be determined precisely at compile time. This paper proposes a run-time support mechanism that is used effectively …
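Run-time support of this kind typically takes an inspector/executor form, sketched below. The LOCALIZE and GATHER routines, their arguments, and the ghost-region layout are hypothetical placeholders for this illustration, not any particular library's interface: the inspector analyzes the indirection array to build a communication schedule, and the executor uses that schedule to fetch off-processor elements so the loop then runs on purely local data.

C     Irregular source loop:
C         DO I = 1, N
C            Y(I) = Y(I) + X(IDX(I))
C         ENDDO
C     X is distributed and IDX is known only at run time, so the
C     reference pattern cannot be resolved at compile time.
      SUBROUTINE IRREG(Y, X, IDX, NLOCAL, NGHOST)
      INTEGER NLOCAL, NGHOST, I, SCHED
      INTEGER IDX(NLOCAL)
C     Local elements of X plus room for copies of off-processor ones.
      REAL Y(NLOCAL), X(NLOCAL + NGHOST)
C     Inspector (hypothetical call): examine IDX, translate global
C     indices to local ones, and record which off-processor elements
C     of X this processor will need.
      CALL LOCALIZE(SCHED, IDX, NLOCAL)
C     Executor, step 1 (hypothetical call): use the schedule to fetch
C     the off-processor elements into X(NLOCAL+1 : NLOCAL+NGHOST).
      CALL GATHER(SCHED, X)
C     Executor, step 2: the loop now touches only local and ghost data.
      DO I = 1, NLOCAL
         Y(I) = Y(I) + X(IDX(I))
      ENDDO
      END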
The success of large-scale parallel architectures is limited by the difficulty of developing machine-independent parallel programs. We have developed Fortran D, a version of Fortran extended with data decomposition specifications, to provide a portable data-parallel programming model. This paper presents the design of two key components of the Fortran D …