Summary form only given. The goal of the Grid Application Development Software (GrADS) project is to provide programming tools and an execution environment to ease program development for the grid. We present recent extensions to the GrADS software framework: 1. A new approach to scheduling workflow computations, applied to a 3D image reconstruction …
High Performance Fortran (HPF) is a high-level data-parallel programming system based on Fortran. The effort to standardize HPF began in 1991, at the Supercomputing Conference in Albuquerque, where a group of industry leaders asked Ken Kennedy to lead an effort to produce a common programming language for the emerging class of distributed-memory parallel …
The OpenSHMEM community would like to announce a new effort to standardize SHMEM, a communications library that uses one-sided communication and utilizes a partitioned global address space. OpenSHMEM is an effort to bring together a variety of SHMEM and SHMEM-like implementations into an open standard using a community-driven model. By creating an …
An effective data-parallel programming environment will use a variety of tools that support the development of efficient data-parallel programs while insulating the programmer from the intricacies of the explicitly parallel code. Over the past decade, researchers have sought to provide an efficient machine-independent programming system that …
Complex scientific workflows are now increasingly executed on computational grids. In addition to the challenges of managing and scheduling these workflows, reliability challenges arise because of the unreliable nature of large-scale grid infrastructure. Fault tolerance mechanisms like over-provisioning and checkpoint-recovery are used in current grid …
It is widely acknowledged in high-performance computing circles that parallel input/output needs substantial improvement in order to make scalable computers truly usable. We present a data storage model that allows processors independent access to their own data and a corresponding compilation strategy that integrates data-parallel computation with data …