MPI: The Complete Reference

@inproceedings{Snir1996MPITC,
  title={MPI: The Complete Reference},
  author={Marc Snir and Steve W. Otto and David W. Walker and Jack J. Dongarra and Steven Huss-Lederman},
  year={1996}
}
From the Publisher: MPI, the Message Passing Interface, is a standard and portable library of communication subroutines for parallel programming, designed to run on a wide variety of parallel computers. It is useful both on parallel machines, such as IBM's SP2, the Cray Research T3D, and the Connection Machine, and on networks of workstations. Written by five of the principal creators of the latest MPI standard, MPI: The Complete Reference is an annotated manual for the latest 1.1…
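To give a flavor of the interface the book documents, here is a minimal point-to-point example in C. It is an illustrative sketch, not an excerpt from the book; it uses only standard MPI calls (MPI_Init, MPI_Send, MPI_Recv, MPI_Finalize) and would be compiled with mpicc and run with two processes.

/* Minimal MPI point-to-point sketch: rank 0 sends one integer to rank 1.
 * Illustrative only; run with e.g. mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        value = 42;
        /* Send one MPI_INT to rank 1 with message tag 0. */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* Block until the matching message from rank 0 arrives. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", value);
    }

    MPI_Finalize();
    return 0;
}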
A message passing standard for MPP and workstations
TLDR
Commercial and free, public-domain implementations of MPI have been available since 1994, with more under development; they run both on tightly coupled, massively parallel processing (MPP) machines and on networks of workstations (NOWs).
PMPI: High-Level Message Passing in Fortran 77 and C
TLDR
A higher-level Programmer's Message-Passing Interface (PMPI) to the standard MPI libraries that is better suited to the needs of application programmers, offering fewer operations than MPI and simpler arguments.
A Performance Study of LAM and MPICH on an SMP Cluster
TLDR
The performance of LAM and MPICH on an SMP cluster is compared, in an effort to provide performance data and analysis of the current release of each to the cluster computing community; the results suggest that LAM performs better than MPICH in the cluster environment.
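Comparisons like this one are typically driven by a ping-pong microbenchmark. The sketch below shows the general pattern in C; the iteration count and message size are illustrative assumptions, not the paper's actual setup.

/* Ping-pong latency sketch: rank 0 and rank 1 bounce a message back and
 * forth; half the averaged round-trip time estimates one-way latency.
 * ITERS and MSG_BYTES are assumed values for illustration. */
#include <mpi.h>
#include <stdio.h>

#define ITERS 1000
#define MSG_BYTES 1024

int main(int argc, char **argv) {
    char buf[MSG_BYTES];
    int rank;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);   /* start both ranks together */
    t0 = MPI_Wtime();
    for (int i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)  /* round-trip time halved gives one-way latency */
        printf("avg one-way latency: %g us\n",
               (t1 - t0) / (2.0 * ITERS) * 1e6);

    MPI_Finalize();
    return 0;
}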
rMPI: Message Passing on Multicore Processors with On-Chip Interconnect
With multicore processors becoming the standard architecture, programmers are faced with the challenge of developing applications that capitalize on multicore's advantages. This paper presents rMPI, …
The MPI/OmpSs parallel programming model
TLDR
A new programming model is presented that allows the programmer to easily introduce the asynchrony necessary to overlap communication and computation; it is based on MPI and a task-based shared-memory framework, namely OmpSs.
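The underlying pattern is overlapping communication with computation. The sketch below shows it with plain MPI nonblocking calls in C; OmpSs expresses the same idea through tasks, so this illustrates only the pattern, not the OmpSs API, and the function and buffer names are hypothetical.

/* Hedged sketch of communication/computation overlap with nonblocking MPI.
 * recvbuf, sendbuf, work, and the neighbor ranks are illustrative. */
#include <mpi.h>

void overlap_step(double *recvbuf, const double *sendbuf, double *work,
                  int n, int left, int right, MPI_Comm comm) {
    MPI_Request reqs[2];

    /* Start the exchange without blocking. */
    MPI_Irecv(recvbuf, n, MPI_DOUBLE, left, 0, comm, &reqs[0]);
    MPI_Isend(sendbuf, n, MPI_DOUBLE, right, 0, comm, &reqs[1]);

    /* Compute on data not involved in the transfer while messages
     * are in flight (placeholder computation). */
    for (int i = 0; i < n; i++)
        work[i] *= 0.5;

    /* Wait only when the exchanged data is actually needed. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
}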
rMPI : an MPI-compliant message passing library for tiled architectures
Next-generation microprocessors will increasingly rely on parallelism, as opposed to frequency scaling, for improvements in performance. Microprocessor designers are attaining such parallelism by …
Evaluating and Modeling Communication Overhead of MPI Primitives on the Meiko CS-2
TLDR
A benchmark model of MPI communications, based on the size of the messages exchanged and the number of processors involved, is proposed to evaluate the performance of the point-to-point and broadcast communication primitives of the standard MPI library on the Meiko CS-2 parallel machine.
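Models of this kind are, in spirit, latency-bandwidth fits; the forms below (a Hockney-style point-to-point cost and a binomial-tree broadcast term) are illustrative assumptions, not the coefficients or functional form fitted in the paper.

% Generic point-to-point cost: startup latency alpha plus per-byte cost
% beta times message size m.
\[
  T_{\mathrm{p2p}}(m) = \alpha + \beta\, m
\]
% Broadcast over p processors, assuming a binomial tree of point-to-point
% steps (an illustrative assumption):
\[
  T_{\mathrm{bcast}}(m, p) \approx \lceil \log_2 p \rceil \,(\alpha + \beta\, m)
\]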
The Design and Implementation of Message Passing Services for the BlueGene/L Supercomputer
TLDR
Performance measurements show that the message-passing services deliver performance close to the hardware limits of the machine; dedicating one of the processors of a node to communication functions greatly improves message-passing bandwidth, whereas running two processes per compute node (virtual node mode) can have a positive impact on application performance.
HMPI: towards a message-passing library for heterogeneous networks of computers
The paper presents Heterogeneous MPI (HMPI), an extension of MPI for programming high-performance computations on heterogeneous networks of computers. It allows the application programmer to describe …
...