Java on networks of workstations (JavaNOW): a parallel computing framework inspired by Linda and the Message Passing Interface (MPI)

  • G. K. Thiruvathukal, P. Dickens, Shahzad Bhatti
  • Concurr. Pract. Exp.
Networks of workstations are a dominant force in the distributed computing arena, due primarily to the excellent price/performance ratio of such systems when compared to traditional massively parallel architectures. It is therefore critical to develop programming languages and environments that can harness the raw computational power available on these systems. In this article, we present JavaNOW (Java on Networks of Workstations), a Java-based framework for parallel programming.
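JavaNOW's Linda heritage centers on a shared associative store (a tuple space) into which processes deposit, read, and withdraw tuples. A minimal single-JVM sketch of that abstraction is shown below; the class and method names (`TupleSpace`, `out`, `in`, `rd`) follow Linda convention but are illustrative, not JavaNOW's actual API.

```java
import java.util.LinkedList;
import java.util.List;
import java.util.function.Predicate;

// Linda-style tuple space sketch: out() deposits a tuple, in() removes a
// matching tuple (blocking until one appears), rd() reads without removing.
// Single-JVM illustration only; JavaNOW distributes this store across nodes.
class TupleSpace {
    private final List<Object[]> tuples = new LinkedList<>();

    public synchronized void out(Object... tuple) {
        tuples.add(tuple);
        notifyAll(); // wake any threads blocked in in()/rd()
    }

    public synchronized Object[] in(Predicate<Object[]> match) throws InterruptedException {
        Object[] t;
        while ((t = find(match)) == null) wait();
        tuples.remove(t); // destructive read
        return t;
    }

    public synchronized Object[] rd(Predicate<Object[]> match) throws InterruptedException {
        Object[] t;
        while ((t = find(match)) == null) wait();
        return t; // non-destructive read
    }

    private Object[] find(Predicate<Object[]> match) {
        for (Object[] t : tuples) if (match.test(t)) return t;
        return null;
    }
}
```

A typical master/worker pattern built on this would have the master `out` task tuples and workers block in `in` until tasks appear, which is what makes the model attractive for loosely coupled workstation networks.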
FastMPJ: a scalable and efficient Java message-passing library
FastMPJ, an efficient message-passing in Java (MPJ) library, boosts Java for HPC by providing high-performance shared-memory communications using Java threads and by implementing the most widely extended MPI-like Java bindings for highly productive development.
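The shared-memory device idea mentioned above, with ranks running as Java threads in one JVM and point-to-point messages traveling over in-memory queues, can be sketched as follows. The names here (`SharedMemoryDevice`, `send`, `recv`) are invented for illustration and are not FastMPJ's actual API.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustrative sketch of an intra-JVM message-passing device: each rank
// owns a mailbox queue; send() enqueues to the destination's mailbox and
// recv() blocks until a message arrives. Not FastMPJ's real implementation.
class SharedMemoryDevice {
    private final BlockingQueue<Object>[] mailboxes;

    @SuppressWarnings("unchecked")
    SharedMemoryDevice(int ranks) {
        mailboxes = new BlockingQueue[ranks];
        for (int i = 0; i < ranks; i++) mailboxes[i] = new ArrayBlockingQueue<>(64);
    }

    void send(int dest, Object msg) throws InterruptedException {
        mailboxes[dest].put(msg); // blocks if the destination's queue is full
    }

    Object recv(int rank) throws InterruptedException {
        return mailboxes[rank].take(); // blocks until a message is available
    }
}
```

Real MPJ implementations layer MPI semantics (tags, communicators, datatypes, collectives) above such a device; this sketch only shows the blocking point-to-point core.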
Coalescing Idle Workstations as a Multiprocessor System using JavaSpaces and Java Web Start
A distributed system which aggregates the unused and usually wasted processing capacity of idle workstations and delivers a powerful yet inexpensive execution environment for computationally intensive applications is described.
GMI: Flexible and Efficient Group Method Invocation for Parallel Programming
The Group Method Invocation model (GMI) allows methods to be invoked either on a single object or on a group of objects, the latter possibly with personalized parameters, providing an efficient group communication mechanism for parallel programming.
Distributed shared arrays: A distributed virtual machine with mobility support for reconfiguration
The programmability of the DSA model is demonstrated in a number of parallel applications, and its performance is evaluated with application benchmark programs, in particular the impact of coherence granularity and service-migration overhead.
Simple, Weakly-coupled, Invisible Middleware (SWIM)
  • M. Bateman, S. Bhatti
  • Computer Science
  • 2011 IEEE International Conference on Advanced Information Networking and Applications
  • 2011
This work presents a proof-of-concept demonstration of a middleware platform that imposes absolutely no constraints on the programmer apart from those of the programming language itself.
Low‐latency Java communication devices on RDMA‐enabled networks
Efficient low‐level Java communication devices that overcome constraints by fully exploiting the underlying RDMA hardware, providing low‐latency and high‐bandwidth communications for parallel Java applications, are presented.
A Complete Bibliography of Publications in Concurrency: Practice and Experience


Towards Seamless Computing and Metacomputing in Java
This paper introduces Java// (pronounced Java Parallel), a 100% Java library that provides transparent remote objects as well as asynchronous two-way calls, high reuse potential and high-level synchronization mechanisms, and describes a distributed collaborative raytracing test application built using Java//.
Java/DSM: A Platform for Heterogeneous Computing
Java/DSM is a system for programming heterogeneous computing environments, based upon Java and software distributed shared memory (DSM), that transparently handles both the hardware differences and the distributed nature of the system.
Javelin: Internet-based Parallel Computing using Java
The Javelin architecture is intended to be a substrate on which various programming models may be implemented, and several such models are presented: a Linda tuple space, an SPMD programming model with barriers, as well as support for message passing.
Parallel processing on networks of workstations: a fault-tolerant, high performance approach
Using completely novel techniques (eager scheduling, evasive memory layouts, and dispersed data management), it is possible to build an execution environment for parallel programs on workstation networks that is neither a fault-tolerant system extended for parallel processing nor a parallel processing system extended for fault tolerance.
JPVM: network parallel computing in Java
  • A. Ferrari
  • Computer Science
  • Concurr. Pract. Exp.
  • 1998
Initial applications performance results achieved with a prototype JPVM system indicate that the Java-implemented approach can offer good performance at appropriately coarse granularities.
Linda on distributed memory multiprocessors
This dissertation shows that Linda can be made efficient on scalable distributed-memory multiprocessors, presenting a design for implementing Linda's tuple space on such machines and arguing that the design results in an efficient implementation.
IceT: Distributed Computing and Java
The aim of the IceT project has been to mutually incorporate approaches and techniques found in Internet programming with established and evolving distributed computing paradigms, which would lead to a natural environment for collaborative computing and an extension to the traditional distributed computing environment.
Integrated Pvm Framework Supports Heterogeneous Network Computing
A recent extension to PVM known as the Heterogeneous Network Computing Environment (HeNCE) is introduced; the characteristics of appropriate applications are summarized, and the current status and availability of PVM are discussed.
An Introduction to the MPI Standard
The Message Passing Interface is a portable message-passing standard that facilitates the development of parallel applications and libraries and forms a possible target for compilers of languages such as High Performance Fortran.
ParaWeb: towards world-wide supercomputing
ParaWeb provides extensions to the Java programming environment (through a parallel class library and the Java runtime system) that allow programmers to develop new Java applications with parallelism in mind, or to execute existing Java applications written using Java's multithreading facilities in parallel.