Delirium: an embedding coordination language

  • Steven E. Lucco, Oliver J. Sharp
  • Proceedings SUPERCOMPUTING '90
  • 1990
The authors outline a strategy for expressing coordination of sequential subcomputations, realized in the embedding language Delirium. In contrast to existing embedded languages, the notation clearly expresses the coordination framework of the application. All the coordination required to execute the program is expressed in a unified Delirium program. The program contains the computational code in the form of embedded operators, written using conventional tools. The proposed environment, which… 


Linear Logic and Coordination for Parallel Programming

A new declarative programming language, called Linear Meld (LM), is proposed that supports data-driven dynamic coordination mechanisms that are semantically equivalent to regular computation.

Coordinating functional processes with Haskell#

The implementation of several well-known applications in Haskell# is presented, demonstrating the language's expressiveness: it allows elegant, simple, and concise specification of any static pattern of parallel, concurrent, or distributed computation.

Orchestrating interactions among parallel computations

This paper develops a methodology for managing the interactions among sub-computations, avoiding strict synchronization where concurrent or pipelined relationships are possible, and demonstrates that these dynamic techniques substantially improve performance on a range of production applications including climate modeling and x-ray tomography.

A Software Environment for Concurrent Coordinated Programming

ConCoord is a software environment for Concurrent Coordinated programming targeted at networks of sequential and parallel machines and provides linguistic support for heterogeneous concurrency exploitation.

A dynamic scheduling method for irregular parallel programs

A fundamental relationship is shown between three quantities that characterize an irregular parallel computation: the total available parallelism, the optimal grain size, and the statistical variance of individual task execution times. This relationship yields a dynamic scheduling algorithm that substantially reduces the overhead of executing irregular parallel operations.
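The relationship above suggests a simple heuristic: shrink the scheduling grain as the variance of task times grows, so that stragglers can be rebalanced. A minimal Python sketch (the function names and the specific formula are illustrative assumptions, not the paper's actual algorithm):

```python
import statistics

def choose_grain_size(num_tasks, num_workers, task_time_samples):
    """Pick a chunk size for dynamic scheduling.

    Hypothetical heuristic: start from an even split across workers,
    then shrink the chunk as the relative variance of sampled task
    times grows, trading scheduling overhead for better balance.
    """
    mean = statistics.mean(task_time_samples)
    var = statistics.pvariance(task_time_samples)
    rel_var = var / (mean * mean) if mean else 0.0
    even_split = max(1, num_tasks // num_workers)
    # More variance -> smaller chunks -> more rebalancing opportunities.
    return max(1, int(even_split / (1.0 + rel_var)))

def make_chunks(tasks, grain):
    """Split the task list into chunks of the chosen grain size."""
    return [tasks[i:i + grain] for i in range(0, len(tasks), grain)]
```

With uniform task times the heuristic degenerates to an even static split; as variance rises, the grain shrinks toward fully dynamic one-task-at-a-time scheduling.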

Shared State for Heterogeneous Distributed Systems

To maximize performance on bandwidth-limited networks, InterWeave caches data locally, leverages application-specific coherence requirements to minimize the frequency of updates, and employs two-way diffing to update only those data that have actually changed.

Support for Machine and Language Heterogeneity in a Distributed Shared State System

This paper focuses on the aspects of InterWeave specifically designed to accommodate heterogeneous machine architectures and languages, and evaluates the performance of these heterogeneity mechanisms against their counterparts in RPC-style systems.

Flexible and Efficient Control of Data Transfers for Loosely Coupled Components

A loosely coupled framework to support the coupling of parallel and sequential application components is proposed, along with a multi-threaded, multi-process control protocol that can be systematically constructed by composing sub-task protocols.

Efficient distributed shared state for heterogeneous machine architectures

Experimental results show that InterWeave achieves performance comparable to that of RPC parameter passing when transmitting previously uncached data, and that its use of platform-independent diffs allows it to significantly outperform the straightforward use of RPC when updating data that have already been cached.
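The diff-based update idea can be illustrated with a toy field-level diff over dictionary snapshots. This is a hypothetical stand-in, not InterWeave's API: the real system diffs memory segments in a machine-independent wire format, in both directions between client and server ("two-way diffing").

```python
def diff(old, new):
    """Return only the fields that changed between two snapshots
    (toy analogue of a platform-independent diff)."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def apply_diff(state, delta):
    """Apply a diff to a cached copy, producing the updated state."""
    state = dict(state)
    state.update(delta)
    return state
```

The payload is proportional to what changed rather than to the whole object, which is why diffs beat whole-object RPC transfers once a copy is already cached.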

Linguistic support for heterogeneous parallel processing: a survey and an approach

Two essential features are proposed for inclusion in programming languages intended to support heterogeneity, from the perspective of programming complex, heterogeneous systems.



The VAL Language: Description and Analysis

Analysis of the language shows that VAL meets the critical needs for a data flow environment, encourages programmers to think in terms of general concurrency, enhances readability, and possesses a structure amenable to verification techniques.

Tarmac: a language system substrate based on mobile memory

  • S. Lucco, D. Anderson
  • Computer Science
    Proceedings, 10th International Conference on Distributed Computing Systems
  • 1990
A model of shared global state provided by Tarmac, called mobile memory, is discussed: a mobile memory can be viewed both as a block of memory that can be directly accessed by machine instructions and as a logical entity with a globally unique name that may be efficiently located, copied, and moved.
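The dual view of a mobile memory (raw bytes plus a globally named, relocatable entity) can be sketched as follows. The class and method names are illustrative assumptions, not Tarmac's actual interface:

```python
class MobileMemory:
    """Toy sketch of mobile-memory bookkeeping: each named block of
    bytes lives on exactly one host and can be located or moved."""

    def __init__(self):
        self.location = {}  # name -> current host
        self.blocks = {}    # (host, name) -> bytearray

    def create(self, name, host, size):
        """Allocate a named block of the given size on a host."""
        self.location[name] = host
        self.blocks[(host, name)] = bytearray(size)

    def locate(self, name):
        """Resolve a globally unique name to its current host."""
        return self.location[name]

    def move(self, name, new_host):
        """Relocate the block; its contents travel with it."""
        old_host = self.location[name]
        self.blocks[(new_host, name)] = self.blocks.pop((old_host, name))
        self.location[name] = new_host
```

On the host that holds it, the `bytearray` plays the role of directly addressable memory; everywhere else, the name-to-host mapping is what makes the block locatable.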

Distributed execution of functional programs using serial combinators

The authors describe a program transformation technique based on serial combinators that offers in some sense just the right granularity for this style of computing, and that can be fine-tuned for particular multiprocessor architectures.

Conception, evolution, and application of functional programming languages

The foundations of functional programming languages are examined from both historical and technical perspectives, and current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.

On the suitability of Ada multitasking for expressing parallel algorithms

The multitasking facilities of Ada are shown to lack an essential property necessary to support parallel algorithms: the ability to express parallel evaluation and distribution of parameters to the respective tasks.

Distributed programming with shared data

  • H. Bal, A. Tanenbaum
  • Computer Science
    Proceedings, 1988 International Conference on Computer Languages
  • 1988
The authors discuss how automatic replication (initiated by the run-time system) can be used as a basis for a model, called the shared data-object model, whose semantics are similar to the shared variable model.
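The shared data-object idea, with runtime-initiated replication, can be sketched in a few lines. This is a hypothetical illustration of the model's semantics, not the actual system's implementation: reads go to a local replica, while writes are propagated by the runtime to every copy.

```python
class SharedObject:
    """Toy replicated data object: one replica per site, kept
    consistent by updating all copies on every write."""

    def __init__(self, num_sites):
        self.replicas = [dict() for _ in range(num_sites)]

    def read(self, site, key):
        """Reads are served from the site's local replica."""
        return self.replicas[site].get(key)

    def write(self, key, value):
        """The runtime propagates the write to every replica."""
        for replica in self.replicas:
            replica[key] = value
```

Because every replica sees each write, a program observes shared-variable semantics even though no single copy of the data is actually shared.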

Parallel programming in a virtual object space

Sloop is a parallel language and environment that employs an object-oriented model for explicit parallel programming of MIMD multiprocessors; it uses object relocation heuristics and coroutine scheduling to attain high performance.

The Amber system: parallel programming on a network of multiprocessors

A programming system called Amber permits a single application program to use a homogeneous network of computers in a uniform way, making the network appear to the application as an integrated multiprocessor; it shows that support for loosely coupled multiprocessing can be efficiently realized using an object-based programming model.

Concepts and Notations for Concurrent Programming

This paper identifies the major concepts and describes some of the more important language notations for writing concurrent programs; three general classes of concurrent programming languages are identified and compared.

Programming languages for distributed computing systems

This paper gives a view of what a distributed system is and describes the three main characteristics that distinguish distributed programming languages from traditional sequential languages: how they deal with parallelism, communication, and partial failures.