Linda and Friends

@article{Ahuja1986LindaAF,
  title={Linda and Friends},
  author={Sudhir R. Ahuja and Nicholas Carriero and David Gelernter},
  journal={Computer},
  year={1986},
  volume={19},
  pages={26-34}
}
Linda consists of a few simple operators designed to support and simplify the construction of explicitly parallel programs. Linda has been implemented on AT&T Bell Labs' S/Net multicomputer and, in a preliminary way, on an Ethernet-based MicroVAX network and an Intel iPSC hypercube. Parallel programming is often described as being fundamentally harder than conventional, sequential programming, but in our experience (limited so far, but growing) it isn't. Parallel programming in Linda is…
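To make the model concrete, here is a minimal, single-process C sketch of tuple-space semantics: out deposits a tuple, rd copies a matching tuple without removing it, and in withdraws one. The ts_out/ts_rd/ts_in names and the fixed (tag, integer) tuple shape are illustrative assumptions, not Linda's actual interface; real C-Linda provides out, in, rd, and eval as language-level operations, and in/rd block until a matching tuple appears rather than returning a failure code.

/*
 * Minimal single-process sketch of Linda-style tuple-space semantics.
 * ts_out/ts_rd/ts_in are hypothetical names; real C-Linda provides
 * out(), rd(), in(), and eval(), and in()/rd() block until a match exists.
 */
#include <stdio.h>
#include <string.h>

#define MAX_TUPLES 64

typedef struct {
    char tag[32];   /* first field of the tuple, used for matching */
    int  value;     /* second field */
    int  live;      /* 1 if this slot currently holds a tuple */
} tuple;

static tuple space[MAX_TUPLES];

/* out("tag", v): add a tuple to the space */
static void ts_out(const char *tag, int value) {
    for (int i = 0; i < MAX_TUPLES; i++) {
        if (!space[i].live) {
            strncpy(space[i].tag, tag, sizeof space[i].tag - 1);
            space[i].value = value;
            space[i].live = 1;
            return;
        }
    }
}

/* rd("tag", ?v): copy a matching tuple without removing it; returns 0 if none */
static int ts_rd(const char *tag, int *value) {
    for (int i = 0; i < MAX_TUPLES; i++) {
        if (space[i].live && strcmp(space[i].tag, tag) == 0) {
            *value = space[i].value;
            return 1;
        }
    }
    return 0;   /* a real Linda in/rd would block here instead */
}

/* in("tag", ?v): withdraw a matching tuple; returns 0 if none */
static int ts_in(const char *tag, int *value) {
    for (int i = 0; i < MAX_TUPLES; i++) {
        if (space[i].live && strcmp(space[i].tag, tag) == 0) {
            *value = space[i].value;
            space[i].live = 0;
            return 1;
        }
    }
    return 0;
}

int main(void) {
    int x;
    ts_out("task", 7);                                        /* producer deposits work */
    if (ts_rd("task", &x)) printf("rd  saw task %d\n", x);    /* tuple still in space */
    if (ts_in("task", &x)) printf("in  took task %d\n", x);   /* tuple now removed */
    if (!ts_rd("task", &x)) printf("tuple space is empty\n");
    return 0;
}

In a real Linda program these operations run in concurrent processes: eval spawns workers, and the blocking behavior of in and rd provides the synchronization.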
Linda in Context
Linda consists of a few simple operations that embody the tuple space model of parallel programming. Adding these tuple-space operations to a base language yields a parallel programming dialect.
The Linda Machine
TLDR
The Linda Machine project is described, covering the machine’s special-purpose communication network and its associated protocols, the design of the Linda coprocessor, and the way its interaction with the network supports global access to tuple space.
Linda meets Unix
TLDR
The limitations of the shared-memory and distributed-memory models for explicit parallel programming are discussed; a new model, the Linda parallel communication paradigm, designed specifically for parallel programming, is examined; and a specific instance, QIX, is presented.
p4-Linda: a portable implementation of Linda
TLDR
The authors provide two implementations of Linda in an attempt to support a single high-level programming model on top of the existing paradigms and to provide consistent semantics regardless of the underlying model.
Implementing Linda for distributed and parallel processing
In a recent paper [17], we described experiments using the VAX LINDA system. VAX LINDA allows a single application program to utilize many machines on a network simultaneously. Applications…
Linda and parallel computing-running efficiently on parallel time
Linda, a set of commands that can be added to an arbitrary programming language N to form the N-Linda parallel programming language, is described. Linda enables users to use parallelism efficiently…
Calypso: An Environment for Reliable Distributed Parallel Processing
The importance of adapting networks of workstations for use as parallel processing platforms is well established. However, current solutions do not always satisfactorily address important issues that…
Supporting Fault-Tolerant Parallel Programming in Linda
Linda is a language for programming parallel applications whose most notable feature is a distributed shared memory called tuple space. While suitable for a wide variety of programs, one shortcoming…
Supporting Fault-Tolerant Parallel Programming in Linda
TLDR
FT-Linda is described, a version of Linda that addresses this problem by providing two major enhancements that facilitate the writing of fault-tolerant applications: stable tuple spaces and atomic execution of tuple space operations.
Applications experience with Linda
TLDR
Three experiments using C-Linda to write parallel codes demonstrate Linda's flexibility and bolster the claim that Linda can bridge the gap between the growing collection of parallel hardware and users eager to exploit parallelism.
