This practical, "how-to" book focuses on the use of concurrency to implement naturally concurrent applications, and presents three extended examples using CML for practical systems programming.
This paper describes an execution model for programs that use pointer-based dynamic data structures; the model migrates a thread of control using a simple mechanism based on the layout of heap-allocated data, and it introduces parallelism using a technique based on futures and lazy task creation.
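The future idiom underlying that technique can be sketched briefly. The fragment below is a minimal illustration using OCaml 5 and the domainslib library (an assumption made here for illustration; it is not the paper's runtime, and the lazy-task-creation machinery is not shown): work is spawned as a future, the parent continues, and the result is demanded later.

    (* Illustrative sketch only: assumes OCaml 5 and the domainslib package. *)
    let fib n =
      let rec go n = if n < 2 then n else go (n - 1) + go (n - 2) in
      go n

    let () =
      let pool = Domainslib.Task.setup_pool ~num_domains:3 () in
      Domainslib.Task.run pool (fun () ->
        (* [async] creates a future: the computation may run in parallel. *)
        let fut = Domainslib.Task.async pool (fun () -> fib 35) in
        (* The parent continues with other work ... *)
        let here = fib 30 in
        (* ... and [await] forces the future when its value is needed. *)
        Printf.printf "%d %d\n" here (Domainslib.Task.await pool fut));
      Domainslib.Task.teardown_pool pool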
This paper presents a port of the NESL implementation to GPUs and provides empirical evidence that nested data-parallelism (NDP) on GPUs significantly outperforms CPU-based implementations and matches or beats newer GPU languages that support only flat parallelism.
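To make the nested data-parallel model concrete, the sketch below shows the flat "segment lengths plus data" representation that NDP flattening produces; a segmented operation then works in a single flat pass, which is what makes it GPU-friendly. The OCaml types and names are illustrative only and are not taken from the NESL or GPU implementation.

    (* A ragged nested array such as [[1;2;3]; []; [4;5]] stored flatly. *)
    type 'a nested = { segs : int array; data : 'a array }

    let of_lists (xss : 'a list list) : 'a nested =
      { segs = Array.of_list (List.map List.length xss);
        data = Array.of_list (List.concat xss) }

    (* Apply [f] to every element of every segment in one flat pass. *)
    let seg_map f { segs; data } = { segs; data = Array.map f data }

    let () =
      let n  = of_lists [ [1; 2; 3]; []; [4; 5] ] in
      let n' = seg_map (fun x -> x * x) n in
      Array.iter (Printf.printf "%d ") n'.data   (* prints: 1 4 9 16 25 *)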
This dissertation presents an approach to concurrent language design that provides a new form of linguistic support for constructing concurrent applications: synchronous operations are treated as first-class values, analogous to the treatment of functions in languages such as ML.
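OCaml's Event library (part of its threads support) is modeled on CML, so it can illustrate the idea of first-class synchronous operations: an event is an ordinary value that can be combined with choose and wrap before sync commits to a single communication. A minimal sketch, assuming the threads library is linked:

    (* First-class synchronous operations in the CML style, via OCaml's
       Event module.  Nothing happens until [sync] is applied. *)
    let () =
      let c1 = Event.new_channel () and c2 = Event.new_channel () in
      let _producer =
        Thread.create (fun () -> Event.sync (Event.send c1 "hello")) () in
      (* A composite event: receive on whichever channel is ready first,
         tagging the result with its source. *)
      let ev =
        Event.choose
          [ Event.wrap (Event.receive c1) (fun s -> "c1: " ^ s);
            Event.wrap (Event.receive c2) (fun s -> "c2: " ^ s) ]
      in
      print_endline (Event.sync ev)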
This paper presents a lightweight mechanism for specifying and reusing member-level structure in Java programs, introducing a hybrid structural/nominal type system that extends Java's type system.
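The Java mechanism itself is not reproduced here, but the structural/nominal contrast can be sketched in OCaml, whose object types are structural: the hypothetical describe function below accepts any object that has the required members, regardless of the class it was declared with.

    class point (x : int) (y : int) = object
      method x = x
      method y = y
    end

    class pixel (x : int) (y : int) (color : string) = object
      inherit point x y
      method color = color
    end

    (* Structural requirement: any object with integer members x and y.
       Inferred type: < x : int; y : int; .. > -> string *)
    let describe o = Printf.sprintf "(%d, %d)" o#x o#y

    let () =
      print_endline (describe (new point 1 2));
      print_endline (describe (new pixel 3 4 "red"))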
The language design presented in this paper relies on a rich ML-style module system to provide features such as visibility control and parameterization, paired with a minimal class mechanism that includes only those features needed to support inheritance.
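The paper's concrete syntax is not shown here; the OCaml sketch below (with illustrative names) only conveys the design point that visibility is controlled by the module signature rather than by the class, while the class mechanism itself supplies inheritance: the instance variables are hidden by the signature, yet the class can still be extended inside the module.

    module Counter : sig
      class counter : object
        method value : int
        method incr : unit
      end
      class logged_counter : object
        inherit counter
        method log : string list
      end
    end = struct
      class counter = object
        val mutable n = 0                    (* hidden by the signature *)
        method value = n
        method incr = n <- n + 1
      end
      class logged_counter = object
        inherit counter as super
        val mutable events = []              (* also hidden *)
        method log = List.rev events
        method incr = (events <- "incr" :: events; super#incr)
      end
    end

    let () =
      let c = new Counter.logged_counter in
      c#incr; c#incr;
      Printf.printf "%d [%s]\n" c#value (String.concat "; " c#log)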
This work extends earlier work on synchronous exceptions in Haskell to support asynchronous exceptions, introducing scoped combinators for blocking and unblocking asynchronous interrupts, along with a somewhat surprising semantics for operations that can suspend.
This paper presents Manticore, a language for building parallel applications on commodity multicore hardware that includes a diverse collection of parallel constructs for different granularities of work, and focuses on the language's implicitly-threaded parallel constructs.
This paper reexamines regular-expression derivatives and reports on experience with them in the context of two different functional-language implementations, showing that the derivatives approach leads to smaller state machines than the traditional algorithm given by McNaughton and Yamada.
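The construction is easy to state: the derivative of a regular expression with respect to a character c is a regular expression for the remainders of its strings that begin with c, so matching is iterated differentiation, and the set of distinct derivatives (modulo simplification) yields the states of a DFA. A minimal OCaml sketch of Brzozowski derivatives, omitting the simplification rules needed to keep the set of derivatives finite and the extensions the paper discusses:

    type re =
      | Empty                       (* matches nothing *)
      | Eps                         (* matches the empty string *)
      | Chr of char
      | Seq of re * re
      | Alt of re * re
      | Star of re

    (* Does [r] accept the empty string? *)
    let rec nullable = function
      | Empty | Chr _ -> false
      | Eps | Star _ -> true
      | Seq (a, b) -> nullable a && nullable b
      | Alt (a, b) -> nullable a || nullable b

    (* Derivative of a regular expression with respect to character [c]. *)
    let rec deriv c = function
      | Empty | Eps -> Empty
      | Chr c' -> if Char.equal c c' then Eps else Empty
      | Alt (a, b) -> Alt (deriv c a, deriv c b)
      | Seq (a, b) ->
          let d = Seq (deriv c a, b) in
          if nullable a then Alt (d, deriv c b) else d
      | Star a as r -> Seq (deriv c a, r)

    (* A string matches iff the iterated derivative is nullable. *)
    let matches r s =
      let rec go r i =
        if i = String.length s then nullable r
        else go (deriv s.[i] r) (i + 1)
      in
      go r 0

    let () =
      let r = Seq (Star (Alt (Chr 'a', Chr 'b')), Chr 'c') in   (* (a|b)*c *)
      Printf.printf "%b %b\n" (matches r "abbac") (matches r "abba")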