Arrows for Parallel Computation
@article{Oberknig2018ArrowsFP,
  title   = {Arrows for Parallel Computation},
  author  = {Martin Oberk{\"o}nig and Oleg Lobachev and Philip W. Trinder},
  journal = {ArXiv},
  year    = {2018},
  volume  = {abs/1801.02216}
}
Arrows are a general interface for computation and an alternative to Monads for API design. In contrast to Monad-based parallelism, we explore the use of Arrows for specifying generalised parallelism. Specifically, we define an Arrow-based language and implement it using multiple parallel Haskells.
Because each parallel computation is itself an Arrow, such parallel Arrows (PArrows) can be readily composed and transformed. To allow for more sophisticated communication schemes between computation…
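To make the abstract's idea concrete, here is a minimal sketch of an Arrow-based parallel interface. The class and method names (`ArrowParallel`, `parEvalN`) follow the naming used in the PArrows paper, but this toy instance for plain functions evaluates sequentially; in the paper, backends such as GpH, the Par monad, or Eden supply actual parallelism.

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
-- Sketch of an Arrow-based parallel interface: a list of computations
-- is turned into one computation over a list of inputs.
import Control.Arrow

class Arrow arr => ArrowParallel arr a b where
  -- Evaluate a list of computations "in parallel".
  parEvalN :: [arr a b] -> arr [a] [b]

-- Trivial sequential instance for the function Arrow (->); a real
-- backend would spark or distribute the individual applications.
instance ArrowParallel (->) a b where
  parEvalN fs = \xs -> zipWith ($) fs xs

-- Because each parallel computation is an Arrow, it composes like one:
doubleThenShow :: [Int] -> [String]
doubleThenShow = parEvalN (replicate 3 (* 2)) >>> map show

main :: IO ()
main = print (doubleThenShow [1, 2, 3])  -- ["2","4","6"]
```

The key design point is that `parEvalN` returns an ordinary Arrow, so it can be chained with `>>>`, paired with `***`, and so on, exactly like any sequential computation.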
References
Showing 1-10 of 104 references
A monad for deterministic parallelism
- Computer Science, Haskell '11
- 2011
A complete work-stealing scheduler implemented as a Haskell library is presented and shown to perform at least as well as the existing parallel programming models in Haskell.
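The Par monad library (`Control.Monad.Par` from the `monad-par` package) is built on futures filled by a work-stealing scheduler. The following base-only sketch illustrates that future pattern with `forkIO` and `MVar`; it is illustrative only and deliberately avoids the real library's API, which adds a deterministic scheduler on top.

```haskell
-- Base-only sketch of the future pattern underlying the Par monad:
-- spawn a computation on another thread, block on its result later.
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

spawnFuture :: a -> IO (MVar a)
spawnFuture x = do
  v <- newEmptyMVar
  _ <- forkIO (putMVar v $! x)  -- force the value on the worker thread
  return v

main :: IO ()
main = do
  a  <- spawnFuture (sum [1 .. 100 :: Int])
  b  <- spawnFuture (product [1 .. 5 :: Int])
  ra <- takeMVar a              -- blocks until the worker has finished
  rb <- takeMVar b
  print (ra, rb)                -- (5050,120)
```

Unlike this `IO`-based sketch, the Par monad guarantees determinism: its IVars are write-once, so a program's result cannot depend on scheduling.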
Efficient Parallel Programming with Algorithmic Skeletons
- Computer Science, Euro-Par, Vol. I
- 1996
A new approach to programming with skeletons is presented, which integrates the skeletons into an imperative host language enhanced with higher-order functions and currying, as well as with a polymorphic type system, to obtain a high-level programming language which can be implemented very efficiently.
Implementation and evaluation of algorithmic skeletons: parallelisation of computer algebra algorithms
- Computer Science
- 2011
This thesis presents design and implementation approaches for the parallel algorithms of computer algebra, and presents a parallel map+reduce skeleton scheme that combines the usual parallel map skeletons, such as parMap, farm, and workpool, with a premature-termination property.
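For readers unfamiliar with the skeletons named above, here is a hedged, sequential sketch of the classic `farm` shape: split the input into one chunk per worker, map the function over each chunk, and recombine. The chunking policy shown is one plausible choice; a parallel Haskell would evaluate the chunks on separate processors.

```haskell
-- Sequential sketch of a farm skeleton: chunk, map, recombine.
import Data.List (unfoldr)

chunksOf :: Int -> [a] -> [[a]]
chunksOf n = unfoldr (\xs -> if null xs then Nothing else Just (splitAt n xs))

farm :: Int -> (a -> b) -> [a] -> [b]
farm nWorkers f xs = concatMap (map f) (chunksOf chunkSize xs)
  where
    -- ceiling division: spread the input evenly over the workers
    chunkSize = max 1 ((length xs + nWorkers - 1) `div` nWorkers)

main :: IO ()
main = print (farm 4 (^ 2) [1 .. 10 :: Int])  -- [1,4,9,16,25,36,49,64,81,100]
```

A workpool skeleton differs only in scheduling: instead of fixed chunks, idle workers pull the next task dynamically, which balances load for irregular tasks.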
Algorithm + strategy = parallelism
- Computer Science, Journal of Functional Programming
- 1998
Evaluation strategies are introduced: lazy higher-order functions that control parallel evaluation in non-strict functional languages. Strategies can model a wide range of commonly used programming paradigms, including divide-and-conquer parallelism, pipeline parallelism, producer/consumer parallelism and data-oriented parallelism.
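The Strategies library itself lives in the `parallel` package (`Control.Parallel.Strategies`, with combinators such as `parList` and `using`); the sketch below instead uses the raw `par`/`pseq` primitives from `GHC.Conc` in base, which is the layer those combinators are built on.

```haskell
-- Minimal strategies-style parallel map using the raw primitives.
-- `par` sparks its first argument for possible parallel evaluation;
-- `pseq` forces left-to-right evaluation order.
import GHC.Conc (par, pseq)

parMapWHNF :: (a -> b) -> [a] -> [b]
parMapWHNF _ []       = []
parMapWHNF f (x : xs) = y `par` (ys `pseq` (y : ys))
  where
    y  = f x              -- sparked: may run on another capability
    ys = parMapWHNF f xs  -- forced before the cons cell is returned

main :: IO ()
main = print (sum (parMapWHNF (* 2) [1 .. 100 :: Int]))  -- 10100
```

The payoff of the strategies approach is separation of concerns: `parMapWHNF f xs` computes exactly what `map f xs` computes, and the `par`/`pseq` annotations only describe how to evaluate it.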
HPorter: Using Arrows to Compose Parallel Processes
- Computer Science, PADL
- 2007
HPorter is a DSL embedded in Haskell for composing processes running on a parallel computer; it permits reconfiguration of these tightly-coupled processes at any time, providing a degree of dynamism that is critical in many applications.
Eden - Parallel Functional Programming with Haskell
- Computer Science, CEFP
- 2011
This tutorial gives an up-to-date introduction into Eden's programming methodology based on algorithmic skeletons, its language constructs, and its layered implementation on top of the Glasgow Haskell compiler.
Parallel functional programming in Eden
- Computer Science, J. Funct. Program.
- 2005
The paper gives a comprehensive description of Eden, its semantics, its skeleton-based programming methodology – which is applied in three case studies – its implementation and performance, and points at many additional results that have been achieved in the context of the Eden project.
A Skeleton Library
- Computer Science, Euro-Par
- 2002
The idea is to offer typical parallel programming patterns as polymorphic higher-order functions that are efficiently implemented in parallel, packaged as a library that can easily be used by, e.g., C and C++ programmers.
A new notation for arrows
- Computer Science, ICFP '01
- 2001
This paper defines a simple extension to the functional language Haskell that makes new notions of computation more convenient to use, and is extensible, in the sense that new combining forms can be defined as expressions in the host language.
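The notation this paper introduced is available in GHC as the `Arrows` extension. A small example of the `proc` syntax, instantiated here at the ordinary function Arrow (it works for any `Arrow` instance):

```haskell
{-# LANGUAGE Arrows #-}
-- Arrow (proc) notation: name intermediate results point-wise, as in
-- monadic do-notation, but for any Arrow.
import Control.Arrow

-- Mean of a list, written in arrow notation.
meanA :: Arrow arr => arr [Double] Double
meanA = proc xs -> do
  s <- arr sum    -< xs   -- feed xs into the sum arrow
  n <- arr length -< xs   -- and, separately, into the length arrow
  returnA -< s / fromIntegral n

main :: IO ()
main = print (meanA [1, 2, 3, 4])  -- 2.5
```

GHC desugars each `pat <- a -< e` command into the `arr`, `>>>`, and `&&&` combinators, so the notation adds convenience without changing the underlying Arrow interface.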
Parallel and Concurrent Programming in Haskell: Techniques for Multicore and Multithreaded Programming
- Computer Science
- 2013
This hands-on book shows you how to use the language's many APIs and frameworks for writing both parallel and concurrent programs, and how parallelism exploits multicore processors to speed up computation-heavy programs.