Applications and libraries/Concurrency and parallelism

Concurrent and Parallel Programming

Haskell has been designed for parallel and concurrent programming since its inception. In particular, Haskell's purity greatly simplifies reasoning about parallel programs. This page lists libraries and extensions for programming concurrent and parallel applications in Haskell. See also the parallel portal for research papers and tutorials on parallel and concurrent Haskell.

Parallelism

Parallel Strategies

Strategies provide a high-level compositional API for parallel programming.
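For example, a list can be evaluated in parallel simply by attaching a strategy to an ordinary map. A minimal sketch using the parallel package (compile with -threaded and run with +RTS -N):

import Control.Parallel.Strategies (parList, rdeepseq, using)

-- A deliberately expensive pure function.
expensive :: Int -> Int
expensive n = sum [1 .. n * 10000]

-- Evaluate every list element in parallel, forcing each result fully.
results :: [Int]
results = map expensive [1 .. 100] `using` parList rdeepseq

main :: IO ()
main = print (sum results)

The meaning of the program is unchanged; the strategy only describes how the list is evaluated.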

Monad-par

Monad-par
An alternative parallel programming API to that provided by the parallel package. The Par monad allows the simple description of parallel computations, and can be used to add parallelism to pure Haskell code. The basic API is straightforward: the monad supports forking and simple communication in terms of IVars.
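A minimal sketch of that API, assuming the monad-par package: two computations are forked, each writes its result to an IVar, and the caller blocks on get until both are filled.

import Control.Monad.Par (runPar, fork, new, put, get)

example :: Int
example = runPar $ do
  a <- new                              -- empty IVars for communication
  b <- new
  fork (put a (sum [1 .. 1000000]))     -- runs in parallel with the rest
  fork (put b (product [1 .. 20]))
  x <- get a                            -- blocks until the IVar is filled
  y <- get b
  return (x + y)

main :: IO ()
main = print example

Note that runPar returns a pure result, so parallelism can be added to pure code without changing its type.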

Data Parallel Haskell

Data Parallel Haskell
Implicitly parallel, high-performance (nested) arrays for programming large multicore machines.

Research tools

Feedback-directed implicit parallelism
Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity (FDIP paper)
GpH: Glasgow Parallel Haskell
A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime system and language to improve performance and support new platforms are under development.
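GpH's core combinators, par and pseq, are also exposed by GHC's SMP runtime through Control.Parallel, so the style can be tried without a special build. A rough sketch (compile with -threaded):

import Control.Parallel (par, pseq)

-- Naive Fibonacci with one recursive call sparked for parallel evaluation.
pfib :: Int -> Integer
pfib n
  | n < 25    = sfib n                       -- sequential cutoff keeps sparks coarse
  | otherwise = x `par` (y `pseq` (x + y))   -- spark x, evaluate y, then combine
  where
    x = pfib (n - 1)
    y = pfib (n - 2)

-- Plain sequential version used below the cutoff.
sfib :: Int -> Integer
sfib n
  | n < 2     = fromIntegral n
  | otherwise = sfib (n - 1) + sfib (n - 2)

main :: IO ()
main = print (pfib 35)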

Concurrency

Concurrent Haskell
GHC has supported concurrency with lightweight threads for more than a decade, and its implementation is very fast. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls. A minimal example is sketched below.
Software Transactional Memory
GHC supports a sophisticated implementation of software transactional memory. Software Transactional Memory (STM) is a composable way to coordinate concurrent threads: a block of memory operations runs atomically, committing or retrying as a unit. A sketch also follows below.
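As a minimal sketch of the concurrency primitives, the following program forks a lightweight thread and waits for its result through an MVar (compile with -threaded):

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  done <- newEmptyMVar
  _ <- forkIO $ putMVar done (sum [1 .. 1000000 :: Int])  -- worker thread
  result <- takeMVar done                                 -- blocks until the worker finishes
  print result

And a comparable sketch of STM, assuming the stm package: a transfer between two TVars runs atomically, so no other thread can ever observe an intermediate state.

import Control.Concurrent.STM (atomically, newTVarIO, readTVar, writeTVar)

main :: IO ()
main = do
  from <- newTVarIO (100 :: Int)
  to   <- newTVarIO (0 :: Int)
  atomically $ do                 -- the whole block commits or retries as a unit
    x <- readTVar from
    writeTVar from (x - 30)
    y <- readTVar to
    writeTVar to (y + 30)
  balances <- atomically ((,) <$> readTVar from <*> readTVar to)
  print balances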

Actors

Actors with multi-headed receive clauses
Actor-based concurrency for Haskell
CHP: Communicating Haskell Processes
CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This is a very composable model that also allows choice on communications, so that a process may offer to either read on one channel or write on another, but will only take the first that becomes available.

Helper tools

Wrapped Concurrency
A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.

Distributed programming

MPI

hMPI
hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard, versions 1.1/1.2. The programmer has full control over the communication between the nodes of a cluster.
Haskell-MPI
Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.

Distributed Haskell

GdH: Glasgow Distributed Haskell
GdH supports distributed stateful interactions across multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former across the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.
Eden
Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.

Research tools

HCPN: Haskell-Coloured Petri Nets
Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri Nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives us a hybrid graphical/textual modelling formalism for Haskell, especially suited for modelling concurrent and distributed systems.