''Revision as of 07:29, 26 April 2012''

Concurrent and Parallel Programming

Haskell has been designed for parallel and concurrent programming since its inception. In particular, Haskell's purity greatly simplifies reasoning about parallel programs. This page lists libraries and extensions for programming concurrent and parallel applications in Haskell. See also the parallel portal for research papers and tutorials on parallel and concurrent Haskell.

== Parallelism ==

=== Parallel Strategies ===

Strategies provide a high-level, compositional API for parallel programming.

* [http://hackage.haskell.org/package/parallel The parallel package on Hackage]
* [http://hackage.haskell.org/packages/archive/parallel/3.1.0.1/doc/html/Control-Parallel-Strategies.html Control.Parallel.Strategies documentation]
* Latest paper: [http://research.microsoft.com/apps/pubs/default.aspx?id=138042 Seq no more: Better Strategies for Parallel Haskell]
* Original paper: [http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html Algorithm + Strategy = Parallelism]
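As a small illustration of the Strategies style (a sketch using the parallel package's <tt>Control.Parallel.Strategies</tt> module; the function names <tt>sumTo</tt> and <tt>parSums</tt> are just illustrative):

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- An arbitrary "expensive" function to map in parallel.
sumTo :: Int -> Int
sumTo n = sum [1 .. n]

-- parMap sparks one parallel evaluation per list element;
-- rdeepseq forces each result all the way to normal form.
parSums :: [Int] -> [Int]
parSums = parMap rdeepseq sumTo
```

Compile with <tt>-threaded</tt> and run with <tt>+RTS -N</tt> to use multiple cores; without those flags the program still runs correctly, just sequentially.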

=== Data Parallel Haskell ===

;[http://haskell.org/haskellwiki/GHC/Data_Parallel_Haskell Data Parallel Haskell]
:Implicitly parallel, high-performance (nested) arrays, supporting large-scale multicore programming.

=== Low-level parallelism: par and pseq ===

The Control.Parallel module provides the low-level operations for parallelism on which Strategies are built.

* [http://hackage.haskell.org/packages/archive/parallel/latest/doc/html/Control-Parallel.html Control.Parallel documentation]
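The classic usage pattern looks like this (a minimal sketch; <tt>parSum</tt> is an illustrative name, not part of the library):

```haskell
import Control.Parallel (par, pseq)

-- `a `par` b` sparks a in parallel and returns b;
-- `b `pseq` c` evaluates b before returning c, so here the two
-- sums can proceed on different cores before being combined.
parSum :: Int -> Int -> Int
parSum x y = a `par` (b `pseq` (a + b))
  where
    a = sum [1 .. x]
    b = sum [1 .. y]
```

Note that <tt>par</tt> is only an advisory hint: the runtime may ignore the spark, so the result is the same with or without parallelism.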

=== Feedback-directed implicit parallelism ===

* [http://research.microsoft.com/~tharris/papers/2007-fdip.pdf Implicit parallelism in Haskell, and a feedback-directed mechanism to increase its granularity]

=== Glasgow Parallel Haskell ===

''EYK: is this redundant with Strategies?''

;[http://www.macs.hw.ac.uk/~dsg/gph/ GpH: Glasgow Parallel Haskell]
:A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime system and language to improve performance and support new platforms are under development.

== Concurrency ==

;[[Concurrency|Concurrent Haskell]]
:GHC has supported concurrency with lightweight threads for more than a decade, and it [http://shootout.alioth.debian.org/gp4/benchmark.php?test=threadring&lang=all is very fast]. Threads in Haskell are preemptively scheduled and support everything you would normally expect from threads, including blocking I/O and foreign calls.

* [http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent.html Documentation]
* [http://haskell.org/haskellwiki/Concurrency_demos Examples]
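A minimal Concurrent Haskell sketch: fork a lightweight thread with <tt>forkIO</tt> and hand its result back through an <tt>MVar</tt> (the name <tt>inParallel</tt> is illustrative):

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Fork a worker thread; the caller blocks in takeMVar until the
-- worker has put its result into the (initially empty) MVar.
inParallel :: IO Int
inParallel = do
  box <- newEmptyMVar
  _ <- forkIO (putMVar box (sum [1 .. 100 :: Int]))
  takeMVar box
```

Because the MVar starts empty, <tt>takeMVar</tt> doubles as synchronisation: the main thread cannot proceed until the forked thread has finished.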

=== Software transactional memory ===

;[[Software transactional memory|Software Transactional Memory]]
:GHC supports a sophisticated version of software transactional memory (STM), a composable way to coordinate concurrent threads.

* [http://hackage.haskell.org/package/stm Documentation]
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/stm.pdf Composable memory transactions]
* The paper [http://research.microsoft.com/en-us/um/people/simonpj/papers/stm/lock-free-flops06.pdf Lock-free data structures using Software Transactional Memory in Haskell] gives further examples of concurrent programming using STM.
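The canonical STM example is an atomic transfer between two accounts, sketched here with the stm package (the <tt>transfer</tt> name is illustrative):

```haskell
import Control.Concurrent.STM
  (TVar, atomically, newTVarIO, readTVar, writeTVar)

-- Move n units between two accounts. The whole block runs as one
-- transaction: other threads never observe a state where the money
-- has left `from` but not yet arrived in `to`.
transfer :: TVar Int -> TVar Int -> Int -> IO ()
transfer from to n = atomically $ do
  a <- readTVar from
  b <- readTVar to
  writeTVar from (a - n)
  writeTVar to (b + n)
```

Transactions compose: two calls to <tt>transfer</tt> can themselves be wrapped in a single <tt>atomically</tt> block by factoring the body into an <tt>STM</tt> action, something lock-based code cannot do safely.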

=== Transactional events ===

;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/transactional-events Transactional events, based on Concurrent ML]
:Transactional events for Haskell.
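The stm package itself offers a CML-flavoured notion of choice via <tt>orElse</tt>; the following is a sketch of that idea, not the transactional-events package's own API (the name <tt>awaitEither</tt> is illustrative):

```haskell
import Control.Concurrent.STM
  (STM, TVar, atomically, newTVarIO, orElse, readTVar, retry)

-- Wait for whichever variable is filled first: if reading the
-- first one retries (still Nothing), orElse falls through and
-- tries the second, blocking until either holds a value.
awaitEither :: TVar (Maybe a) -> TVar (Maybe a) -> STM a
awaitEither t1 t2 = await t1 `orElse` await t2
  where
    await t = readTVar t >>= maybe retry pure
```

This is the same "offer several communications, commit to one" pattern that Concurrent ML events and the transactional-events package generalise.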

=== Helper tools ===

;[[WrapConc|Wrapped Concurrency]]
:A wrapper around Control.Concurrent and Control.Exception that provides versions of forkIO with stronger guarantees.

=== Actors ===

;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/actor Actors with multi-headed receive clauses]
:Actor-based concurrency for Haskell.

;[http://www.cs.kent.ac.uk/projects/ofa/chp/ CHP: Communicating Haskell Processes]
:CHP is built on the ideas of CSP (Communicating Sequential Processes), featuring encapsulated parallel processes (no shared data!) communicating over synchronous channels. This model is very composable and also allows choice between communications, so that a process may offer either to read on one channel or to write on another, taking whichever is available first.
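The process-and-channel style can be roughly sketched in plain Concurrent Haskell; this is only an analogy using base's <tt>MVar</tt> as a one-place channel, not CHP's own API (which adds typed channels, choice, and poison-based shutdown), and the names <tt>producer</tt> and <tt>runPipeline</tt> are illustrative:

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, takeMVar)

-- Two encapsulated processes share no state and communicate only
-- over a one-place channel: putMVar blocks while the slot is full,
-- giving rendezvous-like handoff between producer and consumer.
producer :: MVar Int -> IO ()
producer ch = mapM_ (putMVar ch) [1 .. 5]

runPipeline :: IO Int
runPipeline = do
  ch <- newEmptyMVar
  _ <- forkIO (producer ch)
  xs <- sequence (replicate 5 (takeMVar ch))
  pure (sum xs)
```

What the sketch cannot express is CHP-style choice (offering to read on one channel ''or'' write on another); that is precisely what the library adds over raw MVars.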

== Distributed programming ==

=== MPI ===

;[http://www.foldr.org/~michaelw/hmpi/ hMPI]
:hMPI is an acronym for HaskellMPI. It is a Haskell binding conforming to the MPI (Message Passing Interface) standard 1.1/1.2. The programmer is in full control over the communication between the nodes of a cluster.

;[http://hackage.haskell.org/package/haskell-mpi Haskell-MPI]
:Haskell-MPI provides a Haskell interface to MPI, built on top of the foreign function interface. It is notionally a descendant of hMPI, but is mostly a rewrite.

=== Distributed Haskell ===

;[http://www.macs.hw.ac.uk/~dsg/gdh/ GdH: Glasgow Distributed Haskell]
:GdH supports distributed stateful interactions on multiple locations. It is a conservative extension of both Concurrent Haskell and GpH, enabling the distribution of the stateful IO threads of the former over the multiple locations of the latter. The programming model includes forking stateful threads on remote locations, explicit communication over channels, and distributed exception handling.

;Mobile Haskell (mHaskell) (dead link)
:Mobile Haskell supports both strong and weak mobility of computations across open networks. The mobility primitives are higher-order polymorphic channels. Mid-level abstractions such as remote evaluation, analogous to Java RMI, are readily constructed. High-level mobility skeletons such as mobile map and mobile fold encapsulate common patterns of mobile computation.

;Eden
:Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.

=== Ports ===

;[http://hackage.haskell.org/cgi-bin/hackage-scripts/package/ports-0.4.3.2 The Haskell Ports Library (HPL)]
:Ports are an abstraction for modelling variables whose values evolve over time, without resorting to mutable variables such as IORefs. More precisely, a port represents all the values that a time-dependent variable successively takes as a stream, where each element of the stream corresponds to a state change; we can also say that a port represents a time series. Moreover, a port supports concurrent construction of this stream of values.

=== Modelling concurrent and distributed systems ===

;[http://www.cs.kent.ac.uk/~cr3/HCPN/ HCPN: Haskell-Coloured Petri Nets]
:Haskell-Coloured Petri Nets (HCPN) are an instance of high-level Petri nets, in which anonymous tokens are replaced by Haskell data objects (and transitions can operate on that data, in addition to moving it around). This gives a hybrid graphical/textual modelling formalism for Haskell, especially suited to modelling concurrent and distributed systems.