Data Parallel Haskell

Data Parallel Haskell is the codename for an extension to the Glasgow Haskell Compiler and its libraries to support nested data parallelism, with a focus on utilising multi-core CPUs. Nested data parallelism extends the programming model of flat data parallelism, as known from parallel Fortran dialects, to irregular parallel computations (such as divide-and-conquer algorithms) and irregular data structures (such as sparse matrices and tree structures). An introduction to nested data parallelism in Haskell, including some examples, can be found in the paper Nepal -- Nested Data-Parallelism in Haskell.
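
To give a feel for the programming model, the classic irregular example is sparse matrix-vector multiplication, where each row stores only its non-zero entries and rows therefore differ in length. The following sketch models it with ordinary nested lists; in Data Parallel Haskell the same shape of code is written over nested parallel arrays instead. The names below are illustrative, not part of the library:

  -- Sparse matrix-vector multiplication: each row keeps only its
  -- non-zero entries as (column index, value) pairs, so rows have
  -- irregular lengths, exactly the nesting that flat data
  -- parallelism cannot express directly.
  type SparseRow    = [(Int, Double)]
  type SparseMatrix = [SparseRow]

  smvm :: SparseMatrix -> [Double] -> [Double]
  smvm m v = [ sum [ x * (v !! i) | (i, x) <- row ] | row <- m ]

  -- smvm [[(0,2.0)], [(0,1.0),(2,3.0)], []] [1.0,4.0,5.0]
  --   == [2.0,16.0,0.0]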

Data Parallel Haskell is very much work in progress. Some components are already usable, and the rest of this page describes how to use them. However, please be aware that APIs are still in flux and some functionality may be broken at times during development.

Implementation of nested data parallelism in two stages

As Data Parallel Haskell is not fully implemented at this stage, use of the existing components requires some knowledge of the structure of the implementation, which essentially consists of two parts:

  1. A concurrent, high-performance library of strict, segmented arrays. In contrast to Haskell 98 arrays, these arrays are strict in that when one element of an array is evaluated, all of them are; we also call this a parallel evaluation semantics. Moreover, all operations come in two flavours: one for plain, flat arrays and one for segmented arrays, which are a particular representation of arrays with one level of nesting. For example, a sum of a segmented array of numbers computes one sum for each segment (see the segmented-sum sketch after this list). The library has a purely functional interface, but internally uses monadic low-level array operations, array fusion, and many standard GHC optimisations to produce highly optimised code. Finally, the library uses GHC's SMP concurrency to parallelise array processing on hardware with multiple processing elements.
  2. A vectorising program transformation (in the compiler) that maps nested array code (of arbitrary nesting depth) to code using segmented arrays. Nested parallel arrays are as convenient to use as finite, eagerly evaluated nested lists. Code vectorisation transforms a nested parallel array program such that it operates on the segmented arrays described before; a sketch of what this transformation produces follows below. The resulting code would be tedious to write manually, but is much more efficient than a direct implementation of nested arrays. (As segmented arrays correspond to arrays with one level of nesting, vectorisation collapses arbitrary nesting to one level of nesting.)
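
To make the segmented representation from point 1 concrete, here is a minimal sketch using ordinary lists in place of the library's strict, unboxed arrays: a nested array is stored as flat data together with the length of each segment, and the per-segment sum walks the flat data once. Segmented and sumS are hypothetical names for illustration, not the library's API:

  -- A segmented array models one level of nesting: the nested value
  -- [[1,2],[3,4,5],[]] is stored as flat data [1,2,3,4,5] together
  -- with the segment lengths [2,3,0].
  data Segmented a = Segmented { segLengths :: [Int], flatData :: [a] }

  -- Per-segment sum: one result per segment.
  sumS :: Num a => Segmented a -> [a]
  sumS (Segmented []     _ ) = []
  sumS (Segmented (n:ns) xs) = sum ys : sumS (Segmented ns zs)
    where (ys, zs) = splitAt n xs

  -- sumS (Segmented [2,3,0] [1,2,3,4,5]) == [3,12,0]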

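Point 2 can likewise be illustrated by hand. Reusing the list-based Segmented representation and sumS from the sketch above, a vectorised form of smvm from the introduction computes all products in one flat pass and finishes with a single segmented sum; this is the shape of code the flattening transformation would produce, though the real library operates on unboxed arrays and fuses such passes:

  -- Vectorised sparse matrix-vector multiplication: the nested
  -- per-row computation collapses into one flat multiplication
  -- pass followed by a single segmented sum (sumS from above).
  smvmSeg :: Segmented (Int, Double) -> [Double] -> [Double]
  smvmSeg (Segmented ns flat) v =
    sumS (Segmented ns [ x * (v !! i) | (i, x) <- flat ])

  -- smvmSeg (Segmented [1,2,0] [(0,2.0),(0,1.0),(2,3.0)]) [1.0,4.0,5.0]
  --   == [2.0,16.0,0.0]
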
So far, we have implemented large parts of the concurrent array library (Part 1), but not the vectorisation transformation (Part 2). Hence, users must choose between the following two options: (1) the convenient programming model of arbitrarily nested, irregular array computations with a reasonably efficient, but not highly optimised and only purely sequential implementation; or (2) an aggressively optimising, concurrent array library with a less expressive API of segmented (instead of nested) arrays, but which is still purely functional.