GHC/Data Parallel Haskell

Data Parallel Haskell

Data Parallel Haskell is the codename for an extension to the Glasgow Haskell Compiler and its libraries to support nested data parallelism, with a focus on utilising multi-core CPUs. Nested data parallelism extends the programming model of flat data parallelism, as known from parallel Fortran dialects, to irregular parallel computations (such as divide-and-conquer algorithms) and irregular data structures (such as sparse matrices and tree structures). An introduction to nested data parallelism in Haskell, including some examples, can be found in the paper Nepal -- Nested Data-Parallelism in Haskell.

Overview

The design, some implementation details, and first results of Data Parallel Haskell are described in the paper Data Parallel Haskell: a status report. The same topic is covered in the slides for the two talks Nested data parallelism in Haskell and Compiling nested data parallelism by program transformation.

Data Parallel Haskell essentially extends GHC in two ways:

  1. by a concurrent, high-performance library of strict, segmented arrays, which we call package ndp, and
  2. by syntactic sugar for nested parallel arrays and array comprehensions, including a program transformation, called vectorisation, which maps nested array parallelism to the API provided by package ndp.

Both package ndp and the syntactic sugar are already usable, but as we have yet to implement vectorisation, they currently cannot be used together. To get an idea of code using array comprehensions, consider the following definition of the dot product of two vectors:

dotp :: Num a => [:a:] -> [:a:] -> a
dotp xs ys = sumP [:x * y | x <- xs | y <- ys:]

This code uses an array variant of parallel list comprehensions, but should otherwise be self-explanatory to any Haskell programmer.

In contrast, package ndp requires us to manually desugar the function definition to

dotp :: (Num a, UA a) => UArr a -> UArr a -> a
dotp xs ys = sumU (zipWithU (*) xs ys)

This is still pretty straightforward, combinator-based code. However, desugaring becomes more burdensome when algorithms use nested arrays.
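For example, sparse matrix vector multiplication, a standard example from the papers cited above, operates on a nested parallel array of (index, value) pairs. In the comprehension notation it remains a few lines, sketched below (the type synonym is ours, and we assume (!:) as the parallel array indexing operator from GHC.PArr), whereas a manual desugaring has to express the same computation with segmented array combinators from package ndp:

-- A sparse matrix as a nested parallel array: one inner array of
-- (column index, value) pairs per row, holding only the non-zero entries.
type SparseMatrix = [:[:(Int, Double):]:]

-- For every row, gather the matching vector elements, multiply, and sum.
smvm :: SparseMatrix -> [:Double:] -> [:Double:]
smvm m v = [: sumP [: x * (v !: i) | (i, x) <- row :] | row <- m :]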

Until we have added vectorisation to GHC, we recommend prototyping parallel algorithms using nested arrays and array comprehensions, and then manually desugaring them to use package ndp for performance. For more examples of programs using nested arrays and array comprehensions, refer to convenience without the speed; for more examples of programs using package ndp, refer to speed without the convenience.
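As a sketch of what the manual route looks like end to end, the following small driver wraps the ndp version of dotp from above; the module name Data.Array.Parallel.Unlifted and the list-conversion function toU reflect our reading of the package ndp sources and may need adjusting to match your checkout:

module Main where

-- Assumption: package ndp exposes its flat arrays via
-- Data.Array.Parallel.Unlifted, including a list conversion toU.
import Data.Array.Parallel.Unlifted

dotp :: (Num a, UA a) => UArr a -> UArr a -> a
dotp xs ys = sumU (zipWithU (*) xs ys)

main :: IO ()
main = print (dotp (toU [1 .. 1000 :: Double]) (toU [1 .. 1000]))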

Download

As we are still missing some parts of the implementation of Data Parallel Haskell (most importantly, the vectorisation transformation), you need to choose between the following two options:

Convenience without the speed
Nice surface syntax for parallel arrays and array comprehensions (very similar to that for Haskell lists) is available in all GHC releases of the 6.6 series; however, we strongly recommend 6.6.1 (or later) due to some bug fixes. To use parallel arrays, specify the option -fparr and import GHC.PArr. This option is convenient for implementing and testing parallel algorithms, but the code is executed purely sequentially, with limited performance (a complete module sketch follows this list). Further details, including example programs, are at convenience without the speed.
Speed without the convenience
Parallel high-performance arrays that use aggressive fusion and transparently utilise multi-core and SMP architectures are available in the library package ndp. In contrast to the previous option, there is no special syntactic sugar for array types and array comprehensions, and only flat and segmented arrays of basic types and pairs are available. The library is only provided in source form and needs to be compiled in a GHC build tree; however, this is easy to do. The details, including example programs, are at speed without the convenience.
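To make the first option concrete, the dot product from above can be wrapped into a complete module along the following lines (the module name, test values, and exact compile command are ours; we assume GHC 6.6.1):

-- DotP.hs: a minimal sketch; compile with something like
--   ghc -fparr --make DotP.hs
-- using GHC 6.6.1 or later.  The code runs sequentially via GHC.PArr.
module Main where

import GHC.PArr

-- Dot product over parallel arrays, zipping xs and ys elementwise
-- in an array comprehension.
dotp :: Num a => [:a:] -> [:a:] -> a
dotp xs ys = sumP [: x * y | x <- xs | y <- ys :]

main :: IO ()
main = print (dotp [:1, 2, 3:] [:4, 5, 6:])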

We are currently working on integrating these two components to finally provide convenience and speed in one system.

Disclaimer: Data Parallel Haskell is very much work in progress. Some components are already usable, and we explain here how to use them. However, please be aware that APIs are still in flux and functionality may change during development.

Further information

For further reading, refer to this collection of background papers and pointers to other people's work. If you are really curious and would like to know the implementation details and internals of the Data Parallel Haskell project, it is all on the GHC developer wiki, on the pages covering data parallelism and associated types.