# Numeric Haskell: A Repa Tutorial


## Revision as of 09:17, 5 October 2012

Note: This tutorial is for an old version of Repa. The current version (Repa 3.1) has a slightly different API. You can read more about Repa 3 in this paper.

Repa is a Haskell library for high performance, regular, multi-dimensional parallel arrays. All numeric data is stored unboxed and functions written with the Repa combinators are automatically parallel (provided you supply "+RTS -N" on the command line when running the program).
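For example, a typical build-and-run sequence looks like this (MyProg.hs is a placeholder for your own Repa program; -threaded and -rtsopts are needed so that the runtime accepts the -N flag):

```shell
# Compile with optimisations and the threaded runtime
# (-rtsopts lets us pass RTS flags at run time).
ghc -O2 -threaded -rtsopts --make MyProg.hs

# Run on 4 cores
./MyProg +RTS -N4
```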

This document provides a tutorial on array programming in Haskell using the repa package.

Note: a companion tutorial to this is provided as the vector tutorial, and is based on the NumPy tutorial.

Author: Don Stewart.

# 1 Quick Tour

Repa (REgular PArallel arrays) is an advanced, multi-dimensional parallel arrays library for Haskell, with a number of distinct capabilities:

• The arrays are "regular" (i.e. dense, rectangular, and store elements all of the same type);
• Functions may be written that are polymorphic in the shape of the array;
• Many operations on arrays are accomplished by changing only the shape of the array (without copying elements); and
• The library will automatically parallelize operations over arrays.

This is a quick start guide for the package. For further information, consult:

• Efficient Parallel Stencil Convolution in Haskell (http://www.cse.unsw.edu.au/~benl/papers/stencil/stencil-icfp2011-sub.pdf)

# 2 Importing the library

Download the repa package:

$ cabal install repa

and import it qualified:

import qualified Data.Array.Repa as R

The library needs to be imported qualified as it shares the same function names as list operations in the Prelude.

Note: Operations that involve writing new index types for Repa arrays will require the '-XTypeOperators' language extension.

For non-core functionality, a number of related packages are available (such as repa-io and repa-algorithms, used later in this tutorial), and example algorithms in the repa-examples package.

# 3 Index types and shapes

Before we can get started manipulating arrays, we need a grasp of repa's notion of array shape. Much like the classic 'array' library in Haskell, repa-based arrays are parameterized via a type which determines the dimension of the array, and the type of its index. However, while classic arrays take tuples to represent multiple dimensions, Repa arrays use a richer type language for describing multi-dimensional array indices and shapes (technically, a heterogeneous snoc list).

Shape types are built somewhat like lists. The constructor Z corresponds to a rank-zero shape, and is used to mark the end of the list. The :. constructor adds additional dimensions to the shape. So, for example, the shape:

(Z :. 3 :. 2 :. 3)

is the shape of a small 3D array, with shape type

(Z :. Int :. Int :. Int)

The most common dimensions are given by the shorthand names:

type DIM0 = Z
type DIM1 = DIM0 :. Int
type DIM2 = DIM1 :. Int
type DIM3 = DIM2 :. Int
type DIM4 = DIM3 :. Int
type DIM5 = DIM4 :. Int

Thus, Array U DIM2 Double is the type of a two-dimensional array of unboxed doubles, indexed via Int keys, while Array U Z Double is a zero-dimensional object (i.e. a point) holding an unboxed Double.

Many operations over arrays are polymorphic in the shape / dimension component. Others require operating on the shape itself, rather than the array. A typeclass, Shape, lets us operate uniformly over arrays with different shapes.

## 3.1 Building shapes

To build values of shape type, we can use the Z and :. constructors.
Open up ghci and import Repa:

Prelude> :m +Data.Array.Repa
Repa> Z    -- the zero-dimension shape
Z

For arrays of non-zero dimension, we must give a size. Note: a common error is to leave off the type of the size:

Repa> :t Z :. 10
Z :. 10 :: Num head => Z :. head

leading to annoying type errors about unresolved instances, such as:

No instance for (Shape (Z :. head0))

To select the correct instance, you will need to annotate the size literals with their concrete type:

Repa> :t Z :. (10 :: Int)
Z :. (10 :: Int) :: Z :. Int

which is the shape of 1D arrays of length 10, indexed via Ints. Given an array, you can always find its shape by calling extent.

Additional convenience types for selecting particular parts of a shape are also provided (All, Any, Slice etc.), which are covered later in the tutorial.

## 3.2 Working with shapes

That one key operation, extent, gives us many attributes of an array:

-- Extract the shape of the array
extent :: (Shape sh, Source r e) => Array r sh e -> sh

So, given a 3x3x3 array of type Array U DIM3 Int, we can:

-- build an array
Repa> let x :: Array U DIM3 Int; x = fromListUnboxed (Z :. (3::Int) :. (3::Int) :. (3::Int)) [1..27]
Repa> :t x
x :: Array U DIM3 Int

-- query the extent
Repa> extent x
((Z :. 3) :. 3) :. 3

-- compute the rank (number of dimensions)
Repa> let sh = extent x
Repa> rank sh
3

-- compute the size (total number of elements)
Repa> size sh
27

-- extract the elements of the array as a flat vector
Repa> toUnboxed x
fromList [1,2,3,4,5,6,7,8,9,10
,11,12,13,14,15,16,17,18,19
,20,21,22,23,24,25,26,27] :: Data.Vector.Unboxed.Base.Vector Int

# 4 Generating arrays

New repa arrays ("arrays" from here on) can be generated in many ways, and we always begin by importing the Data.Array.Repa module:

$ ghci
GHCi, version 7.4.1: http://www.haskell.org/ghc/  :? for help
Prelude> :m + Data.Array.Repa


They may be constructed from lists, for example. Here is a one-dimensional array of length 10, given the shape (Z :. 10):

Repa> let inputs = [1..10] :: [Double]
Repa> let x = fromListUnboxed (Z :. (10::Int)) inputs
Repa> x
AUnboxed (Z :. 10) (fromList [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0])

The type of x is inferred as:

Repa> :t x
x :: Array U (Z :. Int) Double

which we can read as "an array of dimension 1, indexed via Int keys, holding elements of type Double stored using unboxed vectors".

We could also have written the type as:

Repa> let x' = fromListUnboxed (Z :. 10 :: DIM1) inputs
Repa> :t x'
x' :: Array U DIM1 Double

The same data may also be treated as a two dimensional array, by changing the shape parameter:

Repa> let x2 = fromListUnboxed (Z :. (5::Int) :. (2::Int)) inputs
Repa> x2
AUnboxed ((Z :. 5) :. 2) (fromList [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0])

which has the type:

Repa> :t x2
x2 :: Array U ((Z :. Int) :. Int) Double

or, as above, if we define it with the type synonym for two-dimensional Int-indexed arrays, DIM2:

Repa> let x2' = fromListUnboxed (Z :. 5 :. 2 :: DIM2) inputs
Repa> x2'
AUnboxed ((Z :. 5) :. 2) (fromList [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0])
Repa> :t x2'
x2' :: Array U DIM2 Double

## 4.1 Building arrays from vectors

It is also possible to build arrays from unboxed vectors, from the 'vector' package:

fromUnboxed :: (Shape sh, Unbox e) => sh -> Vector e -> Array U sh e

New arrays are built by applying a shape to the vector. For example:

Repa> :m + Data.Vector.Unboxed
Repa Unboxed> let x = fromUnboxed (Z :. (10::Int)) (enumFromN 0 10)
Repa Unboxed> x
AUnboxed (Z :. 10) (fromList [0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0])

is a one-dimensional array of doubles. As usual, we can also impose other shapes:

Repa Unboxed> let x = fromUnboxed (Z :. (3::Int) :. (3::Int)) (enumFromN 0 9)
Repa Unboxed> x
AUnboxed ((Z :. 3) :. 3) (fromList [0.0,1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0])
Repa Unboxed> :t x
x :: Array U ((Z :. Int) :. Int) Double

to create a 3x3 array.

## 4.2 Generating random arrays

The repa-algorithms package lets us generate new arrays with random data:

-- 3d array of Ints, bounded between 0 and 255.
Repa Randomish> randomishIntArray (Z :. (3::Int) :. (3::Int) :. (3::Int)) 0 255 1
AUnboxed (((Z :. 3) :. 3) :. 3) (fromList [217,42,130,200,216,254,67,77,152,
85,140,226,179,71,23,17,152,84,47,17,45,5,88,245,107,214,136])

## 4.3 Reading arrays from files

Using the repa-io package, arrays may be written and read from files in a number of formats:

• as BMP files; and
• in a number of text formats.

with other formats rapidly appearing. An example: to write a 2D array to an ASCII file:

Repa> :m +Data.Array.Repa.IO.Matrix
Repa Matrix> let x = fromList (Z :. 5 :. 2 :: DIM2) [1..10]
Repa Matrix> writeMatrixToTextFile "test.dat" x

This will result in a file containing:

MATRIX
5 2
1.0
2.0
3.0
4.0
5.0
6.0
7.0
8.0
9.0
10.0


In turn, this file may be read back in via readMatrixFromTextFile.

Repa Matrix> xx <- readMatrixFromTextFile "test.dat"
Repa Matrix> xx
AUnboxed ((Z :. 5) :. 2) (fromList [1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0])
Repa Matrix> :t xx
xx :: Array U DIM2 Double

To process .bmp files, use Data.Array.Repa.IO.BMP, as follows (currently reading only works for 24 bit .bmp):

Data.Array.Repa.IO.BMP> x <- readImageFromBMP "/tmp/test24.bmp"

This reads the image as a 3D array of Word8, which can be further processed.

For image IO in many, many formats, use the repa-devil library.

## 4.4 Copying arrays from pointers

You can also generate new repa arrays by copying data from a pointer, using the repa-bytestring package. Here is an example, using copyFromPtrWord8:

import Data.Word
import Foreign.Ptr

import qualified Data.Vector.Storable       as V
import qualified Data.Array.Repa            as R
import Data.Array.Repa
import qualified Data.Array.Repa.ByteString as R

import Data.Array.Repa.IO.DevIL

i, j, k :: Int
(i, j, k) = (255, 255, 4 {-RGBA-})

-- 1d vector, filled with pretty colors
v :: V.Vector Word8
v = V.fromList
  . take (i * j * k)
  . cycle
  $ concat [ [ r, g, b, 255 ]
           | r <- [0 .. 255]
           , g <- [0 .. 255]
           , b <- [0 .. 255]
           ]

ptr2repa :: Ptr Word8 -> IO (R.Array R.DIM3 Word8)
ptr2repa p = R.copyFromPtrWord8 (Z :. i :. j :. k) p

main = do
    -- copy our 1d vector to a repa 3d array, via a pointer
    r <- V.unsafeWith v ptr2repa
    runIL $ writeImage "test.png" r
    return ()

This fills a vector, converts it to a pointer, then copies that pointer to a 3d array, before writing the result out as this image:

# 5 Indexing arrays

To access an element in a repa array, you provide the array and a shape as the index:

(!) :: (Shape sh, Elt a) => Array sh a -> sh -> a

> let x = fromList (Z :. (10::Int)) [1..10]
> x ! (Z :. 2)
3.0

Note that we can't give just a bare literal as the shape, even for one-dimensional arrays:

> x ! 2

No instance for (Num (Z :. Int))
  arising from the literal `2'

as the Z type isn't in the Num class, and Haskell's numeric literals are overloaded.
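To avoid annotating every literal, one workaround is a tiny smart constructor of our own that pins the index type once (the ix1 below is our own illustration; later Repa versions ship similar ix1/ix2 helpers):

```haskell
import Data.Array.Repa

-- Pin the index component to Int once, so that call sites
-- can use bare literals without ambiguous Num instances.
ix1 :: Int -> DIM1
ix1 n = Z :. n

-- usage:
--   x ! ix1 2     -- instead of x ! (Z :. (2 :: Int))
```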

What if the index is out of bounds, though?

> x ! (Z :. 11)
*** Exception: ./Data/Vector/Generic.hs:222 ((!)): index out of bounds (11,10)

an exception is thrown. An alternative is to use indexing functions that return a Maybe:

(!?) :: (Shape sh, Elt a) => Array sh a -> sh -> Maybe a

An example:

> x !? (Z :. 9)
Just 10.0

> x !? (Z :. 11)
Nothing
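Building on (!?), a total lookup with a fallback value is a one-liner; atWithDefault is our own name, not a library function, and the constraints follow the signatures quoted above:

```haskell
import Data.Array.Repa

-- Return the element at ix, or the supplied default
-- when ix is out of bounds.
atWithDefault :: (Shape sh, Elt a) => a -> Array sh a -> sh -> a
atWithDefault def arr ix = maybe def id (arr !? ix)

-- usage:
--   atWithDefault 0 x (Z :. 11)   -- 0.0 instead of an exception
```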

# 6 Operations on arrays

Besides indexing, there are many regular, list-like operations on arrays. Since many of the names parallel those in the Prelude, we import Repa qualified:

Repa> import qualified Data.Array.Repa as Repa


## 6.1 Maps, zips, filters and folds

We can map over multi-dimensional arrays:

Repa> let x = fromList (Z :. (3::Int) :. (3::Int)) [1..9]
Repa> x
[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0]


Since this map conflicts with the definition in the Prelude, we have to use it with the qualifier we requested:

Repa> Repa.map (^2) x
[1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0]


Repa's map leaves the dimension unchanged:

Repa> extent x
(Z :. 3) :. 3
Repa> extent (Repa.map (^2) x)
(Z :. 3) :. 3


A fold reduces the inner dimension of the array:

fold :: (Shape sh, Elt a)
     => (a -> a -> a) -> a -> Array (sh :. Int) a -> Array sh a


The x defined above was a 2D array:

Repa> extent x
(Z :. 3) :. 3


but if we sum each row:

Repa> Repa.fold (+) 0 x
[6.0,15.0,24.0]


we get a 1D array instead:

Repa> extent (Repa.fold (+) 0 x)
Z :. 3
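These operators compose: mapping first and then folding gives, for example, per-row sums of squares. A sketch (our own example, reusing the same 3x3 x as above; the values are 1+4+9, 16+25+36 and 49+64+81):

```haskell
import Data.Array.Repa
import qualified Data.Array.Repa as Repa

x :: Array DIM2 Double
x = fromList (Z :. (3 :: Int) :. (3 :: Int)) [1 .. 9]

-- square every element, then reduce each row with (+)
rowSumSquares :: Array DIM1 Double
rowSumSquares = Repa.fold (+) 0 (Repa.map (^ 2) x)
-- toList rowSumSquares == [14.0,77.0,194.0]
```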


Similarly, if y is a (3 x 3 x 3) 3D array:

Repa> let y = fromList ((Z :. 3 :. 3 :. 3) :: DIM3) [1..27]
Repa> y
[1.0,2.0,3.0,4.0,5.0,6.0,7.0,8.0,9.0,10.0,11.0,12.0,13.0,14.0,15.0,16.0,17.0,18.0,19.0,20.0,21.0,22.0,23.0,24.0,25.0,26.0,27.0]



we can fold over the inner dimension:

Repa> Repa.fold (+) 0 y
[6.0,15.0,24.0,33.0,42.0,51.0,60.0,69.0,78.0]


yielding a 2D (3 x 3) array in place of our 3D (3 x 3 x 3) array:

Repa> extent y
((Z :. 3) :. 3) :. 3
Repa> extent (Repa.fold (+) 0 y)
(Z :. 3) :. 3
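Because each fold strips one inner dimension, folding twice takes a 3D array down to 1D. A sketch (our own example, using the same 3x3x3 y as above; each element is the total of one 3x3 plane, i.e. the sums of 1..9, 10..18 and 19..27):

```haskell
import Data.Array.Repa
import qualified Data.Array.Repa as Repa

y3 :: Array DIM3 Double
y3 = fromList ((Z :. 3 :. 3 :. 3) :: DIM3) [1 .. 27]

-- first fold: 3x3 row sums; second fold: per-plane totals
planeSums :: Array DIM1 Double
planeSums = Repa.fold (+) 0 (Repa.fold (+) 0 y3)
-- toList planeSums == [45.0,126.0,207.0]
```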


Two arrays may be combined via zipWith:

zipWith :: (Shape sh, Elt b, Elt c, Elt a)
        => (a -> b -> c) -> Array sh a -> Array sh b -> Array sh c


an example:

Repa> Repa.zipWith (*) x x
[1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0]
Repa> extent it
(Z :. 3) :. 3

Repa> Repa.zipWith (*) y y
[1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0,
100.0,121.0,144.0,169.0,196.0,225.0,256.0,289.0,324.0,
361.0,400.0,441.0,484.0,529.0,576.0,625.0,676.0,729.0] --reformatted
Repa> extent it
((Z :. 3) :. 3) :. 3


## 6.2 Mapping, with indices

A very powerful operator is traverse, a parallel array traversal which also supplies the current index:

traverse :: (Shape sh, Shape sh', Elt a)
         => Array sh a              -- Source array
         -> (sh -> sh')             -- Function to produce the extent of the result
         -> ((sh -> a) -> sh' -> b) -- Function to produce elements of the result.
                                    -- It is passed a lookup function to get
                                    -- elements of the source.
         -> Array sh' b

This is quite a complicated type, because it is very general. Let's take it apart. The first argument is the source array, which is obvious. The second argument is a function that transforms the shape of the input array to yield the shape of the output array; if the arrays are the same size, this function is id. It might grow or shrink the shape in other ways.

Finally, the third argument is where the magic is: given an index, you return a new element, and you are also given a lookup function which, when applied, yields elements of the source array.

So we see this generalizes map to support indices, and optional inspection of the current element. Let's try some examples:

$ ghci
GHCi, version 7.0.3: http://www.haskell.org/ghc/  :? for help
*> :m + Data.Array.Repa
*> :m + Data.Array.Repa.Algorithms.Randomish
*> let a :: Array DIM3 Int; a = fromList (Z :. (3::Int) :. (3::Int) :. (3::Int)) [1..27]
*> a
[1,2,3,4,5,6,7,8,9
,10,11,12,13,14,15,16,17,18
,19,20,21,22,23,24,25,26,27]

-- Keeping the shape the same, and just overwriting elements:
-- use traverse to set all elements to their x axis.
*> traverse a id (\_ (Z :. i :. j :. k) -> i)
[0,0,0,0,0,0,0,0,0
,1,1,1,1,1,1,1,1,1
,2,2,2,2,2,2,2,2,2]

-- Shuffle elements around, based on their index:
-- rotate elements by swapping elements from rotated locations.
*> traverse a id (\f (Z :. i :. j :. k) -> f (Z :. j :. k :. i))
[1,4,7,10,13,16,19,22,25
,2,5,8,11,14,17,20,23,26
,3,6,9,12,15,18,21,24,27]

The documentation on traverse provides further information.

## 6.3 Numeric operations: negation, addition, subtraction, multiplication

Repa arrays are instances of the Num class. This means that operations on numerical elements are lifted automagically onto arrays of such elements. For example, (+) on two Double values corresponds to element-wise addition, (+), of two arrays of doubles:

> let x = fromList (Z :. (10::Int)) [1..10]
> x + x
[2.0,4.0,6.0,8.0,10.0,12.0,14.0,16.0,18.0,20.0]

Other operations from the Num class work just as well:

> -x
[-1.0,-2.0,-3.0,-4.0,-5.0,-6.0,-7.0,-8.0,-9.0,-10.0]

> x ^ 3
[1.0,8.0,27.0,64.0,125.0,216.0,343.0,512.0,729.0,1000.0]

> x * x
[1.0,4.0,9.0,16.0,25.0,36.0,49.0,64.0,81.0,100.0]

# 7 Changing the shape of an array

One of the main advantages of repa-style arrays over other arrays in Haskell is the ability to reshape data without copying. This is achieved via *index-space transformations*.

An example: transposing a 2D array (this example is taken from the repa paper). First, the type of the transformation:

transpose2D :: Elt e => Array DIM2 e -> Array DIM2 e

Note that this transform will work on DIM2 arrays holding any elements.
Now, to swap rows and columns, we have to modify the shape:

transpose2D a = backpermute (swap e) swap a
  where
    e = extent a
    swap (Z :. i :. j) = Z :. j :. i

The swap function reorders the index space of the array. To do this, we extract the current shape of the array, and write a function that maps the index space from the old array to the new array. That index-space function is then passed to backpermute, which actually constructs the new array from the old one.

backpermute generates a new array from an old one, given the new shape and a function that translates between the index space of each array (i.e. a shape transformer):

backpermute :: (Shape sh, Shape sh', Elt a)
            => sh' -> (sh' -> sh) -> Array sh a -> Array sh' a

Note that the array created is not actually evaluated (we only modified the index space of the array).

Transposition is such a common operation that it is provided by the library:

transpose :: (Shape sh, Elt a)
          => Array ((sh :. Int) :. Int) a -> Array ((sh :. Int) :. Int) a

The type indicates that it works on the lowest two dimensions of the array. Other operations on index spaces include taking slices and joining arrays into larger ones.

# 8 Examples

Following are some examples of useful functions that exercise the API.

## 8.1 Example: Rotating an image with backpermute

Flip an image upside down:

import System.Environment
import Data.Word
import Data.Array.Repa hiding ((++))
import Data.Array.Repa.IO.DevIL

main = do
    [f] <- getArgs
    runIL $ do
        v <- readImage f
        writeImage ("flip-"++f) (rot180 v)

rot180 :: Array DIM3 Word8 -> Array DIM3 Word8
rot180 g = backpermute e flop g
  where
    e@(Z :. x :. y :. _) = extent g

    flop (Z :. i :. j :. k) =
         (Z :. x - i - 1 :. y - j - 1 :. k)

Running this:

    $ ghc -O2 --make A.hs
    $ ./A haskell.jpg


Results in a flipped copy of the input image.
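To build intuition for how backpermute works, here is a plain-list model of the same idea (a toy sketch, not the Repa API): the "new array" is defined by its extent plus a function mapping each new index to an old one, just as flop does above.

```haskell
-- A toy model of backpermute on one-dimensional "arrays"
-- represented as plain lists (this is NOT the Repa API).
backpermuteList :: Int -> (Int -> Int) -> [a] -> [a]
backpermuteList newSize oldIndex xs =
    [ xs !! oldIndex i | i <- [0 .. newSize - 1] ]

-- Reversal is an index-space transformation, just like flop:
reverseList :: [a] -> [a]
reverseList xs = backpermuteList n (\i -> n - 1 - i) xs
  where n = length xs
```

For example, `reverseList [1,2,3,4]` yields `[4,3,2,1]`: the elements are never touched, only read from transformed locations.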

## 8.2 Example: matrix-matrix multiplication

A more advanced example from the Repa paper: matrix-matrix multiplication. Each element of the result matrix is found by multiplying the elements of a row from the first matrix with the corresponding elements of a column from the second matrix, and summing the results.

if $A=\begin{bmatrix}a&b\\c&d\end{bmatrix}$ and $B=\begin{bmatrix}e&f\\g&h\end{bmatrix}$

then

$AB=\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}e&f\\g&h\end{bmatrix}=\begin{bmatrix}ae+bg&af+bh\\ce+dg&cf+dh\end{bmatrix}$
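As a quick numeric sanity check of the formula above, here is a naive list-based matrix product (a plain-Haskell sketch, independent of Repa) applied to concrete values:

```haskell
import Data.List (transpose)

-- Naive matrix product on nested lists; each inner list is a row.
matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]
```

With a = [[1,2],[3,4]] and b = [[5,6],[7,8]] this gives [[1*5+2*7, 1*6+2*8], [3*5+4*7, 3*6+4*8]] = [[19,22],[43,50]], matching the ae+bg pattern term by term.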

So we take two 2D arrays and generate a new array, using our transpose function from earlier:

    mmMult :: (Num e, Elt e)
           => Array DIM2 e
           -> Array DIM2 e
           -> Array DIM2 e
    mmMult a b = sum (zipWith (*) aRepl bRepl)
      where
        t     = transpose2D b
        aRepl = extend (Z :. All :. colsB :. All) a
        bRepl = extend (Z :. rowsA :. All :. All) t
        (Z :. rowsA :. colsA) = extent a
        (Z :. rowsB :. colsB) = extent b

The idea is to expand both 2D argument arrays into 3D arrays by replicating them across a new axis. The front face of the resulting cuboid represents the array a, which we replicate as often as b has columns (colsB), producing aRepl.

The top face represents t (the transposed b), which we replicate as often as a has rows (rowsA), producing bRepl. The two replicated arrays have the same extent, which corresponds to the index space of matrix multiplication.

Optimized implementations of this function are available in the repa-algorithms package.
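The replicate-multiply-sum data flow can be modelled on nested lists (a hedged sketch to show what the 3D index space looks like; it is not Repa code and materialises everything eagerly):

```haskell
import Data.List (transpose)

-- Model of mmMult's data flow on nested lists.
mmMultModel :: Num a => [[a]] -> [[a]] -> [[a]]
mmMultModel a b = map (map sum) prods
  where
    t     = transpose b
    colsB = length t
    rowsA = length a
    -- aRepl: each row of a is repeated once per column of b
    aRepl = map (replicate colsB) a
    -- bRepl: all of t is repeated once per row of a
    bRepl = replicate rowsA t
    -- element-wise products over the common 3D index space
    prods = zipWith (zipWith (zipWith (*))) aRepl bRepl
```

Summing the innermost dimension of the products collapses the 3D cuboid back to the 2D result, exactly as Repa's sum does in mmMult.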

## 8.3 Example: parallel image desaturation

To convert an image from color to greyscale, we can use the luminosity method to average RGB pixels into a common grey value, where the average is weighted for human perception of green.

The formula for luminosity is 0.21 R + 0.71 G + 0.07 B.
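As a standalone check of this weighting (plain Haskell, no Repa needed), the grey value of a single pixel is just the weighted sum of its channels, rounded up:

```haskell
-- Grey value of one RGB pixel by the luminosity method.
grey :: Double -> Double -> Double -> Int
grey r g b = ceiling (0.21 * r + 0.71 * g + 0.07 * b)
```

A pure green pixel gives grey 0 255 0 = 182, while pure blue gives only grey 0 0 255 = 18: green dominates, matching human perception.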

We can write a parallel image desaturation tool using repa and the repa-devil image library:

import Data.Array.Repa.IO.DevIL
import Data.Array.Repa hiding ((++))
import Data.Word
import System.Environment

--
-- Read an image, desaturate, write out with new name.
--
    main = do
        [f] <- getArgs
        runIL $ do
            a <- readImage f
            let b = traverse a id luminosity
            writeImage ("grey-" ++ f) b

And now the luminosity transform itself, which averages the three RGB components using the perceptual weights:

    --
    -- (Parallel) desaturation of an image via the luminosity method.
    --
    luminosity :: (DIM3 -> Word8) -> DIM3 -> Word8
    luminosity _ (Z :. _ :. _ :. 3) = 255   -- alpha channel
    luminosity f (Z :. i :. j :. _) = ceiling $ 0.21 * r + 0.71 * g + 0.07 * b
      where
        r = fromIntegral $ f (Z :. i :. j :. 0)
        g = fromIntegral $ f (Z :. i :. j :. 1)
        b = fromIntegral $ f (Z :. i :. j :. 2)

And that's it! The result is a parallel image desaturator, when compiled with:

    $ ghc -O -threaded -rtsopts --make A.hs -fforce-recomp


which we can run, to use two cores:

    $ time ./A sunflower.png +RTS -N2 -H
    ./A sunflower.png +RTS -N2 -H  0.19s user 0.03s system 135% cpu 0.165 total

Given an input image, the program writes out its desaturated counterpart.

# 9 Optimising Repa programs

## 9.1 Fusion, and why you need it

Repa depends critically on array fusion to achieve fast code. Fusion is a fancy name for the combination of inlining and code transformations performed by GHC when it compiles your program. The fusion process merges the array-filling loops defined in the Repa library with the "worker" functions that you write in your own module. If the fusion process fails, the resulting program will be much slower than it needs to be, often 10x slower than an equivalent program using plain Haskell lists. On the other hand, provided fusion works, the resulting code will run as fast as an equivalent cleanly written C program.

Making fusion work is not hard once you understand what's going on.

## 9.2 The force function has the loops

Suppose we have the following binding:

    arr' = R.force $ R.map (\x -> x + 1) arr


The right-hand side of this binding will compile down to code that first allocates the result array arr', then iterates over the source array arr, reading each element in turn, adding one to it, and writing to the corresponding element in the result.

Importantly, the code that does the allocation, iteration and update is defined as part of the force function. This forcing code has been written to break up the result into several chunks, and evaluate each chunk with a different thread. This is what makes your code run in parallel. If you do not use force then your code will be slow and not run in parallel.
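As a rough sketch of what the forcing code does (hedged: real Repa forcing fills unboxed buffers and runs the chunks on separate threads; this model is sequential and builds a list), imagine splitting the result's index space into chunks and filling each chunk by applying the element function:

```haskell
-- Toy model of force: split the index space into chunks and
-- fill each one by evaluating the element function.
-- (Sequential here; Repa evaluates each chunk on its own thread.)
forceModel :: Int -> Int -> (Int -> a) -> [a]
forceModel nChunks size f =
    concat [ map f [lo .. hi - 1] | (lo, hi) <- chunks ]
  where
    chunkLen = (size + nChunks - 1) `div` nChunks
    chunks   = [ (i, min size (i + chunkLen))
               | i <- [0, chunkLen .. size - 1] ]
```

However the index space is chunked, every element is computed exactly once: forceModel 2 5 (+1) gives [1,2,3,4,5], the same as a single loop would.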

## 9.3 Delayed and Manifest arrays

In the example from the previous section, think of the R.map (\x -> x + 1) arr expression as a specification for a new array. In the library, this specification is referred to as a delayed array. A delayed array is represented as a function that takes an array index, and computes the value of the element at that index.

Applying force to a delayed array causes all elements to be computed in parallel. The result of a force is referred to as a manifest array. A manifest array is a "real" array represented as a flat chunk of memory containing array elements.

All Repa array operators will accept both delayed and manifest arrays. However, if you index into a delayed array without forcing it first, then each indexing operation costs a function call. It also recomputes the value of the array element at that index.
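The delayed/manifest distinction can be modelled in a few lines of plain Haskell (an illustrative sketch over a 1D index space; Repa's actual representation differs):

```haskell
-- A delayed "array": just an extent and an index function.
data Delayed a = Delayed Int (Int -> a)

-- Mapping builds a new shell; no elements are computed yet.
mapD :: (a -> b) -> Delayed a -> Delayed b
mapD f (Delayed n ix) = Delayed n (f . ix)

-- Indexing a delayed array costs a function call, and
-- recomputes the element on every access.
indexD :: Delayed a -> Int -> a
indexD (Delayed _ ix) i = ix i

-- Forcing computes every element once, yielding a manifest list.
forceD :: Delayed a -> [a]
forceD (Delayed n ix) = map ix [0 .. n - 1]
```

For example, forceD (mapD (+1) (Delayed 5 id)) yields [1,2,3,4,5], while indexD on the unforced shell re-runs the composed function each time it is called.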

## 9.4 Shells and Springs

Here is another way to think about Repa's approach to array fusion. Suppose we write the following binding:

    arr' = R.force $ R.map (\x -> x * 2) $ R.map (\x -> x + 1) arr


Remember from the previous section that the result of each application of R.map is a delayed array. A delayed array is not a "real", manifest array; it is just a shell that contains a function to compute each element. In this example, the two worker functions correspond to the lambda expressions passed to R.map.

When GHC compiles this example, the two worker functions are fused into a fresh unfolding of the parallel loop defined in the code for R.force. Imagine holding R.force in your left hand, and squashing the calls to R.map into it, like a spring. Doing this breaks all the shells, and you end up with the worker functions fused into an unfolding of R.force.
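The squashing can be seen with plain functions (a toy sketch, not what GHC literally produces): the two delayed maps compose into a single fused worker, and the forcing loop then applies that one function per element, with no intermediate array in between.

```haskell
-- The two worker functions from the example above.
worker1, worker2 :: Int -> Int
worker1 x = x + 1
worker2 x = x * 2

-- After fusion, the loop applies a single composed worker.
fused :: Int -> Int
fused = worker2 . worker1

-- The filling loop from force, with the fused worker plugged in.
forceLoop :: Int -> (Int -> a) -> [a]
forceLoop n f = map f [0 .. n - 1]
```

Here forceLoop 4 fused produces [2,4,6,8] in one pass, rather than first building [1,2,3,4] and then doubling it.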

## 9.5 INLINE worker functions

Consider the following example:

    f x  = x + 1
    arr' = R.force $ R.zipWith (*) (R.map f arr1) (R.map f arr2)

During compilation, we need GHC to fuse our worker functions into a fresh unfolding of R.force. In this example, fusion includes inlining the definition of f. If f is not inlined, the performance of the compiled code will be atrocious: it will perform a function call for each application of f, where it really only needs a single machine instruction to increment the x value.

Now, in general, GHC tries to avoid producing binaries that are "too big". Part of this is a heuristic that controls exactly which functions are inlined. The heuristic says that a function may be inlined only if it is used once, or if its definition is smaller than some particular size. If neither of these applies, the function won't be inlined, killing performance.

For Repa programs, as fusion and inlining have such a dramatic effect on performance, we should absolutely not rely on heuristics to control whether this inlining takes place. If we rely on a heuristic, then even if our program runs fast today, should that heuristic ever be altered, some functions that used to be inlined may no longer be.

The moral of the story is to attach INLINE pragmas to all of your client functions that compute array values. This ensures that these critical functions will be inlined now, and forever.

    {-# INLINE f #-}
    f x = x + 1

    arr' = R.force $ R.zipWith (*) (R.map f arr1) (R.map f arr2)