GHC/Data Parallel Haskell/References
Revision as of 13:18, 2 December 2008

References related to Data Parallel Haskell

Data Parallel Haskell:

  • Harnessing the Multicores: Nested Data Parallelism in Haskell. Simon Peyton Jones, Roman Leshchinskiy, Gabriele Keller, and Manuel M. T. Chakravarty. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2008), IBFI, Schloss Dagstuhl, 2008. Summary: This paper gives a comprehensive account of the vectorisation of Haskell programs and briefly outlines how vectorisation fits together with the other components of Data Parallel Haskell.
  • Data Parallel Haskell: a status report. Manuel M. T. Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, Gabriele Keller, and Simon Marlow. In DAMP 2007: Workshop on Declarative Aspects of Multicore Programming, ACM Press, 2007. Summary: Illustrates our approach to implementing nested data parallelism by way of the example of multiplying a sparse matrix with a vector and gives first performance figures. It also includes an overview of the implementation and references to our previous work in the area. Slides of a talk about the paper are also available.
  • Nepal -- Nested Data-Parallelism in Haskell. Manuel M. T. Chakravarty, Gabriele Keller, Roman Lechtchinsky, and Wolf Pfannenstiel. In Euro-Par 2001: Parallel Processing, 7th International Euro-Par Conference, Springer-Verlag, LNCS 2150, pages 524-534, 2001. Summary: Describes the language design for integrating support for nested data parallelism into Haskell; in particular, it introduces the semantics of parallel arrays and the idea of distinguishing between the parallel and sequential components of a data structure and algorithm by type. These concepts are illustrated by a parallel version of quicksort, the Barnes-Hut algorithm for solving the n-body problem, and Wang's algorithm for solving tridiagonal systems of linear equations.
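The sparse matrix-vector multiplication used as the running example in the status report above can be sketched in plain Haskell. This is a hedged illustration, not code from the papers: it uses ordinary lists and list comprehensions in place of DPH's parallel arrays ([:Float:]) and parallel array comprehensions, and the names (SparseRow, smvm, dotp) are illustrative.

```haskell
-- Nested data-parallel sparse matrix-vector multiplication, sketched
-- with ordinary lists. In Data Parallel Haskell the matrix would have
-- the type [:[:(Int, Float):]:] and the comprehensions would be
-- parallel array comprehensions; the nesting structure is the same.
type SparseRow    = [(Int, Float)]  -- (column index, non-zero value)
type SparseMatrix = [SparseRow]

-- Dot product of one sparse row with a dense vector.
dotp :: SparseRow -> [Float] -> Float
dotp row vec = sum [ x * (vec !! i) | (i, x) <- row ]

-- The outer comprehension is the nested (per-row) parallelism;
-- each row may contain a different number of non-zero elements.
smvm :: SparseMatrix -> [Float] -> [Float]
smvm m vec = [ dotp row vec | row <- m ]
```

The point of the nested formulation is that the irregular inner structure (rows of differing length) is expressed directly, and it is the vectoriser's job to turn it into flat, regular parallel operations.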


Implementing nested data parallelism by program transformation:

  • Type Checking with Open Type Functions. Tom Schrijvers, Simon Peyton-Jones, Manuel M. T. Chakravarty, and Martin Sulzmann. In Proceedings of ICFP 2008: The 13th ACM SIGPLAN International Conference on Functional Programming, pages 51-62, ACM Press, 2008. Summary: This paper describes type checking for type synonym families.
  • Partial Vectorisation of Haskell Programs. Manuel M. T. Chakravarty, Roman Leshchinskiy, Simon Peyton Jones, and Gabriele Keller. In DAMP 2008: Workshop on Declarative Aspects of Multicore Programming, 2008. Summary: Addresses the problem that not all code in a program can or should be vectorised – e.g., we do not want to vectorise code involving side effects, such as I/O. To enable mixing vectorised and non-vectorised code, the paper introduces a notion of partial vectorisation of program code.
  • Higher Order Flattening. Roman Leshchinskiy, Manuel M. T. Chakravarty, and Gabriele Keller. In Third International Workshop on Practical Aspects of High-level Parallel Programming (PAPP 2006), Springer-Verlag, LNCS 3992, 2006. Summary: This paper explains how the flattening transformation can be extended to higher-order functions by way of closure conversion and closure inspection. This method was one of the central contributions of Roman Leshchinskiy's PhD thesis.
  • Associated Types with Class. Manuel M. T. Chakravarty, Gabriele Keller, Simon Peyton Jones, and Simon Marlow. In Proceedings of The 32nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL'05), pages 1-13, ACM Press, 2005. Summary: Introduces the idea and type theory of type-indexed data types as type members of Haskell type classes. These associated data types are an essential element of our optimising, non-parametric array implementation.
  • More Types for Nested Data Parallel Programming. Manuel M. T. Chakravarty and Gabriele Keller. In Proceedings of the Fifth ACM SIGPLAN International Conference on Functional Programming, pages 94-105, ACM Press, 2000. Summary: Extends Blelloch's flattening transformation for nested data parallelism to languages supporting full algebraic data types, including sum types and recursive types. This paper extends flattening for recursive types as introduced in Gabriele Keller's PhD thesis.
  • On the Distributed Implementation of Aggregate Data Structures by Program Transformation. Gabriele Keller and Manuel M. T. Chakravarty. In Fourth International Workshop on High-Level Parallel Programming Models and Supportive Environments (HIPS'99), pages 108-122, Springer Verlag, LNCS 1586, 1999. Summary: Presents the idea of supporting transformation-based optimisations, and in particular array and communication fusion, by distinguishing between distributed and local data by type. This method was one of the main contributions of Gabriele Keller's PhD thesis.
  • An approach to fast arrays in Haskell. Manuel M. T. Chakravarty and Gabriele Keller. In Johan Jeuring and Simon Peyton Jones, editors, lecture notes for The Summer School and Workshop on Advanced Functional Programming 2002. LNCS 2638, Springer-Verlag, pages 27-58, 2003. Summary: This tutorial paper illustrates the main challenges in implementing sequential high-performance arrays in a lazy functional language. It includes a step-by-step illustration of first-order flattening, discusses implementing non-parametric arrays without associated types, and illustrates a simple approach to equational array fusion. (Data Parallel Haskell uses a more powerful fusion framework based on stream fusion.)
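The flattening transformation discussed in several of the papers above represents a nested array by a flat data array together with a segment descriptor recording the length of each subarray. The following plain-Haskell sketch is illustrative only (the names Segd, flattenA, and sumSegd are ours, and the real DPH representation is chosen per element type via associated data types, as described in the POPL'05 paper):

```haskell
-- A nested array flattened into a segment descriptor plus flat data,
-- in the style of Blelloch's flattening transformation.
data Segd a = Segd { lengths :: [Int], flatData :: [a] }
  deriving (Eq, Show)

-- Flatten a nested array: record segment lengths, concatenate the data.
flattenA :: [[a]] -> Segd a
flattenA xss = Segd (map length xss) (concat xss)

-- Recover the nested structure from the flat representation.
unflattenA :: Segd a -> [[a]]
unflattenA (Segd [] _)        = []
unflattenA (Segd (n : ns) xs) =
  let (ys, zs) = splitAt n xs in ys : unflattenA (Segd ns zs)

-- A lifted (segmented) sum: one result per segment, computed over the
-- flat data without ever rebuilding the nested structure. Lifted
-- operations like this are what flattened programs are composed of.
sumSegd :: Num a => Segd a -> [a]
sumSegd (Segd ns xs) = go ns xs
  where
    go []        _  = []
    go (n : ns') ys = let (seg, rest) = splitAt n ys
                      in sum seg : go ns' rest
```

Because the data lives in one flat array, segmented operations can be implemented as regular, load-balanced parallel loops even when the original nesting was highly irregular.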


Other languages with nested data parallelism:

  • Programming Parallel Algorithms. Guy E. Blelloch. In Communications of the ACM, 39(3), March, 1996. Summary: This seminal article illustrates the flexibility and high level of abstraction of nested data parallelism. It also describes the model's language-based cost model.
  • NESL: A Parallel Programming Language. Summary: This is the main NESL page with many links to programming examples and implementation techniques. The work on NESL laid the foundations for the programming model of nested data parallelism and is one of the most influential precursors of our work.
  • The Manticore Project. Summary: This is the main page of the Manticore project with many further links. Manticore is a recent effort to develop a heterogeneous parallel programming language targeting multi-core processors, which also includes nested data parallelism in the style of NESL and Data Parallel Haskell.
  • Publications of the Proteus project. Summary: Proteus was an effort to develop a heterogeneous parallel language during the high-performance computing era. Most of the work on Proteus concerned its nested data parallel sub-language.