Haskell Symposium 2013

Boston, MA, USA
23–24 September 2013
(directly before ICFP)

Schedule and abstracts

Monday, 23 September 2013
Session chair: Kathleen Fisher
09:00–10:00 John Launchbury
Haskell in Practice at Galois (invited talk)
10:00–10:30 Jean-Philippe Bernardy and Nicolas Pouillard
Names For Free—Polymorphic Views of Names and Binders

We propose a novel technique to represent names and binders in Haskell. The dynamic (run-time) representation is based on de Bruijn indices, but it features an interface to write and manipulate variables conveniently, using Haskell-level lambdas and variables. The key idea is to use rich types: a subterm with an additional free variable is viewed either as ∀v.v→Term(a+v) or ∃v.v×Term(a+v), depending on whether it is constructed or analysed. We demonstrate on a number of examples how this approach permits expressing term construction and manipulation in a natural way, while retaining the good properties of representations based on de Bruijn indices.
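The construction view ∀v.v→Term(a+v) can be sketched in a few lines of Haskell. This is an illustrative encoding in the spirit of the abstract, not the authors' actual library; the names `lam`, `var`, and `selfApp` are ours.

```haskell
{-# LANGUAGE RankNTypes #-}

-- A de Bruijn-style term type: a Lam body has one extra free
-- variable, recorded in the type as Either a ().
data Term a
  = Var a
  | App (Term a) (Term a)
  | Lam (Term (Either a ()))
  deriving (Show, Eq)

-- The construction view: the binder takes a Haskell-level function,
-- so the fresh variable is an ordinary Haskell value.
lam :: (forall v. v -> Term (Either a v)) -> Term a
lam f = Lam (f ())

-- Refer to the bound variable by name, with no index arithmetic.
var :: v -> Term (Either a v)
var = Var . Right

-- \x -> x x, written with Haskell lambdas and variables.
selfApp :: Term a
selfApp = lam (\x -> App (var x) (var x))
```

Because the bound variable's type `v` is abstract, the type checker rules out using a variable outside its binder's scope.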

Session chair: Ohad Kammar
11:00–11:30 Oleg Kiselyov, Amr Sabry, and Cameron Swords
Extensible Effects: An Alternative to Monad Transformers

We design and implement a library that solves the long-standing problem of combining effects without imposing restrictions on their interactions (such as static ordering). Effects arise from interactions between a client and an effect handler (interpreter); interactions may vary throughout the program and dynamically adapt to execution conditions. Existing code that relies on monad transformers may be used with our library with minor changes, gaining efficiency over long monad stacks. In addition, our library has greater expressiveness, allowing for practical idioms that are inefficient, cumbersome, or outright impossible with monad transformers.

Our alternative to a monad transformer stack is a single monad, for the coroutine-like communication of a client with its handler. Its type reflects possible requests, i.e., possible effects of a computation. To support arbitrary effects and their combinations, requests are values of an extensible union type, which allows adding and, notably, subtracting summands. Extending and, upon handling, shrinking of the union of possible requests is reflected in its type, yielding a type-and-effect system for Haskell. The library is lightweight, generalizing the extensible exception handling to other effects and accurately tracking them in types.
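The client/handler idea can be modelled in miniature. The sketch below is far simpler than the paper's library: the request type is fixed to a single Reader-like effect with `Int` replies, whereas the real library uses an extensible union so effects can be added and, upon handling, subtracted. All names here are ours.

```haskell
-- A computation either is done, or issues a request and suspends,
-- coroutine-style, until the handler supplies a reply.
data Comp req a
  = Done a
  | Req req (Int -> Comp req a)   -- reply type fixed to Int for brevity

instance Functor (Comp req) where
  fmap f (Done a)  = Done (f a)
  fmap f (Req r k) = Req r (fmap f . k)

instance Applicative (Comp req) where
  pure = Done
  Done f  <*> x = fmap f x
  Req r k <*> x = Req r (\reply -> k reply <*> x)

instance Monad (Comp req) where
  Done a  >>= k = k a
  Req r c >>= k = Req r (\reply -> c reply >>= k)

data Ask = Ask   -- the one request our clients can make

ask :: Comp Ask Int
ask = Req Ask Done

-- The handler (interpreter) answers every Ask with the environment.
runReader :: Int -> Comp Ask a -> a
runReader _ (Done a)    = a
runReader n (Req Ask k) = runReader n (k n)
```

Handling a request removes it from the computation's interface; in the full library that removal is visible in the type, which is what yields the type-and-effect system.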

11:30–12:30 Ohad Kammar, Sam Lindley, Oleg Kiselyov, Daan Leijen, Tom Schrijvers, Tarmo Uustalu, and Philip Wadler
The Future of Effects in Haskell (panel)

Haskell provides rich abstractions for augmenting code with user-defined effects, notably monads and monad transformers. However, the standard abstractions suffer from various shortcomings relating to readability, modularity, and compositionality. Recently, a number of researchers have explored alternative approaches to effectful programming that aim to address such shortcomings. The goal of this panel is to discuss the merits and drawbacks of these alternative approaches, and how best to incorporate them into Haskell practice.

Session chair: Neil Sculthorpe
14:00–14:30 Atze van der Ploeg
Monadic Functional Reactive Programming

Functional Reactive Programming (FRP) is a way to program reactive systems in functional style, eliminating many of the problems that arise from imperative techniques. In this paper, we present an alternative FRP formulation that is based on the notion of a reactive computation: a monadic computation which may require the occurrence of external events to continue. A signal computation is a reactive computation that may also emit values. In contrast to signals in other FRP formulations, signal computations can end, leading to a monadic interface for sequencing signal phases. This interface has several advantages: routing is implicit, sequencing signal phases is easier and more intuitive than when using the switching combinators found in other FRP approaches, and dynamic lists require much less boilerplate code. In other FRP approaches, either the entire FRP expression is re-evaluated on each external stimulus, or impure techniques are used to prevent redundant re-computations. We show how Monadic FRP can be implemented straightforwardly in a purely functional way while preventing redundant re-computations.
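The notion of a reactive computation can be captured directly as a data type. This is a minimal model in the spirit of the abstract (the names `React`, `await`, and `run` are ours): a computation has either finished with a value, or awaits the next external event to continue.

```haskell
data React e a
  = Done a
  | Await (e -> React e a)

instance Functor (React e) where
  fmap f (Done a)  = Done (f a)
  fmap f (Await k) = Await (fmap f . k)

instance Applicative (React e) where
  pure = Done
  Done f  <*> x = fmap f x
  Await k <*> x = Await (\e -> k e <*> x)

-- Sequencing signal phases is ordinary monadic bind.
instance Monad (React e) where
  Done a  >>= k = k a
  Await f >>= k = Await (\e -> f e >>= k)

-- Wait for the first event satisfying a predicate.
await :: (e -> Bool) -> React e e
await p = Await (\e -> if p e then Done e else await p)

-- Drive a reactive computation with a (finite) list of events.
run :: React e a -> [e] -> Maybe a
run (Done a)  _      = Just a
run _         []     = Nothing
run (Await f) (e:es) = run (f e) es
```

Because a `React` computation ends, "first do this phase, then that one" is plain `do` notation, with no switching combinators.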

14:30–14:50 Niki Vazou, Eric Seidel, and Ranjit Jhala
Liquid Types for Haskell (demo)

We present LiquidHaskell, a verifier for Haskell programs which uses Liquid Types to reduce the verification of higher-order, polymorphic, recursive programs over complex data types to first-order Horn clauses over integers, booleans, and uninterpreted functions, which are then solved using classical predicate abstraction. In this demo proposal, we present an overview of this approach, and describe how we handle Haskell-specific features like type classes, algebraic data structures, and laziness.
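A small example of the kind of specification LiquidHaskell checks (the annotation syntax follows the LiquidHaskell documentation; the function itself is ours). The refinement on the second argument rules out division by zero at verification time; to GHC the annotation is just a comment, so the file still compiles and runs normally.

```haskell
-- The {-@ ... @-} comment is a LiquidHaskell refinement type:
-- callers must prove the divisor is non-zero.
{-@ divide :: Int -> {d:Int | d /= 0} -> Int @-}
divide :: Int -> Int -> Int
divide n d = n `div` d
```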

14:50–15:20 Richard Bird, Jeremy Gibbons, Stefan Mehner, Tom Schrijvers, and Janis Voigtländer
Understanding Idiomatic Traversals Backwards and Forwards

We present new ways of reasoning about a particular class of effectful Haskell programs, namely those expressed as idiomatic traversals. Starting out with a specific problem about labelling and unlabelling binary trees, we extract a general inversion law, applicable to any monad, relating a traversal over the elements of an arbitrary traversable type to a traversal that goes in the opposite direction. This law can be invoked to show that, in a suitable sense, unlabelling is the inverse of labelling. The inversion law, as well as a number of other properties of idiomatic traversals, is a corollary of a more general theorem characterising traversable functors as finitary containers: an arbitrary traversable object can be decomposed uniquely into shape and contents, and traversal can be understood in terms of those. The proof of the theorem involves the properties of traversal in a special idiom related to the free applicative functor.
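The labelling example can be rendered in miniature (this is our encoding, not the paper's code): a left-to-right effectful traversal numbers the elements, and discarding the labels inverts it.

```haskell
{-# LANGUAGE DeriveFunctor, DeriveFoldable, DeriveTraversable #-}

import Data.Traversable (mapAccumL)

data Tree a = Leaf | Node (Tree a) a (Tree a)
  deriving (Show, Eq, Functor, Foldable, Traversable)

-- Number the elements in traversal (left-to-right) order.
label :: Tree a -> Tree (a, Int)
label = snd . mapAccumL (\n x -> (n + 1, (x, n))) 0

-- Forgetting the labels undoes the labelling.
unlabel :: Tree (a, Int) -> Tree a
unlabel = fmap fst
```

The opposite-direction traversal of the inversion law corresponds to numbering with `mapAccumR` instead, which visits the same elements right-to-left.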

Session chair: Andreas Abel
16:00–16:30 Sam Lindley and Conor McBride
Hasochism: The Pleasure and Pain of Dependently Typed Haskell Programming

Haskell's type system has outgrown its Hindley-Milner roots to the extent that it now stretches to the basics of dependently typed programming. In this paper, we collate and classify techniques for programming with dependent types in Haskell, and contribute some new ones. In particular, through extended examples—merge-sort and rectangular tilings—we show how to exploit Haskell's constraint solver as a theorem prover, delivering code which, as Agda programmers, we envy. We explore the compromises involved in simulating variations on the theme of the dependent function space in an attempt to help programmers put dependent types to work, and to inform the evolving language design both of Haskell and of dependently typed languages more broadly.
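A standard taste of the style the paper collates (our example, not the paper's code): vectors indexed by their length, so that taking the head of an empty vector is a type error rather than a runtime crash.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeFamilies #-}

-- Type-level natural numbers via DataKinds promotion.
data Nat = Z | S Nat

-- A vector whose length is tracked in its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- Total: only callable on provably non-empty vectors.
vhead :: Vec ('S n) a -> a
vhead (VCons x _) = x

-- Type-level addition lets append's type track lengths exactly.
type family Add (m :: Nat) (n :: Nat) :: Nat where
  Add 'Z     n = n
  Add ('S m) n = 'S (Add m n)

vappend :: Vec m a -> Vec n a -> Vec (Add m n) a
vappend VNil         ys = ys
vappend (VCons x xs) ys = VCons x (vappend xs ys)
```

`vhead VNil` simply does not type-check; GHC's constraint solver discharges the length arithmetic in `vappend` for us.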

16:30–17:00 Joe Leslie-Hurd
Maintaining Verified Software

Maintaining software in the face of evolving dependencies is a challenging problem, and in addition to good release practices there is a need for automatic dependency analysis tools to avoid errors creeping in. Verified software reveals more semantic information in the form of mechanized proofs of functional specifications, and this can be used for dependency analysis. In this paper we present a scheme for automatic dependency analysis of verified software, which for each program checks that the collection of installed libraries is sufficient to guarantee its functional correctness. We illustrate the scheme with a case study of Haskell packages verified in higher order logic. The dependency analysis reduces the burden of maintaining verified Haskell packages by automatically computing version ranges for the packages they depend on, such that any combination provides the functionality required for correct operation.

Session chair: Aaron Contorer
17:30–18:30 Aaron Contorer, Andres Löh, Bryan O'Sullivan, Dmitriy Traytel, and Stephanie Weirich
Teaching Haskell in Academia and Industry (panel)

Many professional imperative programmers and undergraduate students share a common belief: Haskell is hard to learn. Yet there are also some great success stories. Is Haskell indeed harder to learn than imperative programming, or just different? What is needed to lower the barriers to learning and adopting Haskell? What techniques and tools can developers, managers, and educators employ to accelerate and spread Haskell proficiency?

More precisely, the panel will try to find answers to the following questions: What makes a good syllabus for Haskell? Should Haskell be taught as an alternative approach, as a peer to imperative programming, as a replacement, as a complement? Are teaching techniques applied in industry also suitable for academia and vice versa? Which tools exist, and which are missing, to support instructors in teaching? Which tools exist, and which are missing, for students learning through self-guided or online study?

Tuesday, 24 September 2013
09:00–09:10 Chung-chieh Shan
Program Chair Report
Session chair: Lennart Augustsson
09:10–09:40 Hai Liu, Neal Glew, Leaf Petersen, and Todd Anderson
The Intel Labs Haskell Research Compiler

The Glasgow Haskell Compiler (GHC) is a well-supported optimizing compiler for the Haskell programming language, along with its own extensions to the language and libraries. Haskell's lazy semantics imposes a runtime model which is in general difficult to implement efficiently. GHC achieves good performance across a wide variety of programs via aggressive optimization taking advantage of the lack of side effects, and by targeting a carefully tuned virtual machine. The Intel Labs Haskell Research Compiler uses GHC as a frontend, but provides a new whole-program optimizing backend by compiling the GHC intermediate representation to a relatively generic functional language compilation platform. We found that GHC's external Core language was relatively easy to use, but reusing GHC's libraries and achieving full compatibility were harder. For certain classes of programs, our platform provides substantial performance benefits over GHC alone, performing 2x faster than GHC with the LLVM backend on selected modern performance-oriented benchmarks; for other classes of programs, the benefits of GHC's tuned virtual machine continue to outweigh the benefits of more aggressive whole program optimization. Overall we achieve parity with GHC with the LLVM backend. In this paper, we describe our Haskell compiler stack, its implementation and optimization approach, and present benchmark results comparing it to GHC.

09:40–10:10 Andreas Voellmy, Junchang Wang, Paul Hudak, and Kazuhiko Yamamoto
Mio: A High-Performance Multicore IO Manager for GHC

Haskell threads provide a key, lightweight concurrency abstraction to simplify the programming of important network applications such as web servers and software-defined network (SDN) controllers. The flagship Glasgow Haskell Compiler (GHC) provides a run-time system (RTS) that achieves a high-performance multicore implementation of Haskell threads through effective components such as a multicore scheduler, a parallel garbage collector, an IO manager, and efficient multicore memory allocation. Evaluations of the GHC RTS, however, show that it does not scale well on multicore processors, leading to poor performance of many network applications that try to use lightweight Haskell threads. In this paper, we show that the GHC IO manager, which is a crucial component of the GHC RTS, is the scaling bottleneck. Through a series of experiments, we identify key data structure, scheduling, and dispatching bottlenecks of the GHC IO manager. We then design a new multicore IO manager named Mio that eliminates all these bottlenecks. Our evaluations show that the new Mio manager improves realistic web server throughput by 6.5x and reduces expected web server response time by 5.7x. We also show that with Mio, McNettle (an SDN controller written in Haskell) can scale effectively to 40+ cores, reach a throughput of over 20 million new requests per second on a single machine, and hence become the fastest of all existing SDN controllers.

Session chair: Simon Thompson
10:30–11:30 Simon Thompson, Bastiaan Heeren, Anil Madhavapeddy, Guy Steele, Bryan O'Sullivan, and Simon Peyton Jones
Haskell and GHC: Too Big to Fail? (panel)

Haskell's future hinges on managing the sheer weight and complexity of the language and its standard libraries. The role of GHC and its extensions also deserves special concern. As our community grows in size and diversity, this panel will solicit opinions and advice from experienced designers and implementors of Haskell and other widely used functional languages.

Session chair: Janis Voigtländer
11:50–12:10 Luite Stegeman
GHCJS, Concurrent Haskell in the Browser (demo)

GHCJS is a Haskell to JavaScript compiler that uses the GHC API. GHCJS implements many modern Haskell features, including lightweight threading, STM and asynchronous exceptions. The GHCJS runtime has specific provisions to deal with multithreaded Haskell code in the singlethreaded JavaScript runtime.

12:10–12:30 Susumu Katayama
MagicHaskeller on the Web: Automated Programming as a Service (demo)

Our Web-based automatic programming tool, named MagicHaskeller on the Web, can help casual programming in Haskell. We will show how simple the tool is to use, and then evaluate its ability.

Session chair: Gabriele Keller
14:00–14:30 Johan Ankner and Josef Svenningsson
An EDSL Approach to High Performance Haskell Programming

This paper argues for a new methodology for writing high performance Haskell programs by using Embedded Domain Specific Languages.

We exemplify the methodology by describing a complete library, meta-repa, which is a reimplementation of parts of the repa library. The paper describes the implementation of meta-repa and contrasts it with the standard approach to writing high performance libraries. We conclude that even though the embedded language approach has an initial cost of defining the language and some syntactic overhead, it gives a more tailored programming model, stronger performance guarantees, better control over optimizations, simpler implementation of fusion and inlining, and, in some cases, the ability to move type-level programming down to value-level programming. We also provide benchmarks showing that meta-repa is as fast as, or faster than, repa.

Furthermore, meta-repa also includes push arrays and we demonstrate their usefulness for writing certain high performance kernels such as FFT.
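The shape of the embedded-language approach can be shown with a toy deep-embedded DSL (vastly simpler than meta-repa, and entirely our own names): the host program builds an AST, which the library can optimize before interpreting or compiling it, giving the strong guarantees the abstract mentions.

```haskell
data Expr
  = Lit Int
  | Add Expr Expr
  | Mul Expr Expr
  deriving (Show, Eq)

-- Overloading Num lets embedded code look like ordinary Haskell.
instance Num Expr where
  fromInteger = Lit . fromInteger
  (+)    = Add
  (*)    = Mul
  negate = Mul (Lit (-1))
  abs    = error "abs: not supported in this toy DSL"
  signum = error "signum: not supported in this toy DSL"

-- A guaranteed optimization pass: constant folding.
optimize :: Expr -> Expr
optimize (Add a b) = fold Add (+) (optimize a) (optimize b)
optimize (Mul a b) = fold Mul (*) (optimize a) (optimize b)
optimize e         = e

fold :: (Expr -> Expr -> Expr) -> (Int -> Int -> Int)
     -> Expr -> Expr -> Expr
fold _ op (Lit x) (Lit y) = Lit (x `op` y)
fold c _  a       b       = c a b

-- The "backend": here just a direct interpreter.
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Mul a b) = eval a * eval b
```

Because optimization happens on the reified AST, it applies unconditionally, unlike compiler rewrite rules that may or may not fire.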

14:30–15:00 Koen Claessen and Michał Pałka
Splittable Pseudorandom Number Generators Using Cryptographic Hashing

We propose a new splittable pseudorandom number generator (PRNG) based on a cryptographic hash function. Splittable PRNGs, in contrast to linear PRNGs, allow the creation of two (seemingly) independent generators from a given random number generator. Splittable PRNGs are very useful for structuring purely functional programs, as they avoid the need for threading around state. We show that the currently known and used splittable PRNGs are either not efficient enough, have inherent flaws, or lack formal arguments about their randomness. In contrast, our proposed generator can be implemented efficiently, and comes with formal statements and proofs that quantify how 'random' the generated results are. The provided proofs give strong randomness guarantees under assumptions commonly made in cryptography.
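The splitting interface can be sketched with a toy generator. This is illustrative only: the paper uses a real cryptographic hash and proves randomness properties, whereas the 64-bit mixer below (MurmurHash3's finalizer constants) carries no such guarantees, and all names are ours.

```haskell
import Data.Bits (shiftR, xor)
import Data.Word (Word64)

-- The state encodes the path of splits taken so far; output is a
-- hash of that path, so the two children share no mutable state.
newtype Gen = Gen Word64

-- An invertible 64-bit mixing function.
mix :: Word64 -> Word64
mix z0 =
  let z1 = (z0 `xor` (z0 `shiftR` 33)) * 0xff51afd7ed558ccd
      z2 = (z1 `xor` (z1 `shiftR` 33)) * 0xc4ceb9fe1a85ec53
  in  z2 `xor` (z2 `shiftR` 33)

-- Two (seemingly) independent child generators.
split :: Gen -> (Gen, Gen)
split (Gen s) = (Gen (mix (2 * s + 1)), Gen (mix (2 * s + 2)))

next :: Gen -> Word64
next (Gen s) = mix s
```

This is exactly the shape QuickCheck-style purely functional code wants: each subcomputation gets its own generator by splitting, with no state threaded between them.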

Session chair: Norman Ramsey
15:20–15:50 Ben Lippmeier, Manuel Chakravarty, Gabriele Keller, and Amos Robinson
Data Flow Fusion with Series Expressions in Haskell

Existing approaches to array fusion can deal with straight-line producer consumer pipelines, but cannot fuse branching data flows where a generated array is consumed by several different consumers. Branching data flows are common and natural to write, but a lack of fusion leads to the creation of an intermediate array at every branch point. We present a new array fusion system that handles branches, based on Waters's series expression framework, but extended to work in a functional setting. Our system also solves a related problem in stream fusion, namely the introduction of duplicate loop counters. We demonstrate speedup over existing fusion systems for several key examples.
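What branching fusion buys can be seen in miniature (lists standing in for arrays; this is our illustration, not the paper's system): one produced sequence is consumed by two consumers, sum and length. Naively that is two traversals; the fused version is the single accumulating loop that series-expression fusion aims to produce automatically from the naive form.

```haskell
import Data.List (foldl')

-- Two consumers of the same input: two passes, and without fusion
-- an intermediate structure at the branch point.
meanNaive :: [Double] -> Double
meanNaive xs = sum xs / fromIntegral (length xs)

-- One pass accumulating both results at once.
meanFused :: [Double] -> Double
meanFused xs = s / fromIntegral n
  where
    (s, n) = foldl' (\(a, c) x -> (a + x, c + 1)) (0, 0 :: Int) xs
```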

15:50–16:20 Peter Wortmann and David Duke
Causality of Optimized Haskell: What is Burning Our Cycles?

Profiling real-world Haskell programs is hard, as compiler optimizations make it tricky to establish causality between the source code and program behavior. In this paper we attack the root issue by performing a causality analysis of functional programs under optimization. We apply our findings to build a novel profiling infrastructure on top of the Glasgow Haskell Compiler, allowing for performance analysis even of aggressively optimized programs.

16:40–17:10 Mario Blažević
Adding Structure to Monoids

This paper presents the rationale and design of monoid-subclasses. This Haskell library consists of a collection of type classes that generalize the interface of several common data types, most importantly those used to represent strings. We demonstrate that the mathematical theory behind monoid-subclasses can bring substantial practical benefits to the Haskell library ecosystem by generalizing attoparsec, one of the most popular Haskell parsing libraries.
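The idea can be illustrated with a hand-rolled class (the class and method names below are ours, not necessarily monoid-subclasses' actual API): exposing just enough structure of a monoid, namely splitting off its smallest non-empty prefix, lets one parser-like function work over any string-ish type.

```haskell
-- A monoid whose values decompose into "prime" (atomic) factors.
class Monoid m => Factorial' m where
  splitPrimePrefix' :: m -> Maybe (m, m)

-- For lists, the prime factors are the singletons.
instance Factorial' [a] where
  splitPrimePrefix' []       = Nothing
  splitPrimePrefix' (x : xs) = Just ([x], xs)

-- Longest prefix whose prime factors satisfy p, plus the rest:
-- written once against the class, usable at String and at any
-- other instance (e.g. one for ByteString or Text).
spanPrefix :: Factorial' m => (m -> Bool) -> m -> (m, m)
spanPrefix p m = case splitPrimePrefix' m of
  Just (h, t) | p h -> let (a, b) = spanPrefix p t in (h <> a, b)
  _                 -> (mempty, m)
```

Generalizing a parsing library over such a class is how one implementation can serve every string representation at once.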

http://www.haskell.org/haskell-symposium/2013/abstracts.html · haskell2013@easychair.org