This is the 14th edition of the Haskell Communities and Activities Report. There has been a transition in editorship, which went very smoothly, thanks also to the many responsive contributors, who were as helpful to the new editor as they have been to Andres during the last years.
As usual, entries that are completely new (or have been revived after having disappeared temporarily) are formatted using a blue background. Updated entries have a header with a blue background. Entries that have not been changed for a year or longer have, in most cases, been dropped.
The report has been somewhat restructured, so if you do not find your favourite entry at once, please check the table of contents again; maybe the entry has simply moved since you last saw it. If you think a different place in the report would be more appropriate for some entry, please let us know for next time. Also, to simplify organisation, only one author is now assigned to each entry. Where previously several authors were given, the entry has been assigned to the one who sent it in, and any further authors have been added as participants. Of course, authorship can be reassigned with the next edition.
By now, the report has reached a considerable size. This is due not only to the pleasantly high number of entries, but also to the fact that many of them keep growing “through accumulation”. To counter this a bit, and to encourage focusing on the most recent activities, the next edition of HCAR will have a (liberal) length limit on entries, except for a few projects of central importance (the compilers, Cabal and Hackage, …). More details around November — watch the mailing lists for announcements.
But now enjoy the report and see what other Haskellers have been up to lately. Any kind of feedback is of course very welcome <hcar at haskell.org>.
Andres Löh, Universiteit Utrecht, The Netherlands
Janis Voigtländer, Technische Universität Dresden, Germany
HaskellWiki is a MediaWiki installation running on haskell.org, including the haskell.org “front page”. Anyone can create an account and edit and create pages. Examples of content include:
We encourage people to create pages to describe and advertise their own Haskell projects, as well as add to and improve the existing content. All content is submitted and available under a “simple permissive” license (except for a few legacy pages).
In addition to HaskellWiki, the haskell.org website hosts some ordinary HTTP directories. The machine also hosts mailing lists. There is plenty of space and processing power for just about anything that people would want to do there: if you have an idea for which HaskellWiki is insufficient, contact the maintainers, John Peterson and Olaf Chitil, to get access to this machine.
The #haskell IRC channel is a real-time text chat where anyone can join to discuss Haskell. The channel has continued to grow in the last six months, now averaging around 420 users, with a record 479 users (up from 436 six months ago). It is one of the largest channels on freenode. The IRC channel is home to hpaste and lambdabot (→6.8.1), two useful Haskell bots. Point your IRC client to irc.freenode.net and join the #haskell conversation!
For non-English conversations about Haskell there are now:
Related Haskell channels are now emerging, including:
There are plenty of academic papers about Haskell and plenty of informative pages on the HaskellWiki (→1.1). Unfortunately, there is not much between the two extremes. That is where The Monad.Reader tries to fit in: more formal than a Wiki page, but more casual than a journal article.
There are plenty of interesting ideas that maybe do not warrant an academic publication — but that does not mean these ideas are not worth writing about! Communicating ideas to a wide audience is much more important than concealing them in some esoteric journal. Even if it has all been done before in the Journal of Impossibly Complicated Theoretical Stuff, explaining a neat idea about “warm fuzzy things” to the rest of us can still be plain fun.
The Monad.Reader is also a great place to write about a tool or application that deserves more attention. Most programmers do not enjoy writing manuals; writing a tutorial for The Monad.Reader, however, is an excellent way to put your code in the limelight and reach hundreds of potential users.
Since the last HCAR, I have moved a lot of old articles from the old MoinMoin wiki to the new MediaWiki wiki. Unfortunately, I do not have the time to reformat all the old articles. If you fancy a go at tidying an article or two, I would really appreciate your help!
I am always interested in new submissions, whether you are an established researcher or fledgling Haskell programmer. Check out the Monad.Reader homepage for all the information you need to start writing your article.
The Haskell Weekly News (HWN) is an irregular newsletter covering developments in Haskell. Content includes announcements of new projects, jobs, discussions from the various Haskell communities, notable project commit messages, Haskell in the blogspace, and more. The Haskell Weekly News also publishes the latest releases uploaded to Hackage.
Planet Haskell is an aggregator of Haskell people’s blogs and other Haskell-related news sites. As of April 2008 content from 92 blogs and other sites is being republished in a common format.
A common misunderstanding about Planet Haskell is that it republishes only Haskell content. That is not its mission. A Planet shows what is happening in the community, what people are thinking about or doing. Thus Planets tend to contain a fair bit of “off-topic” material. Think of it as a feature, not a bug.
For information on how to get added to Planet, please read http://planet.haskell.org/policy.html.
Haskell is one of the leading languages for teaching functional programming, enabling students to write simpler and cleaner code, and to learn how to structure and reason about programs. This introduction is ideal for beginners: it requires no previous programming experience and all concepts are explained from first principles via carefully chosen examples. Each chapter includes exercises that range from the straightforward to extended projects, plus suggestions for further reading on more advanced topics. The presentation is clear and simple, and benefits from having been refined and class-tested over several years.
Features include: freely accessible PowerPoint slides for each chapter; solutions to exercises, and examination questions (with solutions) available to instructors; downloadable code that is fully compliant with the latest Haskell release.
Report by: Bryan O’Sullivan
Participants: John Goerzen, Don Stewart
We are working on a book, “Real World Haskell”, about the practical application of Haskell to everyday programming problems. The book will be published in the second half of 2008 by O’Reilly.
Our intended audience is programmers with no background in functional languages. We explore a diverse set of topics, among which are the following.
At the time of writing (late April, 2008) we have first drafts of about 75 per cent of the book written. We expect it to come to about 30 chapters in total.
We are excited to be publishing the book under a Creative Commons License. As we write chapters, we publish them online for people to read and comment on, and we incorporate feedback from our readers when we rewrite drafts of chapters. The level of community interest has been remarkable: so far, we have received over 4,000 comments on the chapters we have made available.
Participants: Eric Kow, David House, Joeri van Eekelen, and other contributors
The goal of the Haskell wikibook project is to build a community textbook about Haskell that is at once free (as in freedom and in beer), gentle, and comprehensive. We think that the many marvellous ideas of lazy functional programming can and thus should be accessible to everyone in a central place.
Since the last report, the wikibook has been advancing rather slowly. The rewrite of the Monad chapters is still in progress and material about lazy evaluation is still being written. Of course, additional authors and contributors who help write new content, or who simply spot mistakes and ask those questions we had never thought of, are more than welcome!
Most of the original GTK+ 2.0 tutorial by Tony Gale and Ian Main has been adapted to Gtk2Hs (→5.8.1), which is the Haskell binding to the GTK GUI library.
The Gtk2Hs tutorial also builds on “Programming with gtkmm” by Murray Cumming et al. and the Inti (Integrated Foundation Classes) tutorial by the Inti team.
The Gtk2Hs tutorial assumes intermediate level Haskell programming skills, but no prior GUI programming experience.
It has been translated into Spanish, by Laszlo Keuschnig, and both versions are available on Haskell darcs.
3.1 Packing Widgets
3.2 Packing Demonstration Program
3.3 Packing Using Tables
4.1 The Button Widget
4.2 Adjustments, Scale, and Range
4.4 Arrows and Tooltips
4.5 Dialogs, Stock Items, and Progress Bars
4.6 Text Entries and Status Bars
4.7 Spin Buttons
5.2 File Selection
5.3 Font and Colour Selection
6.1 Scrolled Windows
6.2 EventBoxes and ButtonBoxes
6.3 The Layout Container
6.4 Paned Windows and Aspect Frames
7. Action Based Widgets
7.1 Menus and Toolbars
7.2 Popup Menus, Radio Actions, and Toggle Actions
Appendix: Drawing with Cairo: Getting Started
The Glade tutorial, an introduction to visual Gtk2Hs programming, has been updated to Glade 3 by Alex Tarkovsky. It is available at: http://haskell.org/gtk2hs/docs/tutorial/glade/ This tutorial has also been translated into Spanish, by Laszlo Keuschnig, but it is currently only available at: http://home.telfort.nl/sp969709/glade/es-index.html
The collection of various Haskell mini tutorials and assorted small projects (http://okmij.org/ftp/Haskell/) has received three additions:
With Chung-chieh Shan we introduce a type-level Haskell library for arbitrary precision binary arithmetic over natural kinds. The numerals are specified in the familiar big-endian bit notation. The library supports addition/subtraction, predecessor/successor, multiplication/division, exp2, all comparisons, GCD, and the maximum. At the core of the library are multi-mode ternary relations Add and Mul where any two arguments determine the third. Such relations are especially suitable for specifying static arithmetic constraints on computations. The type-level numerals have no run-time representation; correspondingly, all arithmetic operations are done at compile time and have no effect on run-time.
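The multi-mode relation idea can be illustrated informally in plain Haskell. The sketch below is hypothetical and heavily simplified: it uses unary (Peano-style) numerals rather than the library's big-endian binary representation, and its names (`Add`, `Nat`, `toInt`) need not match the library's API. The functional dependencies make the `Add` relation work in several modes, as described above, and `add` itself has no run-time content.

```haskell
{-# LANGUAGE EmptyDataDecls, MultiParamTypeClasses, FunctionalDependencies,
             FlexibleInstances, UndecidableInstances, ScopedTypeVariables #-}

-- Hypothetical, simplified sketch: unary numerals instead of the
-- library's binary ones, to illustrate the multi-mode relation idea.
data Z
data S n

-- Add a b c holds when a + b = c; the functional dependencies let any
-- two arguments determine the third.
class Add a b c | a b -> c, a c -> b
instance              Add Z     b b
instance Add a b c => Add (S a) b (S c)

-- Reify a type-level numeral at run time (for demonstration only; in
-- the library proper, numerals have no run-time representation at all).
class Nat n where toInt :: n -> Int
instance          Nat Z     where toInt _ = 0
instance Nat n => Nat (S n) where toInt _ = 1 + toInt (undefined :: n)

-- add has no run-time content; it exists only to drive type inference.
add :: Add a b c => a -> b -> c
add _ _ = undefined
```

All the arithmetic here happens during type checking; the only run-time work is the optional reification via `toInt`.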
Two applications of the library for safe system programming are described in the next item.
We argue that Haskell as supported by GHC today can be used for safe system programming, statically assuring safe handling of raw memory pointers, device registers, and other low-level resources. Safety is assured by the type system and has no run-time overhead. We demonstrate two extensive examples: (i) one built around raw pointers, to track and arbitrate the size, alignment, write permission, and other properties of memory areas in the presence of indexing, casting, and iteration; (ii) the other built around a device register, to enforce protocol and timing requirements while reading from the register.
We also demonstrate custom kinds and predicates; type-level numbers, functions, and records; and mixed type- and term-level programming.
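The general flavour of such zero-overhead static guarantees can be sketched with phantom types. The following is a simplified, hypothetical illustration (the names and the encoding are not taken from the paper): a phantom type parameter records write permission on a pointer-like value, so writing through a read-only reference is rejected at compile time.

```haskell
{-# LANGUAGE EmptyDataDecls #-}

-- Simplified, hypothetical illustration of type-tracked permissions.
data ReadOnly
data ReadWrite

newtype Ref perm a = Ref a            -- stand-in for a raw pointer

newRef :: a -> Ref ReadWrite a
newRef = Ref

freeze :: Ref ReadWrite a -> Ref ReadOnly a   -- forget write permission
freeze (Ref x) = Ref x

readRef :: Ref perm a -> a                    -- reading is always allowed
readRef (Ref x) = x

writeRef :: Ref ReadWrite a -> a -> Ref ReadWrite a
writeRef _ y = Ref y

-- writeRef (freeze r) y   -- would be a compile-time type error
```

Since the permission lives only in the type, the check carries no run-time cost, which is the key property claimed above.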
There have been several proposals for Haskell extensions to support open polymorphic variants, i.e., extensible recursive open sum datatypes similar to polymorphic variants of OCaml. We demonstrate that Haskell as it is — the HList library (→5.5.5) — already supports polymorphic variants, with automatic variant subtyping. HList thus solves the familiar (in OO, at least) “expression problem” — the ability to add new alternatives to a datatype and extend old processing functions to deal with the extended variant, maximally reusing old code without changing it.
Our polymorphic variants are literally open co-products: the dual, obtained by negation, of HList’s extensible polymorphic records. Our encoding of sums is the straightforward Curry-Howard image of the De Morgan law for the negation of a disjunction.
Our implementation of polymorphic variants in terms of HList records uses no type classes, no type-level programming or any other type hacking. In fact, there are no type annotations, type declarations or any other mentioning of types, except in the comments. The code is included in the HList library.
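The underlying idea of "sums as negated records of handlers" can be illustrated, for just two alternatives and with hypothetical names, in plain Haskell (this is an analogue for intuition, not the HList encoding itself):

```haskell
{-# LANGUAGE RankNTypes #-}

-- A variant is a function that, given one handler per alternative,
-- applies the handler matching the value it carries. This is the
-- two-alternative image of "negation of a record of negations":
--   a + b  ~  forall r. (a -> r, b -> r) -> r
type Or a b = forall r. (a -> r, b -> r) -> r

inl :: a -> Or a b
inl x (f, _) = f x

inr :: b -> Or a b
inr y (_, g) = g y

-- Old processing code is reused unchanged inside the handler record;
-- adding an alternative only means widening the record of handlers.
describe :: Or Int String -> String
describe v = v (\n -> "number " ++ show n, \s -> "string " ++ s)
```

Like the HList construction, this uses no type classes; extending to arbitrarily many alternatives with automatic subtyping is exactly what the record machinery of HList supplies.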
Report by: Simon Peyton Jones
Participants: Tim Chevalier, Aaron Tomb, Roman Leshchinskiy, Gabriele Keller, Max Bolingbroke, John Dias, Thomas Schilling, and many others
The last six months have been a time of consolidation for GHC. We have done many of the things described in the last HCAR, but there are few new headline items to report, so this status report is briefer than usual.
We have been working hard on Data Parallel Haskell (especially Roman Leshchinskiy and Gabriele Keller). It has turned out to be hard to get the entire transformation and optimisation stack to work smoothly, and we have not made progress announcements because we do not want to yell about it until it Actually Works. But it is the biggest single GHC focus: Roman works on it full time.
Large parts of the major pieces are in place. GHC contains a shiny new vectoriser that turns scalar into data-parallel functions. Moreover, the sequential and parallel array libraries targeted by the vectoriser have been steadily growing. We managed to successfully run small applications, such as an n-body simulator based on the Barnes-Hut algorithm, but the vectoriser and library are still awkward to use and need to be more robust before being useful to a wider audience. We also need to improve performance.
We expect to release a working version of Data Parallel Haskell as part of GHC 6.10 (see below).
We plan to release GHC 6.8.3 at the end of May 2008, with many bug-fixes but no new features.
We plan to release GHC 6.10 around the time of ICFP, with significant new features. The up-to-date list of new stuff is kept at http://hackage.haskell.org/trac/ghc/wiki/Status/Releases, but here’s a quick summary:
nhc98 is a small, easy to install, compiler for Haskell’98. nhc98 is still very much alive and working, although it does not see much new development these days. The last public release (1.20) was in November 2007, for compatibility with ghc-6.8.x. Continuing maintenance ensures that common library packages build in their most recent versions.
The York Haskell Compiler (yhc) is a fork of the nhc98 compiler (→2.2), with goals such as increased portability, platform-independent bytecode, integrated Hat (→4.3.5) support, and generally being a cleaner code base to work with. Yhc now compiles and runs almost all Haskell 98 programs and has basic FFI support; the main thing missing is the haskell.org base libraries, which are being worked on.
Helium is a compiler that supports a substantial subset of Haskell 98 (but, e.g., n+k patterns are missing). Type classes are restricted to a number of built-in type classes and all instances are derived. The advantage of Helium is that it generates novice friendly error feedback. The latest versions of the Helium compiler are available for download from the new website located at http://www.cs.uu.nl/wiki/Helium. This website also explains in detail what Helium is about, what it offers, and what we plan to do in the near and far future.
We are still working on making version 1.7 available, mainly a matter of updating the documentation and testing the system. Internally little has changed, but the interface to the system has been standardised, and the functionality of the interpreters has been improved and made consistent. We have made new options available (such as those that govern where programs are logged to). The use of Helium from the interpreters is now governed by a configuration file, which makes the use of Helium from the interpreters quite transparent for the programmer. It is also possible to use different versions of Helium side by side (motivated by the development of Neon (→5.2.7)).
Report by: Matthew Naylor
Participants: Colin Runciman, Neil Mitchell
The Reduceron is a prototype of a special-purpose graph reduction machine, built using an FPGA. It can access up to eight graph nodes in parallel on each of its stack, heap, and combinator memories. The goal so far has been to optimise function unfolding. Eight combinator nodes can be instantiated with eight stack elements and placed on the heap, all in a single cycle.
The Reduceron is a simple machine, containing just four instructions and a garbage collector, and executes core Haskell almost directly. The translator to bytecode and the FPGA machine are both implemented in Haskell, the latter using Lava.
See the URL below for details and results. Since the last HCAR, I have written a thesis chapter about it, with all the gory details unveiled! Further experiments are planned.
GHC version 6.8.2 has been in Gentoo since late last year, and is about to go stable. All of the 60+ Haskell libraries and tools work with it, too. There are also GHC binaries available for alpha, amd64, hppa, ia64, sparc, and x86.
Browse the packages in portage at http://packages.gentoo.org/category/dev-haskell?full_cat.
The GHC architecture/version matrix is available at http://packages.gentoo.org/package/dev-lang/ghc.
Please report problems in the normal Gentoo bug tracker at bugs.gentoo.org.
There is also a Haskell overlay providing another 200 packages. Thanks to the recent progress of Cabal and Hackage (→5.1), we have written a tool called “hackport” (initiated by Henning Günther) to generate Gentoo packages that rarely need much tweaking.
The overlay is available at http://haskell.org/haskellwiki/Gentoo. Using Darcs (→6.1.1), it is easy to keep updated and send patches. It is also available via the Gentoo overlay manager “layman”. If you choose to use the overlay, then problems should be reported on IRC (#gentoo-haskell on freenode), where we coordinate development, or via email <haskell at gentoo.org>.
Lately a few of our developers have shifted focus, and only a few developers remain. If you would like to help, which would include working on the Gentoo Haskell framework, hacking on hackport, writing ebuilds, and supporting users, please contact us on IRC or email as noted above.
Haskell support on OpenBSD is now taken over by Matthias Kilian, who has updated GHC and related tools for this platform. OpenBSD support for the GHC head branch continues.
Haskell Server Pages (HSP) is an extension of Haskell targeted at writing dynamic web pages. Key features and selling points include:
HSP is, and will continue to be, released onto Hackage. It consists of a series of interdependent packages with package hsp as the main top-level starting point, and package happs-hsp for integration with HAppS. The best way to keep up with development is to grab the darcs repositories, all located under http://code.haskell.org/HSP.
Report by: Phil Trinder
Participants: Abyd Al Zain, Mustafa Aswad, Jost Berthold, Murray Gross, Kevin Hammond, Vladimir Janjic, Hans-Wolfgang Loidl, Greg Michaelson
A complete, GHC-based implementation of the parallel Haskell extension GpH and of evaluation strategies is available. Extensions of the runtime-system and language to improve performance and support new platforms are under development.
In recent work we have demonstrated that SymGrid-Par is capable of exploiting a variety of modern parallel/multicore architectures without any change to the underlying CA components; and that SymGrid-Par is capable of orchestrating heterogeneous computations across a high-performance computational Grid.
The GUM implementation of GpH is available in two main development branches.
We are exploring new, prescient scheduling mechanisms for GpH.
Our main hardware platforms are Intel-based Beowulf clusters and multicores. Work on ports to other architectures is also moving on (and available on request):
<gph at macs.hw.ac.uk>, <mgross at dorsai.org>
Eden has been jointly developed by two groups at Philipps-Universität Marburg, Germany, and Universidad Complutense de Madrid, Spain. The project has been ongoing since 1996. Currently, the team consists of the following people:
Eden extends Haskell with a small set of syntactic constructs for explicit process specification and creation. While providing enough control to implement parallel algorithms efficiently, it frees the programmer from the tedious task of managing low-level details by introducing automatic communication (via head-strict lazy lists), synchronisation, and process handling.
Eden’s main constructs are process abstractions and process instantiations. The function process :: (a -> b) -> Process a b embeds a function of type (a -> b) into a process abstraction of type Process a b which, when instantiated, will be executed in parallel. Process instantiation is expressed by the predefined infix operator ( # ) :: Process a b -> a -> b. Higher-level coordination is achieved by defining skeletons, ranging from a simple parallel map to sophisticated replicated-worker schemes. They have been used to parallelise a set of non-trivial benchmark programs.
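Eden's constructs require the parallel runtime environment, but their denotational reading can be sketched by a sequential model in plain Haskell. The following is a simplification that ignores communication, synchronisation, and process creation entirely; only the names follow the description above.

```haskell
-- Sequential model of Eden's two main constructs (no parallelism here;
-- in Eden proper, ( # ) creates a process evaluated in parallel).
newtype Process a b = Process (a -> b)

process :: (a -> b) -> Process a b
process = Process

( # ) :: Process a b -> a -> b
Process f # x = f x

-- The simplest skeleton: a parallel map that instantiates one process
-- per list element.
parMap :: (a -> b) -> [a] -> [b]
parMap f = map (process f #)
```

In real Eden, each application of ( # ) spawns a process computing `f x`; the sequential model captures only the resulting values, which is what makes such skeletons easy to reason about.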
Rita Loogen, Yolanda Ortega-Mallén, and Ricardo Peña: Parallel Functional Programming in Eden, Journal of Functional Programming 15(3), 2005, pages 431–475.
A major revision of the parallel Eden runtime environment for GHC 6.8.1 is available on request. Support for Glasgow parallel Haskell (→3.1.2) is currently being added to this version of the runtime environment. It is planned for the future to maintain a common parallel runtime environment for Eden, GpH, and other parallel Haskells.
XHaskell is an extension of Haskell which combines parametric polymorphism, algebraic data types, and type classes with XDuce style regular expression types, subtyping, and regular expression pattern matching. The latest version can be downloaded via http://code.google.com/p/xhaskell/
We have fully implemented the system, which can be used in combination with the Glasgow Haskell Compiler. We have taken care to provide meaningful type error messages in case the static checking of programs fails. Our system also allows deferring some static checks until run time.
We make use of GHC-as-a-library so that the XHaskell programmer can easily integrate her programs into existing applications and take advantage of the many libraries available in GHC. We also provide a convenient interface to the HaXML (→5.10.3) parser.
Kenny’s thesis will be available shortly, describing in detail the formal underpinnings behind XHaskell.
In this project, we extend Haskell with Erlang-style actors and Join-calculus style concurrency primitives. The HaskellJoin extension is described in an IFL’07 paper. See for details: http://taichi.ddns.comp.nus.edu.sg/taichiwiki/HaskellJoinRules.
The HaskellActor extension is described in a forthcoming COORDINATION’08 paper. The implementation can be downloaded via http://code.google.com/p/haskellactor/
We are currently working on revising the HaskellJoin implementation.
Report by: Jan Christiansen
Participants: Bernd Braßel, Michael Hanus, Wolfgang Lux, Sebastian Fischer, and others
Curry is a functional logic programming language with Haskell syntax. In addition to the standard features of functional programming like higher-order functions and lazy evaluation, Curry supports features known from logic programming. This includes programming with non-determinism, free variables, constraints, declarative concurrency, and the search for solutions. Although Haskell and Curry share the same syntax, there is one main difference with respect to how function declarations are interpreted. In Haskell the order in which different rules are given in the source program has an effect on their meaning. In Curry, in contrast, the rules are interpreted as equations, and overlapping rules induce a non-deterministic choice and a search over the resulting alternatives. Furthermore, Curry allows functions to be called with free variables as arguments, so that these are bound to the values demanded for evaluation, thus providing for function inversion.
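Curry itself does not run under a Haskell compiler, but its flavour of non-determinism is commonly modelled in Haskell with the list monad. The sketch below is a standard simulation, not Curry semantics in full: overlapping rules become lists of alternatives, and search becomes monadic bind.

```haskell
-- In Curry, the overlapping rules
--   coin = 0
--   coin = 1
-- denote a non-deterministic choice. The list-monad model collects
-- all alternatives instead:
coin :: [Int]
coin = [0, 1]

-- Non-deterministic insertion of x at every position of a list ...
insert :: a -> [a] -> [[a]]
insert x []       = [[x]]
insert x (y : ys) = (x : y : ys) : map (y :) (insert x ys)

-- ... turns the search over all alternatives into all permutations.
perm :: [a] -> [[a]]
perm []       = [[]]
perm (x : xs) = perm xs >>= insert x
```

In Curry, `insert` and `perm` can be written with overlapping rules directly, and free variables let the same definitions be run "backwards"; the list monad only captures the search aspect.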
There are three major implementations of Curry. While the original implementation PAKCS (Portland Aachen Kiel Curry System) compiles to Prolog, MCC (Münster Curry Compiler) generates native code via a standard C compiler. The Kiel Curry System (KiCS) compiles Curry to Haskell, aiming to provide nearly as good performance for the purely functional part as modern compilers for Haskell do. Of these implementations, only MCC will provide type classes in the near future. Type classes are not part of the current definition of Curry, though there is no conceptual conflict with the logic extensions.
There have been research activities in the area of functional logic programming languages for more than a decade. Nevertheless, there are still a lot of interesting research topics regarding more efficient compilation techniques and even semantic questions in the area of language extensions like encapsulation and function patterns. Besides activities regarding the language itself, there is also an active development of tools concerning Curry (e.g., the documentation tool CurryDoc, the analysis environment CurryBrowser, the observation debuggers COOSy and iCODE, the debugger B.I.O. (http://www-ps.informatik.uni-kiel.de/currywiki/tools/oracle_debugger), EasyCheck (→4.3.3), and CyCoTest (→4.3.4)). Because Curry has a functional subset, these tools can canonically be transferred to the functional world.
Do you crave highly expressive types, but do not want to resort to type-class hackery? Then Agda might provide a view of what the future has in store for you.
Agda is a dependently typed functional programming language (developed using Haskell). The language has inductive families, i.e., GADTs which can be indexed by values and not just types. Other goodies include parameterised modules, mixfix operators, and an interactive Emacs interface (the type checker can assist you in the development of your code).
A lot of work remains in order for Agda to become a full-fledged programming language (effects, good libraries, mature compilers, documentation, etc.), but already in its current state it can provide lots of fun as a platform for experiments in dependently typed programming.
New since last time:
The Agda Wiki: http://www.cs.chalmers.se/~ulfn/Agda/
Epigram is a prototype dependently typed functional programming language, equipped with an interactive editing and typechecking environment. High-level Epigram source code elaborates into a dependent type theory based on Zhaohui Luo’s UTT. The definition of Epigram, together with its elaboration rules, may be found in “The view from the left” by Conor McBride and James McKinna (JFP 14 (1)).
A new version, Epigram 2, based on Observational Type Theory (see “Observational Equality, Now!” by Thorsten Altenkirch, Conor McBride, and Wouter Swierstra) is in preparation.
Simply typed languages have the property that any subexpression of a well typed program may be replaced by another of the same type. Such type systems may guarantee that your program will not crash your computer, but the simple fact that True and False are always interchangeable inhibits the expression of stronger guarantees. Epigram is an experiment in freedom from this compulsory ignorance.
Specifically, Epigram is designed to support programming with inductive datatype families indexed by data. Examples include matrices indexed by their dimensions, expressions indexed by their types, search trees indexed by their bounds. In many ways, these datatype families are the progenitors of Haskell’s GADTs, but indexing by data provides both a conceptual simplification — the dimensions of a matrix are numbers — and a new way to allow data to stand as evidence for the properties of other data. It is no good representing sorted lists if comparison does not produce evidence of ordering. It is no good writing a type-safe interpreter if one’s typechecking algorithm cannot produce well-typed terms.
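On the Haskell side, the closest approximation to the matrix example uses a GADT indexed by type-level numerals, i.e., types standing in for data where Epigram indexes by data directly. A minimal sketch (hypothetical names):

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level numerals standing in for the data Epigram would use.
data Z
data S n

-- Vectors indexed by their length.
data Vec n a where
  Nil  :: Vec Z a
  Cons :: a -> Vec n a -> Vec (S n) a

-- Statically safe head: the empty vector does not fit the signature,
-- so no run-time check is needed.
vhead :: Vec (S n) a -> a
vhead (Cons x _) = x

-- A length-preserving zip: the shared index n rules out the
-- mismatched-length cases at compile time.
vzip :: Vec n a -> Vec n b -> Vec n (a, b)
vzip Nil         Nil         = Nil
vzip (Cons x xs) (Cons y ys) = Cons (x, y) (vzip xs ys)
```

The conceptual simplification Epigram offers is visible even here: in Epigram the dimensions really are numbers, rather than types simulating numbers.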
Programming with evidence lies at the heart of Epigram’s design. Epigram generalises constructor pattern matching by allowing types resembling induction principles to express how the inspection of data may affect both the flow of control at run time and the text and type of the program in the editor. Epigram extracts patterns from induction principles and induction principles from inductive datatype families.
James McKinna and Conor McBride designed Epigram in 2001, whilst based at Durham, working with Zhaohui Luo and Paul Callaghan. McBride’s prototype implementation of the language, “Epigram 1” emerged in 2004: it is implemented in Haskell, interfacing with the xemacs editor. This implementation effort involved inventing a number of new programming techniques which have found their way into the Haskell community at large: central components of Control.Applicative and Data.Traversable started life in the source code for Epigram.
Following the Durham diaspora, James McKinna and Edwin Brady went to St. Andrews, where they continued their work on phase analysis and efficient compilation of dependently typed programs. More recently, with Kevin Hammond, they have been studying applications of dependent types to resource-aware computation in general, and network protocols in particular.
Meanwhile, Conor McBride went to Nottingham to work with Thorsten Altenkirch. They set about redesigning Epigram’s underlying type theory, radically changing its treatment of logical propositions in general, and equality in particular, making significant progress on problems which have beset dependent type theories for decades.
The Nottingham duo grew into a strong team of enthusiastic researchers. Peter Morris successfully completed a PhD on generic programming in Epigram and is now a research assistant: his work has led to the redesign of Epigram’s datatype language. Nicolas Oury joined from Paris as a postdoctoral research fellow, and is now deeply involved in all aspects of design and implementation. PhD students James Chapman and Wouter Swierstra are working on Epigram-related topics, studying formalised metatheory and effectful programming, respectively. Meanwhile, Nottingham research on containers, involving Neil Ghani, Peter Hancock, and Rawle Prince, together with the Epigram team, continues to inform design choices as the language evolves.
Epigram 1 was used successfully by Thorsten Altenkirch, Conor McBride, and Peter Hancock in an undergraduate course on Computer Aided Formal Reasoning http://www.e-pig.org/darcs/g5bcfr/. It has also been used in a number of graduate-level courses.
James McKinna is now at Radboud University, Nijmegen; Edwin Brady is still at St. Andrews; Thorsten Altenkirch, Peter Morris, Nicolas Oury, James Chapman, and Wouter Swierstra are still in Nottingham; Conor McBride has left academia. All are still contributing to the Epigram project.
Epigram 2 is based on a radical redesign of our underlying type theory. The main novelties are
Nicolas Oury, Peter Morris, and Conor McBride have implemented this theory, together with a system supporting interactive construction (and destruction) within it. This is the engine which will drive Epigram 2: we plan to equip it with human-accessible controls and release it for the benefit of the curious, shortly. With this in place, we shall reconstruct the Epigram source language and its elaboration mechanism: constructs in source become constructions in the core.
There is still a great deal of work to do. We need to incorporate the work from Edwin Brady and James McKinna on type erasure and efficient compilation; we need to bring out and exploit the container structure of data; we need to support programming with effects (including non-termination); we need a declarative proof language, as well as a functional programming language.
The Epigram project relies on Haskell, its libraries, and tools such as Alex (→4.1.1), Happy (→4.1.2), bnfc, Cabal (→5.1), and Darcs (→6.1.1). We have recently developed tools for assembling the modules corresponding to each component of the Epigram system from files corresponding to each feature of the Epigram language: this may prove useful to others, so we hope to clean them up and release them. Meanwhile, as Haskell itself edges ever closer to dependent types, the Epigram project has ever more to contribute, in exploration of the design space, in the development of implementation technique, and in experimentation with the pragmatics of programming with such power and precision.
Epigram source code and related research papers can be found on the web at http://www.e-pig.org and its community of experimental users communicates via the mailing list <epigram at durham.ac.uk>. The current, rapidly evolving state of Epigram 2 can be found at http://www.e-pig.org/epilogue/.
Report by: Edsko de Vries
Participants: Rinus Plasmeijer, David M Abrahamson
An important feature of pure functional programming languages is referential transparency. A consequence of referential transparency is that functions cannot be allowed to modify their arguments, unless it can be guaranteed that they have the sole reference to that argument. This is the basis of uniqueness typing.
We have been developing a uniqueness type system based on that of the language Clean but with various improvements: no subtyping is required, the type language does not include inequality constraints (types in Clean often involve implications between uniqueness attributes), and types and uniqueness attributes are both considered types (albeit of different kinds). This makes the type system sufficiently similar to standard Hindley/Milner type systems that (1) standard inference algorithms can be applied, and (2) modern extensions such as arbitrary rank types and generalised algebraic data types (GADTs) can easily be incorporated.
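The danger that uniqueness typing guards against can be illustrated in plain Haskell with IORefs (our illustration of the aliasing problem, not Clean's type system, which rules this out statically):

```haskell
import Data.IORef

-- Destructive update is only safe when the updater holds the sole
-- reference to a value; with an alias, the update is observable elsewhere.
sharedUpdate :: IO [Int]
sharedUpdate = do
  r <- newIORef [1, 2, 3]
  let alias = r          -- a second reference to the same cell
  modifyIORef r (0 :)    -- "in-place" update through the first reference
  readIORef alias        -- the alias observes the change

main :: IO ()
main = sharedUpdate >>= print   -- prints [0,1,2,3]
```

A uniqueness type system rejects such aliasing statically, so that in-place update never breaks referential transparency.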
Although our type system is developed in the context of the language Clean, it is also relevant to Haskell because the core uniqueness type system we propose is very similar to Haskell’s core type system.
Report by: Janis Voigtländer
Participants: Sascha Böhme, Florian Stenger
Free theorems are statements about program behaviour derived from (polymorphic) types. Their origin is the polymorphic lambda-calculus, but they have also been applied to programs in more realistic languages like Haskell. Since there is a semantic gap between the original calculus and modern functional languages, the underlying theory (of relational parametricity) needs to be refined and extended. We aim to provide such new theoretical foundations, as well as to apply the theoretical results to practical problems. Recent papers concerning program transformations are “Semantics and Pragmatics of New Shortcut Fusion Rules” (FLOPS’08) and “Asymptotic Improvement of Computations over Free Monads” (MPC’08).
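For example, the free theorem for any function g :: [a] -> [a] states that g commutes with map. A quick check of this well-known instance in plain Haskell (our illustration, not the tool's output):

```haskell
-- The free theorem derived from the type [a] -> [a] says, for any such g:
--     map f . g  =  g . map f
-- We check it for one particular g:
g :: [a] -> [a]
g = reverse . take 2

main :: IO ()
main = print (map (+ 1) (g [10, 20, 30 :: Int]) == g (map (+ 1) [10, 20, 30]))
-- prints True
```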
Also on the practical side, we maintain a library and tools for generating free theorems from Haskell types, originally implemented by Sascha Böhme. Both the library and a shell-based tool are now available from Hackage (as free-theorems-0.2 and ftshell-0.2, respectively). There is also a web-based tool at http://linux.tcs.inf.tu-dresden.de/~voigt/ft. General features include:
Alex is a lexical analyser generator for Haskell, similar to the tool lex for C. Alex takes a specification of a lexical syntax written in terms of regular expressions, and emits code in Haskell to parse that syntax. A lexical analyser generator is often used in conjunction with a parser generator, such as Happy (→4.1.2), to build a complete parser.
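A small specification in Alex's input language might look as follows (a sketch using the "basic" wrapper; the token type and names are made up for illustration):

```
{
module Lexer where
}

%wrapper "basic"

$digit = 0-9
$alpha = [a-zA-Z]

tokens :-
  $white+                  ;
  $digit+                  { \s -> TInt (read s) }
  $alpha [$alpha $digit]*  { \s -> TIdent s }

{
data Token = TInt Int | TIdent String deriving Show
}
```

Running alex over this file produces a Haskell module exporting the generated lexer.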
The latest release is version 2.2, released in November 2007. Alex is in maintenance mode; we do not anticipate any major changes in the near future.
Changes in version 2.2:
Happy is a tool for generating Haskell parser code from a BNF specification, similar to the tool Yacc for C. Happy also includes the ability to generate a GLR parser (arbitrary LR for ambiguous grammars).
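A minimal grammar in Happy's input language might look like this (a sketch; the token type and parser name are made up for illustration):

```
{
module Parser where
}

%name calc
%tokentype { Token }
%error { error "parse error" }

%token
  int { TInt $$ }
  '+' { TPlus }

%%

Exp : Exp '+' int { $1 + $3 }
    | int         { $1 }

{
data Token = TInt Int | TPlus
}
```

Running happy over this file generates a Haskell module exporting calc :: [Token] -> Int.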
The latest release is 1.17, released 22 October 2007.
Changes in version 1.17:
Report by: Arie Middelkoop
Participants: ST Group of Utrecht University
UUAG is the Utrecht University Attribute Grammar system. It is a preprocessor for Haskell which makes it easy to write catamorphisms (that is, functions that do to any datatype what foldr does to lists). You can define tree walks using the intuitive concepts of inherited and synthesised attributes, while keeping the full expressive power of Haskell. The generated tree walks are efficient in both space and time.
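The underlying idea can be illustrated in plain Haskell: a hand-written catamorphism over a binary tree, of the kind UUAG generates from an attribute grammar (this example is ours, not UUAG output):

```haskell
-- foldr generalised from lists to a binary tree: one function argument
-- per constructor, applied at each node of the structure.
data Tree a = Leaf a | Node (Tree a) (Tree a)

cataTree :: (a -> r) -> (r -> r -> r) -> Tree a -> r
cataTree leaf node (Leaf x)   = leaf x
cataTree leaf node (Node l r) = node (cataTree leaf node l) (cataTree leaf node r)

main :: IO ()
main = print (cataTree id (+) (Node (Leaf 1) (Node (Leaf 2) (Leaf (3 :: Int)))))
-- prints 6
```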
New features are support for polymorphic abstract syntax and higher-order attributes. With polymorphic abstract syntax, the type of certain terminals can be parameterised. Higher-order attributes are useful to incorporate computed values as subtrees in the AST.
The system is in use by a variety of large and small projects, such as the Haskell compiler EHC, the editor Proxima for structured documents, the Helium compiler (→2.4), the Generic Haskell compiler, and UUAG itself. The current version, 0.9.6 (April 2008), is extensively tested and available on Hackage.
We are currently improving the documentation, and plan to introduce an alternative syntax that is closer to the Haskell syntax.
Haddock is a widely used documentation-generation tool for Haskell library code. Haddock generates documentation by parsing the Haskell source code directly and including documentation supplied by the programmer in the form of specially-formatted comments in the source code itself. Haddock has direct support in Cabal (→5.1), and is used to generate the documentation for the hierarchical libraries that come with GHC, Hugs, and nhc98 (http://www.haskell.org/ghc/docs/latest/html/libraries).
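To give a flavour of the comment syntax, here is a small sketch (the function names are made up for illustration):

```haskell
-- | A '-- |' comment before a declaration becomes its documentation.
square :: Int -> Int
square x = x * x

add :: Int   -- ^ first addend: '-- ^' documents the item it follows
    -> Int   -- ^ second addend
    -> Int
add = (+)

main :: IO ()
main = print (add (square 3) 1)   -- prints 10
```

Haddock extracts these comments while parsing the source and renders them alongside the signatures in the generated HTML.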
The latest release is version 2.1.0, released May 1 2008.
Changes since the 0.9 release:
Changes since the 0.8 release:
Currently, Haddock ignores comments on some language constructs like GADTs and associated type synonyms. Of course, the plan is to support comments for these constructs in the future. Haddock is also slightly pickier than the 0.x series about where comments may be placed. We want to fix this as well. Both of these plans require changes to the GHC parser. We want to investigate to what degree comment parsing can be decoupled from GHC and moved into Haddock, so as not to be bound by GHC releases.
This tool by Ralf Hinze and Andres Löh is a preprocessor that transforms literate Haskell code into LaTeX documents. The output is highly customisable by means of formatting directives that are interpreted by lhs2TeX. Other directives allow the selective inclusion of program fragments, so that multiple versions of a program and/or document can be produced from a common source. The input is parsed using a liberal parser that can interpret many languages with a Haskell-like syntax, and does not restrict the user to Haskell 98.
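A small input sketch showing an %include and a %format directive (illustrative only; polycode.fmt is one of the formatting styles shipped with lhs2TeX):

```
%include polycode.fmt
%format alpha = "\alpha"

\begin{code}
twice :: (alpha -> alpha) -> alpha -> alpha
twice f = f . f
\end{code}
```

In the typeset output, every occurrence of the identifier alpha is rendered as the Greek letter α, while the code block remains valid literate Haskell.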
The program is stable and can handle large documents.
Since the last report, version 1.13 has been released. It is compatible with GHC 6.8 and Cabal 1.2, but not yet with development versions of Cabal. Maintenance will continue in the future; releases will appear as needed. No major development is planned at the moment, although I have some vague ideas for substantial improvement.
SmallCheck is a one-module lightweight testing library. It adapts QuickCheck’s ideas of type-based generators for test data and a class of testable properties. But instead of testing a sample of randomly generated values, it tests properties for all the finitely many values up to some depth, progressively increasing the depth used. Among other advantages, generators for user-defined types can follow a simple pattern and are automatically derivable.
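The depth-bounded exhaustive idea can be sketched in plain Haskell (an illustration of the principle, not SmallCheck's implementation or API):

```haskell
-- Enumerate all Ints and all [Int] values up to a given depth,
-- then test a property on every one of them.
intsUpTo :: Int -> [Int]
intsUpTo d = [negate d .. d]

listsUpTo :: Int -> [[Int]]
listsUpTo 0 = [[]]
listsUpTo d = [] : [ x : xs | x <- intsUpTo (d - 1), xs <- listsUpTo (d - 1) ]

prop_revRev :: [Int] -> Bool
prop_revRev xs = reverse (reverse xs) == xs

main :: IO ()
main = print (all prop_revRev (listsUpTo 3))   -- prints True
```

SmallCheck derives such enumerations automatically per type and increases the depth progressively.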
The two significant developments in Version 0.3 are (1) new variants of existential quantifiers requiring uniqueness — two witnesses are reported when uniqueness fails, and (2) a new method for generating functions with functional arguments — avoiding previous over-generation in some cases. There are other more minor improvements.
SmallCheck is freely available for download from http://www.cs.york.ac.uk/fp/smallcheck0.3.tar.
For the first-order universally quantified subset of SmallCheck properties, the pruning principles implemented in Lazy SmallCheck (→4.3.2) often allow deeper testing at the same computational cost. A likely next development is a fuller combination of techniques from these two testing libraries.
Report by: Matthew Naylor
Participants: Fredrik Lindblad, Colin Runciman
Lazy SmallCheck is a library for testing program properties. Unlike QuickCheck and SmallCheck (→4.3.1), it generates partially-defined inputs that are progressively refined as demanded by the property under test. The key observation is that if a property evaluates to True or False for a partially-defined input then it would also do so for all refinements of that input. By not generating such refinements, Lazy SmallCheck may test the same input-space as SmallCheck using significantly fewer tests.
A talk about Lazy SmallCheck was given at Fun in the Afternoon York, and the slides are available, along with an initial implementation, at the URL below. Since the last HCAR, we have made a simpler implementation, now supporting Fredrik’s parallel conjunction operator. The problem with a sequential conjunction is that, when applied to a partially-defined input, it will crash (i.e., demand more input than is given) if the first conjunct crashes, even if the second conjunct is falsified. A parallel conjunction, in contrast, is falsified if any conjunct is falsified, even if the other conjuncts crash. This increases pruning opportunities, and “when used, one is not obliged to tweak the order of conjuncts in the property” (Fredrik Lindblad, TFP 2007).
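The difference can be sketched with a three-valued toy model in plain Haskell (our illustration, not Lazy SmallCheck's implementation):

```haskell
-- A conjunct applied to a partial input can be True, False, or Crash
-- (i.e., it demands more input than is given).
data Tri = T | F | Crash deriving (Eq, Show)

-- Sequential conjunction: crashes as soon as the first conjunct crashes.
seqAnd :: Tri -> Tri -> Tri
seqAnd F _     = F
seqAnd Crash _ = Crash
seqAnd T y     = y

-- Parallel conjunction: falsified if *either* conjunct is falsified.
parAnd :: Tri -> Tri -> Tri
parAnd x y
  | x == F || y == F         = F
  | x == Crash || y == Crash = Crash
  | otherwise                = T

main :: IO ()
main = print (seqAnd Crash F, parAnd Crash F)
-- prints (Crash,F): only the parallel version can prune this input
```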
Support for existential quantifiers and function generation a la SmallCheck is under consideration. A new release will hopefully be made sometime during the year.
EasyCheck is an automatic test tool like QuickCheck or SmallCheck (→4.3.1). It is implemented in the functional logic programming language Curry (→3.2.1). Although in all of these tools simple test cases can be generated from nothing but type information, users can also define custom test-case generators — and they make frequent use of this possibility. Nondeterminism — the main extension of functional-logic programming over Haskell — is an elegant concept for describing such generators. Therefore, defining custom test-case generators is easier in EasyCheck than in other test tools. If no custom generator is provided, test cases are generated by a free variable which non-deterministically yields all values of a type. Moreover, in EasyCheck, the enumeration strategy is independent of the definition of test-case generators. Unlike QuickCheck’s strategy, it is complete, i.e., every specified value is eventually enumerated if enough test cases are processed, and no value is enumerated twice. SmallCheck also uses a complete strategy (breadth-first search), on which EasyCheck improves w.r.t. the size of the generated test data. EasyCheck is distributed with the Kiel Curry System (KiCS).
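In Haskell terms, such a nondeterministic generator is roughly analogous to the list monad enumerating all values of a type (an analogy only, not EasyCheck's API):

```haskell
-- A Curry free variable of type Bool nondeterministically yields all
-- Booleans; the Haskell analogue is simply the list of all values.
boolGen :: [Bool]
boolGen = [False, True]

-- Composite generators arise compositionally, as in the list monad.
pairGen :: [(Bool, Bool)]
pairGen = [ (x, y) | x <- boolGen, y <- boolGen ]

main :: IO ()
main = print (length pairGen)   -- prints 4: every pair once, none twice
```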
The Curry Coverage Tester CyCoTest (pronounced like psycho test) aims at testing declarative programs to the bone. Unlike black-box test tools like QuickCheck, it does not generate test cases from type information or additional specifications. Rather, it uses the demand of the program under test to narrow test cases lazily. Narrowing is a generalisation of reduction that allows computing with partial information. Evaluating a program with narrowing and initially uninstantiated input binds the input as much as demanded by the computation and non-deterministically computes a corresponding result for each binding. The generated pairs of input and output form a set of test cases that reflects the demand of the tested program.
The generated set of test cases can either be checked by hand or using properties, i.e., functions with a Boolean result. Using properties is convenient, but sometimes it is hard to come up with a complete formal specification of the tested program. Hence, errors might remain undetected if an incomplete property is used to evaluate the test cases. In order to lower the burden of manual checking, we employ control- and data-flow coverage information to minimise the set of generated test cases. Test cases that do not cause new code coverage are considered redundant and need not be shown to the user. Although this bears the risk of eliminating test cases that expose a bug, experiments indicate that the employed coverage criteria suffice to expose bugs in practice.
CyCoTest is implemented in and for the functional logic programming language Curry (→3.2.1), which provides narrowing for free. A Haskell implementation would be possible using ideas from the Kiel Curry System (KiCS), which translates Curry programs into Haskell programs.
The Haskell tracing system Hat is based on the idea that a specially compiled Haskell program generates a trace file alongside its computation. This trace can be viewed in various ways with several tools. Some views are similar to classical debuggers for imperative languages, some are specific to lazy functional language features or particular types of bugs. All tools inter-operate and use a similar command syntax.
Hat can be used both with nhc98 (→2.2) and GHC (→2.1). Hat was built for tracing Haskell 98 programs, but it also supports some language extensions (FFI, MPTC, fundeps, hierarchical libs). A tutorial explains how to generate traces, how to explore them, and how they help to debug Haskell programs.
During the last half year only small bug fixes were committed to the Darcs repository, but several other updates are also planned for the near future, including new and improved trace-browsers. A recent student project completed a Java-GUI viewer for traces, based on the idea of timelines and search. We hope this can be added to the repository soon.
Hoogle is an online Haskell API search engine. It searches the functions in the various libraries, both by name and by type signature. When searching by name, the search just finds functions which contain that name as a substring. However, when searching by types it attempts to find any functions that might be appropriate, including argument reordering and missing arguments. The tool is written in Haskell, and the source code is available online. Hoogle is available as a web interface, a command line tool, and a lambdabot (→6.8.1) plugin.
The development of Hoogle has been slow lately, due to the author writing a PhD thesis. However, this summer Hoogle will become a full-time project with funding from the Google Summer of Code. Expect to see Hoogle v4.0 before the summer is over.
Leksah is a Haskell IDE written in Haskell based on Gtk+ and gtk2hs (→5.8.1). Leksah is a practical tool to support the Haskell development process. It is platform independent and should run on any platform where GTK+, gtk2hs, and GHC can be installed. (It is currently being tested on Windows and Linux but it should work on the Mac. It only works with GHC.)
There are compelling reasons for a Haskell IDE written in Haskell. First, and most importantly, Haskell differs from mainstream imperative and object-oriented languages, and a dedicated IDE can exploit this specialness. Second, integrating with an existing tool written in a different language means solving the problem of combining different programming languages and paradigms.
Currently Leksah offers features like jumping to definition for a name, integration of Cabal (→5.1) for building, Haskell source editor with “source candy”, configurable keymaps, … This list will (hopefully) expand quickly.
The development of Leksah started in June 2007 and the first alpha version was released February 2008. Contributions of all kind are welcome.
The Eclipse platform is an extremely extensible framework for IDEs, developed by an Open Source Project. Our project extends it with tools to support Haskell development.
The aim is to develop an IDE for Haskell that provides the set of features and the user experience known from the Eclipse Java IDE (the flagship of the Eclipse project), and integrates a broad range of Haskell development tools. Long-term goals include support for language-aware IDE features, like refactoring and structural search.
Over the past year, a new subproject called Cohatoe has developed a framework that allows us to implement Eclipse Plugins partly in Haskell. We are currently re-implementing and extending EclipseFP functionality in Haskell, using libraries such as Cabal (→5.1) and the GHC API (→2.1). The goal is to release a new version, EclipseFP 2, this summer.
Yi is a project to write a Haskell-extensible editor. Yi is structured around a purely functional editor core, such that most components of the editor can be overridden by the user, using configuration files written in Haskell.
Yi has been converted to the Cabal build system, which makes it easier to build and experiment with.
Report by: Chris Brown
Participants: Huiqing Li, Claus Reinke, Simon Thompson
Refactorings are source-to-source program transformations which change program structure and organisation, but not program functionality. Documented in catalogues and supported by tools, refactoring provides the means to adapt and improve the design of existing code, and has thus enabled the trend towards modern agile software development processes.
Our project, Refactoring Functional Programs, has as its major goal to build a tool to support refactorings in Haskell. The HaRe tool is now in its fourth major release. HaRe supports full Haskell 98 and is integrated with Emacs (and XEmacs) and Vim. All the refactorings that HaRe supports, including renaming, scope change, generalisation, and a number of others, are module-aware, so that a change is reflected in all the modules in a project, rather than just in the module where the change is initiated. The system also contains a set of data-oriented refactorings which together transform a concrete data type and associated uses of pattern matching into an abstract type and calls to assorted functions. The latest snapshots support the hierarchical modules extension, but unfortunately only small parts of the hierarchical libraries. A recently released informal version, HaRe 0.4, works with GHC 6.6.1 and GHC 6.8.2, but not GHC 6.4; earlier releases work with 6.4.*.
In order to allow users to extend HaRe themselves, HaRe includes an API for users to define their own program transformations, together with Haddock (→4.2.1) documentation. Please let us know if you are using the API.
There have been some recent developments for adding program slicing techniques to HaRe. These techniques include a refactoring to split functions returning tuples into separate definitions, and to put them back together again. A number of new data-type based and structural refactorings have been added to HaRe. Some of these refactorings make use of the GHC type checker to make the refactorings type-aware. The new data-type refactorings include: adding and removing a constructor; adding and removing a field; and introduction of pattern matches and case analysis. Structural refactorings include: conversion between let and where, and folding and unfolding of as-patterns.
A snapshot of HaRe is available from our webpage, as are recent presentations from the group (including LDTA 05, TFP05, SCAM06), and an overview of recent work from staff, students, and interns. Among this is an evaluation of what is required to port the HaRe system to the GHC API (→2.1), and a comparative study of refactoring Haskell and Erlang programs.
The final report for the project appears there, too, together with an updated refactoring catalogue and the latest snapshot of the system. Huiqing’s PhD thesis on refactoring Haskell programs is now available online from our project webpage.
Report by: Claus Reinke
Participants: Haskell & Vim users
My Haskell mode plugins for Vim seem to have become quite popular. They collect several scripts that offer functionality based on GHCi, on Haddock-generated documentation (→4.2.1), and on Vim’s own configurable program editing support. This includes several insert mode completions (based on identifiers available via currently imported modules, on identifiers appearing in the central Haddock indices, on tag files, or on words appearing in current and imported sources), quickfix mode (call compiler, list errors, jump to error locations), inferred type tooltips, various editing helpers (insert import statement, type declaration or module qualifier for id under cursor, expand implicit into explicit import statement, add option and language pragmas, …), and direct access to the Haddocks for the id under cursor.
A very incomplete screenshot tour of Vim’s IDE functions, as instantiated for Haskell, provides an overview of what is available (for more general information, see Vim’s excellent built-in :help, or browse the help files online at http://vimdoc.sourceforge.net/htmldoc/usr_toc.html; for more and current details of Haskell mode features, see the haskellmode.txt help file at the project site).
Both alternative and complementary Haskell-related plugins for Vim exist — please add links to your own tricks and tips at haskell.org (syntax-colouring works out of the box, other scripts deal with indentation, …). I hope these plugins might be useful to some of you (please let me know if anything does not work as advertised!), and might even motivate some of you to give Vim a try. It is really not as if Vim (or Emacs, for that matter) did not have more IDE functionality than most of us ever use, it is more that there is so much of it to learn and to fine-tune to your personal preferences.
The haskellmode plugins for Vim are currently in maintenance mode, with infrequent updates and bug fixes, and the occasional new feature (CamelCase-based abbreviations for insert-mode completion the most recent example).
No matter how fast GHCi keeps improving, users still keep suggesting new commands for it, and with the increasing amounts of information available in GHCi, it can sometimes be difficult to find the interesting bits, either in the documentation, or indeed in command outputs. The usual approach is to add feature requests to the ticket tracker and hope that someone will get round to implementing them, or to get the GHC sources and start contributing code.
Quite often, however, users ask for GHCi features that they could (relatively) easily define themselves, using some of the less well known features of GHCi.
In an email to the haskell-cafe last September, I gave one demonstration of this approach in the form of a mini-tutorial, starting with simple things like platform-independent :pwd/:ls, then laying the groundwork for more complex commands by defining :redir <var> <cmd>, a command that redirects the output of <cmd>, binding it to variable <var>. Based on :redir, we can then define :grep <pat> <cmd>, to filter the output of <cmd> for a pattern <pat> (think of finding the help entries related to breakpoints, or the variants of fold appearing in :browse Prelude). Taking some examples from the GHCi ticket tracker and from Hugs’ commands, there is also :find <id> (open the source for the definition of <id>), :b (:browse first module listed in :show modules), and :le <mod> (load module <mod>, edit location of first error, if any).
The commands are self-documenting, can be listed and removed as a group, and should give a good starting point for your own experiments with GHCi’s :def. Since the email, which targeted GHC 6.6 and later, a version for GHC 6.4.1 was added for those who needed to work with an old installation (the commands in that version differ slightly, to account for 6.4.1’s limitations). Please let me know if you find this useful, and remember to share your own GHCi tricks and tips!
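By way of illustration, here are two self-contained :def definitions for a .ghci file (sketches in the same spirit, not the actual commands from the email):

```
-- Sketch of GHCi's :def mechanism, for a .ghci file.  The expression
-- after the command name must have type String -> IO String; the
-- returned string is then executed as a GHCi command line.
:def hello (\_ -> return "putStrLn \"hello from a user-defined command\"")
:def mypwd (\_ -> return "System.Directory.getCurrentDirectory >>= putStrLn")
```

After loading, :hello and :mypwd behave like built-in GHCi commands, and :undef removes them again.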
You can find (and contribute!) other suggestions for .ghci files on this Haskell wiki page: http://haskell.org/haskellwiki/GHC/GHCi.
DarcsWatch is a tool to track the state of Darcs (→6.1.1) patches that have been submitted to some project, usually by using the darcs send command. It allows both submitters and project maintainers to get an overview of patches that have been submitted but not yet applied. Some notable features are:
Cpphs is a robust drop-in Haskell replacement for the C pre-processor. It has a couple of benefits over the traditional cpp — you can run it in Hugs when no C compiler is available (e.g., on Windows); and it understands the lexical syntax of Haskell, so you do not get tripped up by C-comments, line-continuation characters, primed identifiers, and so on. (There is also a pure text mode which assumes neither Haskell nor C syntax, for even greater flexibility.)
Cpphs can also unliterate .lhs files during preprocessing, and you can install it as a library to call from your own code, in addition to the stand-alone utility.
Current release is 1.4: there have been no changes in the last six months, indicating (one hopes) that it is now relatively bug-free.
The Haskell Cabal is a Common Architecture for Building Applications and Libraries. It is an API distributed with GHC (→2.1), nhc98 (→2.2), and Hugs which allows a developer to easily build and distribute packages.
Hackage (Haskell Package Database) is an online database of packages which can be interactively queried via the website and client-side software such as cabal-install. From Hackage, an end-user can download and install Cabal packages.
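For orientation, a Cabal package description is a small declarative file; a minimal sketch in the sectioned format accepted since Cabal 1.2 (all field values here are illustrative) might look like:

```
name:          example
version:       0.1
cabal-version: >= 1.2
build-type:    Simple

library
  exposed-modules: Example
  build-depends:   base
```

From such a description, Cabal can configure, build, install, and package the library without any hand-written build scripts.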
We are preparing for a release of Cabal-1.4 and the cabal-install tool. It is expected that all packages that worked with Cabal-1.2 will work with Cabal-1.4. The Cabal-1.4 release will include a large number of incremental improvements and bug fixes. The major feature for the cabal-install release is that it can serve as the primary command line front end to the Cabal/Hackage system. There will be no need to use the runhaskell Setup.hs interface any more.
The last year has seen Hackage take off. It has grown from a handful of packages to nearly 600 (with over 1200 releases). The Hackage website has seen a number of improvements including reporting which versions of ghc each package builds with, who uploaded each package, and proper support for Unicode in package meta-data.
Hackage also now applies somewhat stricter rules about package uploads. Uploading the same version of a package more than once is no longer allowed because we expect package content to be stable. It also checks for some common problems in package descriptions and will warn or reject as appropriate. You can run the same checks locally using cabal check.
We are very lucky to have got two Google Summer of Code projects related to Cabal and Hackage. Andrea Vezzosi is doing a project to build a “make-like” dependency framework for the Cabal library. This will enable Cabal to do proper dependency based rebuilds and support pre-processors like c2hs correctly. It also leads the way towards not depending on ghc –make for building Haskell code and to do parallel builds.
Neil Mitchell is working on Hoogle 4 (→4.4.1), which we aim to use as the primary search interface on the Hackage website. The aim is to be able to search the content of every package; not just the API but also the documentation and package meta-data.
There is huge potential for Hackage to help us manage and improve the community’s package collection. We are nearing the point where cabal-install will be able to report build results to the Hackage server. This should provide us with a huge amount of data on which packages work in which environments and configurations. More generally there is the opportunity to collect all sorts of useful information on the quality of packages. Hopefully we can evolve Hackage and associated clients into the kind of infrastructure we need to assemble collections of packages into high quality “batteries included”-style Haskell platform releases.
To help us in the next round of development work it would be enormously helpful to know from our users what their most pressing problems are with Cabal and Hackage. You probably have a favourite Cabal bug or limitation. Take a look at our bug tracker. Make sure the problem is reported there and properly described. Comment on the ticket to tell us how much of a problem the bug is for you. Add yourself to the ticket’s cc list so we can discuss requirements and keep you informed on progress. For feature requests it is very helpful if there is a description of how you would expect to interact with the new feature.
We would like to thank the large number of people who contributed to the last round of development work. I hope that attests to the fact that it is not too hard to get involved. Thanks also to the people who have followed development and reported bugs and feature requests.
libmpd is a binding to the MPD music playing daemon’s network protocol. While its interface has mostly stabilised and is ready to use, there is still some experimentation going on and we are seeking feedback on the API’s design.
The latest release is 0.3.0, which has support for UTF-8 encoded data, a number of QuickCheck and unit tests, and fixes some oddities in the interface. We plan to look into speeding up the network IO soon.
The development web page is at http://turing.une.edu.au/~bsinclai/code/libmpd-haskell/ and MPD can be found at http://www.musicpd.org/.
Gravatars (http://gravatar.com) are globally unique images associated with an email address, widely used in social networking sites. This library lets you find the URL of a gravatar image associated with an email address.
The Mersenne twister is a pseudorandom number generator developed by Makoto Matsumoto and Takuji Nishimura that is based on a matrix linear recurrence over a finite binary field. It provides for fast generation of very high quality pseudorandom numbers.
This library uses SFMT, the SIMD-oriented Fast Mersenne Twister, a variant of the Mersenne Twister that is much faster than the original. It is designed to be fast when it runs on 128-bit SIMD, and can be compiled with either SSE2 or PowerPC AltiVec support to take advantage of these instructions.
By default the period of the generator is 2^19937 − 1; however, you can compile in other periods. Note that this algorithm on its own is not cryptographically secure.
cmath is a complete, efficient Haskell binding to the standard C math.h library.
hmatrix is a simple library for linear algebra and numerical computations, internally implemented using GSL, BLAS, and LAPACK. It is available from Hackage.
Most linear algebra functions mentioned in GNU-Octave’s Quick Reference are available both for real and complex matrices: eig, svd, chol, qr, hess, schur, inv, pinv, expm, norm, and det. There are also functions for numeric integration and differentiation, nonlinear minimisation, polynomial root finding, and more than 200 GSL special functions. A brief manual is available at the URL below.
Recent developments include support for 64bit machines, improved testing, and support for Intel’s MKL implementation of BLAS and LAPACK.
HPDF is a Haskell library for generating PDF documents. It supports several features of the PDF standard, such as outlines, multiple pages, annotations, actions, image embedding, shapes, patterns, and text.
In addition to the standard PDF features, HPDF provides some typesetting features built on top of the PDF core. With HPDF, it is possible to define complex styles for sentences and paragraphs. HPDF implements an optimum-fit line-breaking algorithm, a bit like TeX’s, and uses the standard Liang hyphenation algorithm.
HPDF is at version 1.3. It is progressing continuously. HPDF is available on Hackage.
Several features are still missing: the only supported fonts are the standard PDF ones. A future version should support TrueType fonts and different character encodings. For support of Asian languages, I will ask for help from the Haskell community.
I also plan to define an API to ease the definition of complex layouts (slides, books). Currently the layout has to be coded by hand, but it is already possible to build complex things.
The documentation is a bit weak and will have to be improved.
As part of his master’s thesis work, Peter van Keeken implemented a library to data mine logged Helium (→2.4) programs, in order to investigate aspects of how students program Haskell, how they learn to program, and how good Helium is at generating understandable feedback and hints. The software can be downloaded from http://www.cs.uu.nl/wiki/bin/view/Hage/Neon, which also gives some examples of output generated by the system. The downloads only contain a small sample of loggings, but this allows programmers to play with it.
Uniplate is a boilerplate removal library, with similar goals to the original Scrap Your Boilerplate work. It requires fewer language extensions, and allows more succinct traversals with higher performance than SYB. A paper including many examples was presented at the Haskell Workshop 2007.
If you are writing a compiler, or any program that operates over values with many constructors and nested types, you should be using a boilerplate removal library. This library provides a gentle introduction to the field, and can be used practically to achieve substantial savings in code size and maintainability.
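To give a flavour, here is a hand-rolled sketch of the core Uniplate-style operations on a toy expression type; the library itself provides these combinators and derives the underlying traversal generically, so none of this per-type boilerplate needs writing in practice.

```haskell
-- A toy expression type and hand-written traversal helpers,
-- mirroring what Uniplate's combinators do generically.
data Expr = Val Int | Neg Expr | Add Expr Expr
  deriving (Eq, Show)

-- immediate sub-expressions
children :: Expr -> [Expr]
children (Val _)   = []
children (Neg x)   = [x]
children (Add x y) = [x, y]

-- rebuild a node after transforming its immediate sub-expressions
descend :: (Expr -> Expr) -> Expr -> Expr
descend _ (Val n)   = Val n
descend f (Neg x)   = Neg (f x)
descend f (Add x y) = Add (f x) (f y)

-- every sub-expression, including the expression itself
universe :: Expr -> [Expr]
universe x = x : concatMap universe (children x)

-- bottom-up transformation
transform :: (Expr -> Expr) -> Expr -> Expr
transform f = f . descend (transform f)

-- example rewrite: eliminate double negation everywhere
simplify :: Expr -> Expr
simplify = transform go
  where go (Neg (Neg x)) = x
        go e             = e

main :: IO ()
main = print (simplify (Add (Val 1) (Neg (Neg (Val 2)))))
```

With the real library, `children` and `descend` come for free from a single `uniplate` definition (or are derived), and `universe` and `transform` work over any such type.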
This library defines a Haskell Interpreter monad. It allows you to load Haskell modules, browse them, type-check and evaluate strings containing Haskell expressions, and even coerce them into values. The operations are thread-safe and type-safe (even the coercion of expressions to values).
It may be useful for those who need GHCi-like functionality in their programs but do not want to mess with the GHC-API innards. Additionally, unlike the latter, hint provides an API that is consistent across GHC versions.
Works with GHC 6.6.x and 6.8.x.
The latest stable version can be downloaded from Hackage.
hs-plugins is a library for dynamic loading and runtime compilation of Haskell modules, for Haskell and foreign language applications. It can be used to implement application plugins, hot swapping of modules in running applications, runtime evaluation of Haskell, and enables the use of Haskell as an application extension language.
hs-plugins has been ported to GHC 6.8, and version 1.2 has been released.
HsColour is a small command-line tool (and Haskell library) that syntax-colourises Haskell source code for multiple output formats. It consists of a token lexer, classification engine, and multiple separate pretty-printers for the different formats. Current supported output formats are ANSI terminal codes, HTML (with or without CSS), LaTeX, and IRC chat codes. In all cases, the colours and highlight styles (bold, underline, etc.) are configurable. It can additionally place HTML anchors in front of declarations, to be used as the target of links you generate in Haddock (→4.2.1) documentation.
HsColour is widely used to prettify source code in blog entries, to generate library documentation on the web, and to improve the readability of GHC’s intermediate-code debugging output. The current version is 1.9, which adds the mIRC backend and a few bugfixes.
A small, efficient, and portable regex library for Perl 5 compatible regular expressions. The PCRE library is a set of functions that implement regular expression pattern matching using the same syntax and semantics as Perl 5.
HStringTemplate is a port of the StringTemplate library to Haskell. StringTemplate is a templating system that enforces strict model-view separation via a Turing-incomplete grammar that nonetheless provides powerful recursive constructs. The library provides template grouping and inheritance, as well as escaping. It is especially suited for rapid and iterative development of web applications. HStringTemplate is currently at release 0.3.1 and is available via Hackage.
Report by: Henrique Ferreiro Garcia
Status: parses and pretty-prints almost all of Core Erlang
CoreErlang is a Haskell library which consists of a parser and a pretty-printer for the intermediate language used by Erlang. The parser uses the Parsec library, and the pretty-printer was modelled after the corresponding module of the haskell-src package. It also exposes a Syntax module which will be used to implement several Core-to-Core optimisations.
It is not finished yet, but it parses almost all of Core Erlang and the pretty-printer works quite well.
It will be published on Hackage shortly.
Parse-dimacs is a Parsec parser for a common file format — DIMACS — describing conjunctive normal form (CNF) formulae. CNF formulae are typically used as input to satisfiability solvers.
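For illustration, here is the DIMACS CNF format handled with plain base functions; `parseDimacs` below is a hypothetical helper written for this sketch, not parse-dimacs’s actual Parsec-based API.

```haskell
-- DIMACS CNF: 'c' lines are comments, the 'p' line states the problem
-- size, and clauses are runs of non-zero literals each terminated by 0.
parseDimacs :: String -> [[Int]]
parseDimacs s = splitClauses nums
  where
    body = [ l | l <- lines s
               , not (null l)
               , head l /= 'c'     -- skip comment lines
               , head l /= 'p' ]   -- skip the problem line
    nums = concatMap (map read . words) body :: [Int]
    -- cut the flat literal stream at each terminating 0
    splitClauses [] = []
    splitClauses xs = let (c, rest) = break (== 0) xs
                      in c : splitClauses (drop 1 rest)

example :: String
example = unlines
  [ "c a tiny formula: (x1 v -x2) & (x2 v x3)"
  , "p cnf 3 2"
  , "1 -2 0"
  , "2 3 0"
  ]

main :: IO ()
main = print (parseDimacs example)  -- [[1,-2],[2,3]]
```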
The parser is available from Hackage: http://hackage.haskell.org/cgi-bin/hackage-scripts/package/parse-dimacs
The next release will concentrate on optimisation, specifically for large CNF formulae. The interface is simple and should be stable.
A graph language can be described by a graph grammar, in a manner similar to the string grammars known from the theory of formal languages. In the last HCAR we introduced our Haskell library of graph parser combinators, with which several kinds of graph parsers can be implemented quite conveniently.
Unfortunately, it is quite complicated to realize a straightforward and reasonably efficient translation of so-called hyperedge replacement grammars (a context-free graph grammar formalism) to graph parsers. Problems are mainly caused by heavy non-determinism. Therefore, we have reimplemented our library in Curry (→3.2.1), a functional-logic programming language.
The Curry implementation provides two main benefits: Grammars can be translated to quite efficient parsers in a schematic way. Furthermore, parsers can be used as generators and for graph completion at the same time.
We exploit these nice properties in the domain of diagram editors. Here, graph grammars are used to define the syntax of visual languages, and graph completion appears to be very beneficial for the realization of powerful content assist. We have connected our framework to the diagram editor generator DiaGen (http://www.unibw.de/inf2/DiaGen).
Our application provides a strong motivation for further research into multi-paradigm declarative languages.
Report by: Richard A. Frost
Participants: Rahmatullah Hafiz, Paul Callaghan
The goal of the X-SAIGA project is to create algorithms and implementations which enable language processors (recognisers, parsers, interpreters, translators, etc.) to be constructed as modular and efficient embedded eXecutable SpecificAtIons of GrAmmars.
To achieve modularity, we have chosen to base our algorithms on top-down parsing. To accommodate ambiguity, we implement inclusive choice through backtracking search. To achieve polynomial complexity, we use memoisation. We have developed an algorithm which accommodates direct left-recursion using curtailment of search. Indirect left recursion is also accommodated using curtailment together with a test to determine whether previously computed and memoised results may be reused depending on the context in which they were created and the context in which they are being considered for reuse.
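The “inclusive choice through backtracking” idea can be sketched with classic list-of-successes combinators: a parser returns every possible parse, so an ambiguous grammar yields several results. This is only the basic scheme; the X-SAIGA algorithms add memoisation and curtailment on top (as written below, a left-recursive grammar would loop forever).

```haskell
-- A parser returns all (result, remaining input) pairs.
type Parser a = String -> [(a, String)]

term :: Char -> Parser Char
term c (x : xs) | x == c = [(c, xs)]
term _ _                 = []

-- inclusive choice: keep the results of both alternatives
orelse :: Parser a -> Parser a -> Parser a
orelse p q inp = p inp ++ q inp

-- sequencing: feed each leftover input of p into q
thenP :: Parser a -> Parser b -> Parser (a, b)
thenP p q inp = [ ((a, b), rest') | (a, rest)  <- p inp
                                  , (b, rest') <- q rest ]

mapP :: (a -> b) -> Parser a -> Parser b
mapP f p inp = [ (f a, rest) | (a, rest) <- p inp ]

-- ambiguous grammar  s ::= 'a' | 'a' s : parses every prefix of a run of 'a's
s :: Parser String
s = mapP (: []) (term 'a')
    `orelse` mapP (uncurry (:)) (term 'a' `thenP` s)

main :: IO ()
main = mapM_ print (s "aaa")  -- three parses: "a", "aa", "aaa"
```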
The algorithm is described more fully in Frost, R., Hafiz, R., and Callaghan, P. (2007) Modular and Efficient Top-Down Parsing for Ambiguous Left-Recursive Grammars. Proceedings of the 10th International Workshop on Parsing Technologies (IWPT), ACL-SIGPARSE. Pages: 109 – 120, June 2007, Prague. (http://cs.uwindsor.ca/~hafiz/iwpt-07.pdf)
We have implemented our algorithms, at various stages of their development, in Miranda (up to 2006) and in Haskell (from 2006 onwards). A description of a Haskell implementation of our 2007 algorithm can be found in Frost, R., Hafiz, R., and Callaghan, P. (2008) Parser Combinators for Ambiguous Left-Recursive Grammars. In Hudak, P. and Warren, D. S. (Eds.): Practical Aspects of Declarative Languages, 10th International Symposium (PADL 2008), San Francisco, CA, USA, January 7–8, 2008. Springer, LNCS 4902, pp. 167–181. (http://cs.uwindsor.ca/~hafiz/PADL_PAPER_FINAL.pdf)
The X-SAIGA website contains more information, links to other publications, proofs of termination and complexity, and Haskell code of the development version. (http://cs.uwindsor.ca/~hafiz/proHome.html)
We are currently extending our algorithm and implementation to accommodate executable specifications of fully-general attribute grammars.
One of our long-term goals is to use the X-SAIGA software to construct natural-language applications as executable specifications of attribute grammars and deploy them on the Public-Domain SpeechWeb, which is a related project of ours that is also funded by the Natural Science and Engineering Research Council of Canada (NSERC). More information on the SpeechWeb project, including details of how to access our prototype Public-Domain SpeechWeb by voice, and how to build and deploy your own speech applications, can be found at http://www.myspeechweb.org.
Report by: Jennifer Streb
Participants: Garrin Kimmell, Nicolas Frisby, Mark Snyder, Philip Weaver, Perry Alexander
Maintainer: Garrin Kimmell, Nicolas Frisby
Status: beta, actively developed
The InterpreterLib library is a collection of modules for constructing composable, monadic interpreters in Haskell. The library provides a collection of functions and type classes that implement semantic algebras in the style of Hutton and Duponcheel. Datatypes for related language constructs are defined as non-recursive functors and composed using a higher-order sum functor. The full AST for a language is the least fixed point of the sum of its constructs’ functors. To denote a term in the language, a sum algebra combinator composes algebras for each construct functor into a semantic algebra suitable for the full language, and the catamorphism introduces recursion. Another piece of InterpreterLib is a novel suite of algebra combinators conducive to monadic encapsulation and semantic re-use. The Algebra Compiler, an ancillary preprocessor derived from polytypic programming principles, generates functorial boilerplate Haskell code from minimal specifications of language constructs. As a whole, the InterpreterLib library enables rapid prototyping and simplified maintenance of language processors.
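The scheme described above can be compressed into a small sketch in the style of “data types à la carte”: non-recursive functors for each construct, a higher-order sum functor, per-construct algebras composed by a sum combinator, and a catamorphism to introduce recursion. Names here are illustrative, not InterpreterLib’s actual API.

```haskell
{-# LANGUAGE TypeOperators #-}

newtype Fix f = In (f (Fix f))            -- least fixed point of a functor

data Lit e = Lit Int                      -- one non-recursive construct
data Add e = Add e e                      -- another

data (f :+: g) e = InL (f e) | InR (g e)  -- higher-order sum functor

instance Functor Lit where fmap _ (Lit n) = Lit n
instance Functor Add where fmap f (Add x y) = Add (f x) (f y)
instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap h (InL x) = InL (fmap h x)
  fmap h (InR y) = InR (fmap h y)

type Expr = Fix (Lit :+: Add)             -- full AST: fixpoint of the sum

cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In t) = alg (fmap (cata alg) t)

-- semantic algebras for each construct, composed by a sum combinator
litAlg :: Lit Int -> Int
litAlg (Lit n) = n

addAlg :: Add Int -> Int
addAlg (Add x y) = x + y

sumAlg :: (f a -> a) -> (g a -> a) -> (f :+: g) a -> a
sumAlg f _ (InL x) = f x
sumAlg _ g (InR y) = g y

eval :: Expr -> Int
eval = cata (sumAlg litAlg addAlg)

-- smart constructors
lit :: Int -> Expr
lit = In . InL . Lit
add :: Expr -> Expr -> Expr
add x y = In (InR (Add x y))

main :: IO ()
main = print (eval (add (lit 1) (add (lit 2) (lit 3))))  -- 6
```

The Algebra Compiler mentioned above generates exactly this kind of functorial boilerplate (the Functor instances and smart constructors) from construct specifications.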
InterpreterLib is available for download at the link provided below. Version 1.0 of InterpreterLib was released in April 2007.
<nfrisby at ittc.ku.edu>
Data.ByteString provides packed strings (byte arrays held by a ForeignPtr), along with a list interface to these strings. It lets you do extremely fast IO in Haskell; in some cases, even faster than typical C implementations, and much faster than [Char]. It uses a flexible “foreign pointer” representation, allowing the transparent use of Haskell or C code to manipulate the strings.
Data.ByteString is written in Haskell98 plus the foreign function interface and cpp. It has been tested successfully with GHC 6.4, 6.6, 6.8, Hugs 2005–2006, and the head version of nhc98.
Bytestring 0.9.1.0 has been released, with full coverage data, an improved testsuite, and some key performance improvements.
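A small taste of the list-like interface over packed strings; bytestring ships with GHC distributions, so this sketch should run as-is.

```haskell
import qualified Data.ByteString.Char8 as B

main :: IO ()
main = do
  let s = B.pack "hello, bytestring"   -- a packed byte array
  print (B.length s)                   -- O(1), unlike [Char]
  print (B.count 'l' s)                -- works like its list counterpart
```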
Difference lists: a list-like type supporting O(1) append. This is particularly useful for efficient logging and pretty printing (e.g., with the Writer monad), where list append quickly becomes too expensive.
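The trick can be sketched in a few lines of plain Haskell; the dlist package wraps this representation in a newtype with a richer API.

```haskell
-- A list is represented as a function that prepends it, so append is
-- function composition and costs O(1) regardless of the left list's length.
type DList a = [a] -> [a]

fromList :: [a] -> DList a
fromList = (++)

toList :: DList a -> [a]
toList d = d []

append :: DList a -> DList a -> DList a
append = (.)

main :: IO ()
main = putStrLn (toList (fromList "log " `append` fromList "entry"))
```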
Dimensional is a library providing data types for performing arithmetic with physical quantities and units. Information about the physical dimensions of the quantities/units is embedded in their types, and the validity of operations is verified by the type checker at compile time. The boxing and unboxing of numerical values as quantities is done by multiplication and division with units. The library is designed to, as far as is practical, enforce/encourage best practices of unit usage.
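A bare-bones illustration of the idea of units in types, using a phantom tag so that adding metres to seconds is a compile-time error. This is a toy, not dimensional’s encoding: the library tracks all seven SI base dimensions and supports multiplication and division of quantities, which this sketch does not.

```haskell
{-# LANGUAGE DataKinds, KindSignatures #-}
import GHC.TypeLits (Symbol)

-- a numeric value tagged with a phantom, type-level unit name
newtype Qty (u :: Symbol) = Qty Double deriving (Eq, Show)

-- only quantities with the same unit may be added
(+.) :: Qty u -> Qty u -> Qty u
Qty x +. Qty y = Qty (x + y)

metre :: Qty "m"
metre = Qty 1

second :: Qty "s"
second = Qty 1

main :: IO ()
main = print (Qty 3 +. Qty 4 :: Qty "m")
-- metre +. second would be rejected by the type checker
```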
The core of dimensional is stable, with additional units being added on an as-needed basis. In addition to the SI system of units, dimensional has experimental support for user-defined dimensions and a proof-of-concept implementation of the CGS system of units. I am also experimenting with forward automatic differentiation and rudimentary linear algebra.
The current release is compatible with GHC 6.6.x and above and can be downloaded from Hackage or the project web site. The primary documentation is the literate Haskell source code, but the wiki on the project web site has a few usage examples to help with getting started.
The Darcs repo has moved to http://code.haskell.org/dimensional.
Report by: Henning Thielemann
Participants: Dylan Thurston, Mikael Johansson
Status: experimental, active development
The hierarchy of numerical type classes is revised and oriented towards algebraic structures. Axioms for the fundamental operations are given as QuickCheck properties, superfluous superclasses like Show are removed, semantic and representation-specific operations are separated, the hierarchy of type classes is more fine-grained, and identifiers are adapted to mathematical terms.
There are both certain new type classes representing algebraic structures and new types of mathematical objects. Currently supported algebraic structures are
There is also a collection of mathematical object types, which is useful both for applications and testing the class hierarchy. The types are
Using the revised system requires hiding some of the standard functions provided by Prelude, which is fortunately supported by GHC. The library has basic Cabal support and a growing test-suite of QuickCheck tests for the implemented mathematical objects.
Each data type now resides in a separate module. Cyclic dependencies could be eliminated by fixing some types in class methods. E.g., power exponents became simply Integer instead of any Integral type, which also has the advantage of reduced type defaulting.
Future plans are to collect more Haskell code related to mathematics, e.g., for linear algebra, and to study alternative numeric type class proposals and common computer algebra systems.
A still unsolved problem arises for residue classes, matrix computations, infinite precision numbers, fixed point numbers, and others. It should be possible to assert statically that the arguments of a function are residue classes with respect to the same divisor, or that they are vectors of the same size. Possible ways out are encoding values in types or local type class instances. The latter has so far neither been proposed formally nor been implemented in any Haskell compiler. The modules are implemented in a way that keeps all options open. That is, for each number type there is one module implementing the necessary operations, which expect the context as a parameter. Then there are several modules which provide different interfaces to these operations through type class instances.
Report by: Oleg Kiselyov
Participants: Ralf Lämmel, Keean Schupke, Gwern Branwen
HList is a comprehensive, general purpose Haskell library for typed heterogeneous collections including extensible polymorphic records and variants (→1.6.5). HList is analogous to the standard list library, providing a host of various construction, look-up, filtering, and iteration primitives. In contrast to the regular lists, elements of heterogeneous lists do not have to have the same type. HList lets the user formulate statically checkable constraints: for example, no two elements of a collection may have the same type (so the elements can be unambiguously indexed by their type).
An immediate application of HLists is the implementation of open, extensible records with first-class, reusable, and compile-time only labels. The dual application is extensible polymorphic variants (open unions). HList contains several implementations of open records, including records as sequences of field values, where the type of each field is annotated with its phantom label. We, and now others (Alexandra Silva, Joost Visser: PURe.CoddFish project), have also used HList for type-safe database access in Haskell. HList-based Records form the basis of OOHaskell (http://darcs.haskell.org/OOHaskell). The HList library relies on common extensions of Haskell 98.
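The concept can be sketched with modern GHC extensions as follows. Note this is a sketch of the idea only: the actual HList library predates type-level lists and encodes heterogeneous collections with type classes over nested constructors.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures, TypeOperators #-}

-- a list whose type records the type of every element
data HList (ts :: [*]) where
  HNil  :: HList '[]
  HCons :: t -> HList ts -> HList (t ': ts)

hHead :: HList (t ': ts) -> t
hHead (HCons x _) = x

hTail :: HList (t ': ts) -> HList ts
hTail (HCons _ xs) = xs

example :: HList '[Int, String, Bool]
example = HCons (3 :: Int) (HCons "three" (HCons True HNil))

main :: IO ()
main = do
  print (hHead example)             -- statically known to be the Int
  putStrLn (hHead (hTail example))  -- statically known to be the String
```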
The HList repository is available via Darcs: http://darcs.haskell.org/HList
The main change since the last report was the addition of a large set of patches by Gwern Branwen <gwern0 at gmail.com>, to arrange the library within the Data.HList hierarchy, to update the code for GHC 6.8.2 (using the LANGUAGE pragma, eliminating causes of GHC warnings), to build the library with the latest version of Cabal. He also uploaded the library to Hackage. Many thanks to Gwern Branwen.
Data.List.Stream provides the standard Haskell list data type and API, with an improved fusion system, as described in the papers “Stream Fusion” and “Rewriting Haskell Strings”. Code written to use the Data.List.Stream library should run faster than (or at worst, as fast as) existing list code. A precise, correct reimplementation is a major goal of this project, and Data.List.Stream comes bundled with around 1000 QuickCheck properties, testing against the Haskell98 specification and the standard library.
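The underlying representation can be sketched as follows, simplified from the “Stream Fusion” paper (the real library also has a Skip step, so that functions like filter stay non-recursive and fusible).

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A list becomes a seed plus a step function; pipelines of stream
-- transformers then compose without building intermediate lists.
data Stream a = forall s. Stream (s -> Step a s) s
data Step a s = Done | Yield a s

stream :: [a] -> Stream a
stream xs0 = Stream step xs0
  where step []       = Done
        step (x : xs) = Yield x xs

unstream :: Stream a -> [a]
unstream (Stream step s0) = go s0
  where go s = case step s of
          Done       -> []
          Yield x s' -> x : go s'

-- map over the stream: no new list, just a wrapped step function
mapS :: (a -> b) -> Stream a -> Stream b
mapS f (Stream step s0) = Stream step' s0
  where step' s = case step s of
          Done       -> Done
          Yield x s' -> Yield (f x) s'

main :: IO ()
main = print (unstream (mapS (* 2) (stream [1 .. 5 :: Int])))
```

A rewrite rule turning `stream . unstream` into the identity is what lets whole pipelines fuse in the real library.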
The latest version of the stream-fusion package is now available from Hackage.
Source and documentation can be found at: http://www.cse.unsw.edu.au/~dons/streams.html
Edison is a library of purely functional data structures for Haskell, originally written by Chris Okasaki. Conceptually, it consists of two things:
In theory, either component may be used independently of the other.
I took over maintenance of Edison in order to update Edison to use the most current Haskell tools. The following major changes have been made since version 1.1, which was released in 1999.
Edison is currently in maintain-only mode. I do not have the time required to enhance Edison in the ways I would like. If you are interested in working on Edison, do not hesitate to contact me.
The biggest thing that Edison needs is a benchmarking suite. Although Edison currently has an extensive unit test suite for testing correctness, and many of the data structures have proven time bounds, I have no way to evaluate or compare the quantitative performance of data structure implementations in a principled way. Unfortunately, benchmarking data structures in a non-strict language is difficult to do well. If you have an interest or experience in this area, your help would be very much appreciated.
This library provides a wrapper to mmap(2), allowing files or devices to be lazily loaded into memory as strict or lazy ByteStrings (→5.5.1), using the virtual memory subsystem to do on-demand loading.
Report by: Lennart Kolmodin
Participants: Duncan Coutts, Don Stewart, Binary Strike Team
The Binary Strike Team is pleased to announce yet another release of a new, pure, efficient binary serialisation library.
The “binary” package provides efficient serialisation of Haskell values to and from lazy ByteStrings. ByteStrings constructed this way may then be written to disk, written to the network, or further processed (e.g., stored in memory directly, or compressed in memory with zlib or bzlib).
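A minimal round trip through the package’s main entry points; binary ships with GHC distributions, so this sketch should run as-is.

```haskell
-- encode produces a lazy ByteString; decode reads one back.
import Data.Binary (encode, decode)

main :: IO ()
main = do
  let bs = encode (42 :: Int, "forty-two")   -- serialise a pair
  print (decode bs :: (Int, String))         -- deserialise it again
```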
The binary library has been heavily tuned for performance, particularly for writing speed. Throughput of up to 160M/s has been achieved in practice, and in general speed is on par with or better than NewBinary, with the advantage of a pure interface. Efforts are underway to improve performance still further. Plans are also taking shape for a parser combinator library on top of binary, for bit parsing and foreign structure parsing (e.g., network protocols).
Data.Derive has support for automatically generating Binary instances, allowing you to read and write your data structures with little fuss.
Binary was developed by a team of 8 during the Haskell Hackathon in Oxford 2007, and since then about 15 people have contributed code and many more given feedback and cheerleading on #haskell (→1.2).
The package is cabalised and available through Hackage.
darcs get --partial http://code.haskell.org/binary
The latest version is 4.1.0. Contributions since the last update include: TEA, BubbleBabble, HMAC, and more flavours of SHA.
The interface to SHA-1 is still different from MD5, and the whole library still needs a rethink. Unfortunately, I still do not have the time to undertake much work on it at the moment and it is not clear when I will have more time. I am still therefore looking for someone to help keeping the repository up-to-date with contributions, re-structuring the library, and managing releases.
This release contains:
The current release remains 0.0.11, which contains functions to handle ASN.1, X.509, PKCS#8, and PKCS#1.5.
This still has a dependency on NewBinary but we now have a way of removing this to use ByteStrings (→5.5.1), although the work remains to be done.
The current version handles the Basic Encoding Rules (BER). In addition, even more work (over 400 Darcs patches) has been undertaken on handling the Packed Encoding Rules (PER) using a GADT to represent the Abstract Syntax Tree (we will probably move the BER to use the same AST at some point). Interestingly, this has resulted in us finding a small bug in the ASN.1 specification which we have reported to the ITU. You can download the current working version and try the unit and QuickCheck property tests for PER. These are not yet built by Cabal.
This release supports:
Report by: Tiago Miguel Laureano Alves
Participants: Joost Visser, Pablo Berdaguer, Alcino Cunha, José Nuno Oliveira, Hugo Pacheco
A two-level data transformation consists of a type-level transformation of a data format coupled with value-level transformations of data instances corresponding to that format. Examples of two-level data transformations include XML schema evolution coupled with document migration, and data mappings used for interoperability and persistence.
In the 2LT project, support for two-level transformations is being developed using Haskell, relying in particular on generalised abstract data types (GADTs). Currently, the 2LT package offers:
A release of 2LT is available from the project URL.
Recently, the 2LT project has been migrated to Google Code. New functionality is planned, such as elaboration of the front-ends and the creation of a web interface.
Takusen is a library for accessing DBMSs. Like HSQL, we support arbitrary SQL statements (currently strings, extensible to anything that can be converted to a string).
Takusen’s “unique selling-point” is safety and efficiency. We statically ensure that all acquired database resources such as cursors, connection, and statement handles are released, exactly once, at predictable times. Takusen can avoid loading the whole result set in memory and so can handle queries returning millions of rows, in constant space. Takusen also supports automatic marshaling and unmarshaling of results and query parameters. These benefits come from the design of query result processing around a left-fold enumerator.
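The left-fold idea can be sketched in pure code: the library supplies the enumerator, the user supplies an iteratee that folds over rows and may stop early, so the producer controls resource acquisition and release and never needs to materialise the whole result set. Names here are illustrative, not Takusen’s actual API.

```haskell
-- An iteratee folds one row into the seed; Left stops early, Right continues.
type Iteratee row seed = row -> seed -> Either seed seed

-- The enumerator drives the fold.  In Takusen this is where the cursor
-- lives, so it can be released deterministically when the fold stops.
enumRows :: [row] -> Iteratee row seed -> seed -> seed
enumRows []       _  seed = seed
enumRows (r : rs) it seed = case it r seed of
  Left  done -> done               -- consumer asked to stop
  Right next -> enumRows rs it next

-- collect the first three rows, then stop
firstThree :: Int -> [Int] -> Either [Int] [Int]
firstThree r acc
  | length acc >= 2 = Left (acc ++ [r])
  | otherwise       = Right (acc ++ [r])

main :: IO ()
main = print (enumRows [1 .. 1000000 :: Int] firstThree [])
```

Because the fold stops after three rows, the million-row “result set” is never traversed, which is the constant-space behaviour the entry describes.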
Currently we fully support Oracle, Sqlite, and PostgreSQL, and ODBC support exists but is not fully tested.
Since the last report we have:
Session Types provide a way to express communication protocols. They specify, for a bi-directional channel, who says what, in what order, and to whom. Looping and branching structures are supported. A session type thus types a communication channel, ensuring that use of the channel is safe (i.e., when one party speaks, the other listens, and the type of the value sent matches the type of the value expected to be received). Session Types thereby offer temporal information that is absent from most other concurrency techniques.
The focus of the library is on the communication between threads. However, work is progressing on supporting fully distributed operation. The library supports forking new processes with channels; creating new channels between existing processes; the communication of process IDs; the communication of channels (higher-order channels or delegation); subtyping of Pids; and some initial work on real distributed operation over Handles.
Current development is rapid and is focusing on building up a strong suite of examples and networked operation. Recent features have added support for higher-order channels and a new DSL for specifying Session Types (which supports lexical scoping and is composable).
If you are doing any multi-threaded development in Haskell and find the properties and simplicity of message passing concurrency attractive, then I strongly encourage you to take a look at Session Types.
Report by: Edward Kmett
Haskell has derived a lot of benefits from the tools it has borrowed from category theory, such as functors, monads, and arrows. However, there are a lot more tools out there. For instance, comonads have been barely touched by Haskell programmers. This library attempts to collect many of the more interesting bits of category theory and constructive algorithmics in one place and generalise or dualize previous results where possible.
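As a taste of the territory, here is a minimal sketch of the comonad interface, with the environment comonad as an instance. This is a toy illustration; the library’s actual class hierarchy is considerably richer.

```haskell
-- The dual of Monad: extract mirrors return, duplicate mirrors join.
class Functor w => Comonad w where
  extract   :: w a -> a
  duplicate :: w a -> w (w a)

-- the dual of (=<<): run a context-consuming computation everywhere
extend :: Comonad w => (w a -> b) -> w a -> w b
extend f = fmap f . duplicate

-- the environment comonad: a value paired with read-only context
instance Comonad ((,) e) where
  extract (_, a)   = a
  duplicate (e, a) = (e, (e, a))

-- a context-using computation: scale a value by its environment
scale :: (Int, Int) -> Int
scale (e, a) = e * a

main :: IO ()
main = print (extract (extend scale (10, 4)))  -- 40
```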
The library includes:
The IOSpec library provides a pure specification of several functions in the IO monad. This may be of interest to anyone who wants to debug, reason about, analyse, or test impure code.
The IOSpec library is essentially a drop-in replacement for several other modules, such as Data.IORef and Control.Concurrent. Once you are satisfied that your functions are reasonably well-behaved with respect to the pure specification, you can drop the IOSpec import in favour of the “real” IO modules. The ideas underlying the previous version are described by a recent Haskell Workshop paper.
The latest version, however, supports several exciting new features. Besides providing a pure specification of STM, it allows you to build your own pure IO monad a la carte — allowing you to be explicit about which effects your program is using.
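The idea of a pure specification can be sketched with a toy teletype monad: programs become data, and a pure run function turns an input stream into a result and an output stream. This illustrates the approach only; IOSpec’s real API instead mirrors Data.IORef, Control.Concurrent, and friends.

```haskell
-- IO as data: a program is a tree of requests.
data Teletype a
  = Return a
  | GetChar (Char -> Teletype a)
  | PutChar Char (Teletype a)

instance Functor Teletype where
  fmap f (Return a)    = Return (f a)
  fmap f (GetChar k)   = GetChar (fmap f . k)
  fmap f (PutChar c m) = PutChar c (fmap f m)

instance Applicative Teletype where
  pure = Return
  mf <*> mx = mf >>= \f -> fmap f mx

instance Monad Teletype where
  Return a    >>= k = k a
  GetChar g   >>= k = GetChar (\c -> g c >>= k)
  PutChar c m >>= k = PutChar c (m >>= k)

-- pure semantics: feed an input stream, observe the output stream
run :: Teletype a -> String -> (a, String)
run (Return a)    _        = (a, "")
run (GetChar k)   (c : cs) = run (k c) cs
run (GetChar _)   []       = error "run: out of input"
run (PutChar c m) input    = let (a, out) = run m input
                             in (a, c : out)

-- echo the first n input characters
echo :: Int -> Teletype ()
echo 0 = Return ()
echo n = GetChar (\c -> PutChar c (echo (n - 1)))

main :: IO ()
main = print (run (echo 3) "abcde")  -- ((),"abc")
```

Properties of such programs can then be tested with QuickCheck, since run is an ordinary pure function.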
In the next major release, I would like to incorporate efficiency improvements suggested by Janis Voigtländer and allow the extraction of an IO computation from its pure counterpart.
If you use IOSpec for anything useful at all, I would love to hear from you.
Gtk2Hs is a GUI Library for Haskell based on Gtk+. Gtk+ is an extensive and mature multi-platform toolkit for creating graphical user interfaces.
GUIs written using Gtk2Hs use themes to resemble the native look on Windows and, of course, various desktops on Linux, Solaris, and FreeBSD. Gtk+ and Gtk2Hs also support Mac OS X (it currently uses the X11 server, but a native port is in progress — see below).
The Gtk2Hs library is in a usable state and many parts of the API can be considered stable. We received very positive feedback on tutorials which were contributed by various people (→1.6.4). In fact, while the maintainers of Gtk2Hs manage to improve the library itself, we would greatly benefit from more outside help, for instance from somebody who could create releases and fix (configuration) bugs. The reason that we have not released in the last six months is simply lack of time (and access to platforms), even though the list and tree widgets have been complete since January. We have furthermore implemented more functions in Pango, the font rendering engine. More frequent releases are needed to get these improvements to people who rely on binary installers.
Our further development goals are as follows: In the medium term we hope to support the new features in Gtk+ 2.12 and to improve the signals API. In the longer term we hope to modularise Gtk2Hs and enable it to be built and distributed with Cabal and Hackage. A promising recent development is that Gtk+’s native (non-X11) backend for Mac OS X has got to the point where Gtk2Hs can be built against it and most of the demo programs work. It would be great if we could find somebody who can create a Mac OS installer that bundles Gtk2Hs with the native Aqua port of Gtk.
Report by: Wolfgang Jeltsch
Grapefruit is a library for creating graphical user interfaces and animated graphics in a declarative way.
Fundamental to Grapefruit is the notion of signal. A signal denotes either a time-varying value (the continuous case) or a sequence of values assigned to discrete points in time (the discrete case). Signals can be constructed in a purely functional manner.
User interfaces are described as systems of interconnected components which communicate via signals. To build such systems, the methods from the Arrow and ArrowLoop classes are used. For describing animated graphics, a special signal type exists.
Grapefruit also provides list signals. A list signal is a list-valued signal which can be updated incrementally and thus efficiently. In addition, a list signal associates an identity with each element, so that moving an element within the list can be distinguished from removing the element and adding it again. List signals can be used to describe dynamic user interfaces, i.e., user interfaces with a changing set of components and changing order of components.
Grapefruit descriptions of user interfaces and animations always cover their complete lifetime. No explicit event handler registrations and no explicit recalculations of values are necessary. This is in line with the declarative nature of Haskell because it stresses the behaviour of GUIs and animations instead of how this behaviour is achieved. Internally, though, Grapefruit is implemented efficiently using a common event dispatching and handling mechanism.
Grapefruit is currently based on Gtk2Hs (→5.8.1) and HOpenGL, but implementations on top of other GUI and graphics libraries are possible. The aim is to provide alternative implementations based on different GUI toolkits, so that a single application is able to integrate itself into multiple desktop environments.
Shellac is a framework for building read-eval-print style shells. Shells are created by declaratively defining a set of shell commands and an evaluation function. Shellac supports multiple shell backends, including a “basic” backend, which uses only Haskell IO primitives, and a fully featured “readline” backend based on the Haskell readline bindings found in the standard libraries.
This library attempts to allow users to write shells in a declarative way and still enjoy the advanced features that may be available from a powerful line editing package like readline.
Shellac is available from Hackage, as is the related Shellac-readline package.
Shellac has been successfully used by several independent projects, and the API is now fairly stable. I will likely be releasing an officially “stable” version in the not-too-distant future. I anticipate few changes from the current version.
The diagrams library provides an embedded domain-specific language for creating simple pictures and diagrams, built on top of the Cairo rendering engine. Values of type Diagram are built up in a compositional style from various primitives and combinators, and can be rendered to a physical medium, such as a file in PNG, PS, PDF, or SVG format.
For example, consider the following diagram to illustrate the 24 permutations of four objects:
The diagrams library was used to create this diagram with very little effort (about ten lines of Haskell, including the code to actually generate permutations). The source code for this diagram, as well as other examples and further resources, can be found at http://code.haskell.org/diagrams/.
The library is still in its infancy and is under active development. New contributors and testers are welcome! In particular, this is an ideal project for anyone relatively new to Haskell who is seeking to contribute to a project as a way of becoming more comfortable with the language.
|Report by:||George Giorgidze|
YampaSynth is a purely functional framework for programming modular synthesisers in Haskell using Yampa, a domain specific language embedded in Haskell for programming hybrid systems. A synthesiser, be it a hardware instrument or a pure software implementation, as here, is said to be modular if it provides sound-generating and sound-shaping components that can be interconnected in arbitrary ways.
Basic sound-generating and sound-shaping modules have been implemented, e.g., oscillator, amplifier, mixer, envelope generator, filter, etc. These modules are used to develop example applications:
The source code, together with example applications, has been cabalised and is available under the BSD3 license.
We would like to see a richer collection of sound-generating and sound-shaping modules in the framework, and complete implementation of MIDI, SoundFont, and related standards. However, one might find some other interesting continuation of the work. We are open for suggestions and would be happy if someone wishes to collaborate.
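As an illustration of the kind of modules involved (a plain list-based sketch with invented names; YampaSynth itself expresses these as Yampa signal functions), an oscillator, an amplifier, and a mixer might look like:

```haskell
-- Sketch only: sound-generating and sound-shaping modules modelled
-- as plain (infinite) sample streams rather than signal functions.
sampleRate :: Double
sampleRate = 44100

-- A sine oscillator at the given frequency (in Hz).
sineOsc :: Double -> [Double]
sineOsc freq = [ sin (2 * pi * freq * t / sampleRate) | t <- [0 ..] ]

-- Sound-shaping modules are pointwise operations on streams.
amplify :: Double -> [Double] -> [Double]
amplify gain = map (gain *)

mix :: [Double] -> [Double] -> [Double]
mix = zipWith (+)
```

The point of the Yampa formulation is that such modules can be interconnected in arbitrary ways, including feedback, which plain lists do not capture as directly.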
|Report by:||Henning Thielemann|
|Status:||experimental, active development|
Haskore is a Haskell library originally written by Paul Hudak that allows music composition within Haskell, i.e., without the need of a custom music programming language. This collaborative project aims at improving consistency, adding extensions, revising design decisions, and fixing bugs. Specific improvements include:
HVAC (Http View and Controller) is a lightweight web application framework based on FastCGI and STM, with an emphasis on concise declarative code and concurrent atomic and transactional logic. The unique feature of hvac is that each request is processed within a single logical transaction that bridges STM, database, and filesystem access. Programs written using hvac are therefore terse, easy to reason about, and can largely be written with applicative combinators alone. Also provided are simply deployed caching combinators with static guarantees as to their properties, a sophisticated library of controller combinators for RESTful programming, and a system for typed, composable validation. Future work includes a strong test suite to verify hvac’s atomic properties and a strongly statically typed database DSL. An official first release is forthcoming shortly.
|Report by:||Uwe Schmidt|
|Status:||seventh major release (current release: 8.0.0)|
The Haskell XML Toolbox (HXT) is a collection of tools for processing XML with Haskell. It is written purely in Haskell 98. The core component of the Haskell XML Toolbox is a validating XML parser that almost fully supports the Extensible Markup Language (XML) 1.0 (Second Edition). There is a validator based on DTDs and a new, more powerful one for Relax NG schemas.
The Haskell XML Toolbox is based on the ideas of HaXml (→5.10.3) and HXML, but introduces a more general approach for processing XML with Haskell. The processing model is based on arrows. The arrow interface is more flexible than the filter approach taken in the earlier HXT versions and in HaXml. It is also safer; type checking of combinators becomes possible with the arrow approach.
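The safety benefit comes from the fact that arrow combinators are ordinary typed values. The following sketch uses the standard Control.Arrow combinators on plain functions — not HXT's XML arrows — but shows the same compositional style:

```haskell
import Control.Arrow ((>>>), (&&&))

-- Sketch: the combinator style HXT uses for XML filters, applied
-- here to plain functions (the (->) arrow instance from base).
trim :: String -> String
trim = dropWhile (== ' ')

emphasise :: String -> String
emphasise s = "<em>" ++ s ++ "</em>"

-- Chain one step, then apply two further steps "in parallel"
-- to its result; the types of each stage are checked as usual.
process :: String -> (String, Int)
process = trim >>> emphasise &&& length
```

In HXT the same `>>>` and `&&&` combinators chain XML filters over document trees, and an ill-typed composition is rejected at compile time.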
In a master student’s project by Christian Uhlig, the development of a web server called Janus has been completed. The thesis is titled A Dynamic Webserver with Servlet Functionality in Haskell Representing all Internal Data by Means of XML. HXT has been used for processing all internal data of this web server. The Janus server is highly configurable and can be used not only as an HTTP server, but for various other server-like tasks. The server is used and will be further developed and extended within another project called Holumbus (→6.3.1), a framework for developing specialised search engines. The Janus system is available via http://darcs.fh-wedel.de/janus. Current activity consists of testing, example applications, demos, and documentation. An application of Janus and Holumbus is the Hayoo! search engine (http://holumbus.fh-wedel.de/hayoo/).
The Haskell XML Toolbox Web page (http://www.fh-wedel.de/~si/HXmlToolbox/index.html) includes downloads, online API documentation, a cookbook with nontrivial examples of XML processing using arrows and RDF documents, and master theses describing the design of the toolbox, the DTD validator, the arrow based Relax NG validator, and the XSLT system. A getting started tutorial about HXT is available in the Haskell Wiki (http://www.haskell.org/haskellwiki/HXT).
HaXml provides many facilities for using XML from Haskell. The public stable release is 1.13.3, with support for building via Cabal for ghc-6.8.x.
The development version (currently at 1.19, also available through a Darcs repository) includes a much-requested lazy parser and a SAX-like streaming parser. Some minor work still remains to tidy things up before the development version is tagged and released as stable.
The lazy parser combinators used by HaXml now live in a separate library package called polyparse.
TagSoup is a library for extracting information out of unstructured HTML code, sometimes known as tag-soup. The HTML does not have to be well formed, or render properly within any particular framework. This library is for situations where the author of the HTML is not cooperating with the person trying to extract the information, but is also not trying to hide the information.
The library provides a basic data type for a list of unstructured tags, a parser to convert HTML into this tag type, and useful functions and combinators for finding and extracting information. The library has seen real use in an application to give Hackage (→5.1) listings, and is used in the next version of Hoogle (→4.4.1).
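To illustrate the "never fails on malformed input" idea, here is a toy tokeniser with invented names, far simpler than tagsoup's actual data types: it splits a string into tags and text, and simply keeps an unclosed `<` as literal text instead of failing.

```haskell
-- A minimal tag-soup style tokeniser (illustrative only). Closing
-- tags come out as TagOpen with a leading '/' in the name; the
-- point is that no input, however malformed, makes it fail.
data Tag = TagOpen String | Text String
  deriving (Eq, Show)

tokenise :: String -> [Tag]
tokenise [] = []
tokenise ('<':rest) =
  case break (== '>') rest of
    (name,   '>':more) -> TagOpen name : tokenise more
    (orphan, _)        -> [Text ('<' : orphan)]  -- unclosed '<': keep as text
tokenise s =
  let (txt, more) = break (== '<') s
  in Text txt : tokenise more
```

The real library additionally handles attributes, entities, comments, and close tags as distinct constructors.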
Work continues on both the API and the implementation of tagsoup. Lots of people have made use of tagsoup in their applications, generating valuable feedback. A new version of tagsoup is imminent.
WASH/CGI is an embedded DSL (read: a Haskell library) for server-side Web scripting based on the purely functional programming language Haskell. Its implementation is based on the portable common gateway interface (CGI) supported by virtually all Web servers. WASH/CGI offers a unique and fully-typed approach to Web scripting. It offers the following features:
Completed items are:
Current work includes:
The WASH Webpage (http://www.informatik.uni-freiburg.de/~thiemann/WASH/) includes examples, a tutorial, a draft user manual, and papers about the implementation.
“hinotify” is a simple Haskell wrapper for the Linux kernel’s inotify mechanism, which, since Linux kernel 2.6.13, allows applications to watch for file changes. You can, for example, use it to implement a proper locking procedure on a set of files, or to keep your application up to date on a directory of files in a fast and clean way.
As file and directory notification is available on many operating systems, upcoming work includes trying to find a common API that could be shared across all platforms. Most recent work has been to explore what is possible under Microsoft Windows, and to find a suitable API for both platforms. This has been joint work with Niklas Broberg. We are still looking for contributors for *BSD and Mac OS X. If you are interested, contact us.
hspread is a client library for the Spread toolkit. It is fully implemented in Haskell, using the binary package (→5.6.2) for fast parsing of network packets. Its aim is to make it easier to implement correct distributed applications by taking advantage of the guarantees granted by Spread, such as reliable and totally ordered messages, and it supports the most recent version of the protocol.
There is interest in further developing a higher level framework for Haskell distributed programming by extending the protocol if necessary.
darcs get http://happs.org/repo/hspread
Harpy is a library for run-time code generation of IA-32 machine code. It provides not only a low level interface to code generation operations, but also a convenient domain specific language for machine code fragments, a collection of code generation combinators, and a disassembler. We use it in two independent (unpublished) projects: On the one hand, we are implementing a just-in-time compiler for functional programs, on the other hand, we use it to implement an efficient type checker for a dependently typed language. It might be useful in other domains where specialised code generated at run-time can improve performance.
Harpy’s implementation makes use of the foreign function interface, but only contains functions written in Haskell. Moreover, it has some uses of other interesting Haskell extensions, as for example multi-parameter type classes to provide an in-line assembly language, and Template Haskell to generate stub functions to call run-time generated code. The disassembler uses Parsec to parse the instruction stream.
We intend to implement supporting operations for garbage collectors cooperating with run-time generated code.
A second release is forthcoming, featuring improvements in the memory management, better floating point instruction support, and named labels that are shown in the disassembler output.
Darcs is a distributed revision control system written in Haskell. In Darcs, every copy of your source code is a full repository, which allows for full operation in a disconnected environment, and also allows anyone with read access to a Darcs repository to easily create their own branch and modify it with the full power of Darcs’ revision control. Darcs is based on an underlying theory of patches, which allows for safe reordering and merging of patches even in complex scenarios. For all its power, Darcs remains a very easy-to-use tool for everyday use, because it follows the principle of keeping simple things simple.
The year 2008 has seen some major milestones reached in the development of Darcs: years of development of Darcs 2 culminated in a recent release of Darcs 2.0.0. Darcs 2 introduces a number of new features and bug fixes big and small. The most notable changes include fixing the “conflict bug”, supporting a new repository format with refined semantics, adding hashed repositories for increased speed and robustness, and a global cache for faster downloading of patches. In the 1.x branch of Darcs, support for GHC 6.8.x has been added since the last HCAR.
Darcs has also seen a revamp of its infrastructure. User visible changes include a consolidation of the stable and unstable development branches, and separation of functions for the two mailing lists. Future work for Darcs will centre around building interesting new features using the new repository format, continued optimising and code restructuring, improving testing of Darcs and adding buildbots, and completing transition to “type witnesses” to ensure provably correct patch manipulation.
Patches great and small are heartily welcomed!
Darcs is free software licensed under the GNU GPL.
xmonad is a tiling window manager for X. Windows are arranged automatically to tile the screen without gaps or overlap, maximising screen use. Window manager features are accessible from the keyboard: a mouse is optional. xmonad is written, configured, and extensible in Haskell. Custom layout algorithms, key bindings, and other extensions may be written by the user in config files. Layouts are applied dynamically, and different layouts may be used on each workspace. Xinerama is fully supported, allowing windows to be tiled on several physical screens.
The new release 0.7 of xmonad added full support for the GNOME and KDE desktops, and adoption continues to grow, with binary packages of xmonad available for all major distributions.
|Report by:||Bastiaan Heeren|
|Participants:||Alex Gerdes, Johan Jeuring, Josje Lodder, Harrie Passier, Sylvia Stuurman|
|Status:||experimental, active development|
At the Open Universiteit Nederland we are building a collection of tools that support students in solving exercises incrementally by checking intermediate steps. All our tools are completely written in Haskell. The distinguishing feature of our tools is the detailed feedback that they provide, on several levels. For example, we have an online exercise assistant that helps to rewrite logical expressions into disjunctive normal form. Students get instant feedback when solving an exercise, and can ask for a hint at any point in the derivation. Other areas covered by our tools are solving linear equations, reducing matrices to echelon normal form, and basic operations on fractions.
The simplest kind of errors to deal with are syntactic errors, for which we use an error-correcting parser combinator library. For each exercise domain, we have formulated a set of rewrite rules, as well as a number of unsound (or buggy) rules to catch common mistakes. With these rules we can check all intermediate steps submitted by the user. We have also defined strategies for solving the exercises. A strategy dictates in which order the rules have to be applied to reach the solution; such a strategy takes the form of a context-free grammar. Strategies are a powerful means to report helpful and informative feedback.
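The rule-checking idea can be sketched in a few lines of plain Haskell (illustrative names, not the tool's actual API): a rule either rewrites an expression or fails, and a student's step is valid if some known rule produces it.

```haskell
-- Sketch of rewrite-rule checking for propositional logic.
-- Names and representation are invented for illustration.
data Logic
  = Var String
  | Not Logic
  | Logic :&&: Logic
  | Logic :||: Logic
  deriving (Eq, Show)

-- A rule either applies (rewriting the expression) or fails.
type Rule = Logic -> Maybe Logic

deMorganOr :: Rule                 -- not (p or q)  ==>  not p and not q
deMorganOr (Not (p :||: q)) = Just (Not p :&&: Not q)
deMorganOr _                = Nothing

doubleNeg :: Rule                  -- not (not p)  ==>  p
doubleNeg (Not (Not p)) = Just p
doubleNeg _             = Nothing

-- A submitted step is valid when some rule rewrites the previous
-- expression into the new one (here only checked at the top level;
-- the real tools also rewrite inside subterms).
validStep :: [Rule] -> Logic -> Logic -> Bool
validStep rules from to = any (\r -> r from == Just to) rules
```

Buggy rules fit the same `Rule` type, so a step matching a buggy rule can trigger a targeted error message rather than a generic rejection.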
We are offering our tools and our strategies as services to other e-learning tools and environments, such as MathDox and LeActiveMath. For the near future, we have scheduled sessions with students from our university to validate our approach, and to collect information about the usability of our tools. We plan to use generic programming techniques to support exercises from many more domains.
An online prototype version for rewriting logical expressions is available and can be accessed from our project page.
Years ago, Jurriaan Hage developed Marble to detect plagiarism among Java programs. Marble was written in Perl, takes just 660 lines of code and comments, and does the job well. The techniques used there, however, do not work well for Haskell, which is why a master thesis project was started, starring Brian Vermeer as the master student, to see if we can come up with a working system to discover plagiarism among Haskell programs. We are fortunate to have a large group of students trying their hand at our functional programming course each year (120–130 per year), and we have all the loggings of Helium, which we hope can help us tell whether the system finds enough plagiarism cases. The basic idea is to implement as many metrics as possible, and to see, empirically, which combination of metrics scores well enough for our purposes. The implementation will be made in Haskell. One thing we are particularly keen on is making sure that we still obtain good results for assignments in which students are given a large part of the solution and only need to fill in the missing parts.
Geordi is an IRC bot that compiles and (optionally) runs C++ code snippets. It has proved to be a very useful tool when teaching and discussing C++ on IRC. It is written in Haskell, and, being deployed on C++ channels at most of the big IRC networks, has the sneaky side-effect of getting some C++’ers interested in Haskell ;-).
Snapshots and Darcs repository can be found at the homepage.
The Lambda Shell is a feature-rich shell environment and command-line tool for evaluating terms of the pure, untyped lambda calculus. The Lambda Shell builds on the shell creation framework Shellac (→5.8.3), and showcases most of Shellac’s features.
Features of the Lambda Shell include:
The Lambda Shell was written as a showcase and textbook example for how to use the Shellac shell-creation library. However, it can also be used to gain a better understanding of the pure lambda calculus.
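Independent of Shellac, the evaluation core of such a tool is small. The following is a minimal normal-order evaluator for the pure untyped lambda calculus using de Bruijn indices — a textbook sketch, not the Lambda Shell's implementation:

```haskell
-- Terms in de Bruijn notation: Var 0 is the nearest binder.
data Term = Var Int | Lam Term | App Term Term
  deriving (Eq, Show)

-- Shift free variables >= c by d (needed when moving a term
-- under or out from binders).
shift :: Int -> Int -> Term -> Term
shift d c (Var k)   = Var (if k >= c then k + d else k)
shift d c (Lam t)   = Lam (shift d (c + 1) t)
shift d c (App a b) = App (shift d c a) (shift d c b)

-- Capture-avoiding substitution of s for variable j.
subst :: Int -> Term -> Term -> Term
subst j s (Var k)   = if k == j then s else Var k
subst j s (Lam t)   = Lam (subst (j + 1) (shift 1 0 s) t)
subst j s (App a b) = App (subst j s a) (subst j s b)

-- One step of normal-order (leftmost-outermost) reduction.
step :: Term -> Maybe Term
step (App (Lam t) s) = Just (shift (-1) 0 (subst 0 (shift 1 0 s) t))
step (App a b)       = case step a of
                         Just a' -> Just (App a' b)
                         Nothing -> App a <$> step b
step (Lam t)         = Lam <$> step t
step (Var _)         = Nothing

-- Reduce to normal form (may diverge, as the calculus allows).
normalise :: Term -> Term
normalise t = maybe t normalise (step t)
```

For example, with `tru = Lam (Lam (Var 1))` encoding the Church boolean, `normalise (App (App tru x) y)` yields `x` for closed `x` and `y`.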
|Report by:||Miguel Vilaca|
|Portability:||portable (depends on wxHaskell)|
INblobs is an editor and interpreter for Interaction Nets — a graph-rewriting formalism introduced by Lafont, inspired by Proof-nets for Multiplicative Linear Logic.
INblobs is built on top of the front-end Blobs from Arjan van IJzendoorn, Martijn Schrage, and Malcolm Wallace.
The tool is being developed using the repository system Darcs.
A new plugin that will allow INblobs to compile a textual functional program into Interaction Nets is being developed.
|Report by:||Uwe Schmidt|
|Participants:||Timo B. Hübel, Sebastian Schlatt, Stefan Schmidt|
|Status:||first beta release|
The Holumbus framework consists of a set of modules and tools for creating fast, flexible, and highly customisable search engines with Haskell. The framework consists of two main parts. The first part is the indexer, which extracts the data of a given type of documents, e.g., the documents of a web site, and stores it in an appropriate index. The second part is the search engine for querying the index.
An instance of the Holumbus framework is the Haskell API search engine Hayoo! (http://holumbus.fh-wedel.de/hayoo/). The web interface for Hayoo! is implemented with the Janus web server, written in Haskell and based on HXT (→5.10.2).
Currently, the indexer module is being further developed and extended, so that configuring which information is relevant, especially in the web pages to be indexed, becomes simple and easy. During this activity, new use cases of the framework will be implemented.
Another master thesis tackles distributed query evaluation in a network of machines. Here we will try to adopt ideas from Google’s map–reduce approach for Holumbus. The aim is to develop a generally applicable map–reduce-like framework, with the Holumbus search and index manipulation as a serious test case.
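The map–reduce shape referred to above can be sketched in a few lines of Haskell (an illustrative signature, not Holumbus's actual interface), using Data.Map from the containers package that ships with GHC:

```haskell
import Data.List (foldl')
import qualified Data.Map as Map

-- Sketch of the map-reduce shape: a map phase emits key/value
-- pairs from each input, which are then grouped by key and
-- reduced per group. (Distribution over machines is elided.)
mapReduce :: Ord k
          => (a -> [(k, v)])   -- map phase
          -> (k -> [v] -> r)   -- reduce phase
          -> [a] -> Map.Map k r
mapReduce m r =
  Map.mapWithKey r . foldl' insertPair Map.empty . concatMap m
  where insertPair acc (k, v) = Map.insertWith (++) k [v] acc

-- The classic example: counting word occurrences across documents.
wordCount :: [String] -> Map.Map String Int
wordCount = mapReduce (\doc -> [ (w, 1) | w <- words doc ])
                      (\_ vs -> sum vs)
```

An index-building job fits the same shape, with document contents as inputs and index terms as keys.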
The Holumbus web page (http://holumbus.fh-wedel.de/) includes downloads, Darcs web interface, current status, requirements, and documentation. Timo Hübel’s Master Thesis describing the Holumbus index structure and the search engine is available at http://holumbus.fh-wedel.de/branches/develop/doc/thesis-searching.pdf.
Top Writer is a web application for technical writers to easily edit and assemble topic-oriented user guides and other high-level reference works. Application users edit within a structured framework, using meaningful application elements to chunk and contain content. Users can extend the application with their own elements and rules, if needed. Delivery of content is meant to be multi-format, with each format having separate templating rules.
Currently, the focus for delivering output is on generated HTML, but plans are also to generate PDF and any other format that is reasonable.
tiddlyisar is a tool for generating TiddlyWiki renderings of IsarMathLib source. IsarMathLib is a library of mathematical proofs formally verified by the Isabelle/ZF theorem proving environment. The tiddlyisar tool parses IsarMathLib source and generates TiddlyWiki markup text. The generated view features jsMath-based mathematical symbols, cross-referenced theorems, and structured proofs expanded on request. The rendering can be viewed on the Tiddly Formal Math site. tiddlyisar will be included in the next release of the IsarMathLib distribution under the GPLv3 license.
Emping 0.5 has been released. Emping is a (prototype of a) tool for the analysis of multi-variate nominal data. For example, in a table of 8000 mushrooms and 20 attributes, constructed from a field guide, the tool finds which attribute values determine whether a mushroom is edible or poisonous. Emping finds not only single factors, but also pairs, triples, and all other combinations which distinguish between the consequent values. Such reduced rules are generalisations of rows in the original table, so r1 could stand for originals a, b, c and r2 for a, b. In that case r2 implies r1 or, conversely, r1 entails r2. The reductions are partially ordered. Emping also finds all such dependencies, including the equivalences where different reductions stand for the same original rules.

New in Emping 0.5 is that, thanks to the functional graph library which comes with GHC, these dependencies are now expressed in a Graphviz format and can be shown with a Graphviz reader. Also new is the sort by length of the reduced rules, and of the reduced rules in each equivalence class. This makes the results much more readable. Starting in 0.4, it is also possible to have blank fields, but this feature has only been summarily tested. The Gtk2Hs-based GUI (→5.8.1), first introduced in version 0.4, has been improved in Emping 0.5. Data tables, as well as output tables of reduced rules and graph legends, all use the default CSV format of the Open Office Calc spreadsheet.
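The reduction idea can be sketched in plain Haskell (illustrative only, and exponential in the number of attributes, so nothing like Emping's actual implementation): a subset of attribute positions determines the outcome if no two rows that agree on those positions disagree on the outcome, and the reduced rules correspond to the minimal such subsets.

```haskell
import Data.List (sortOn, subsequences)

-- A subset of attribute positions "determines" the outcome if no
-- two rows agreeing on those positions disagree on the outcome.
determines :: Eq o => [Int] -> [([String], o)] -> Bool
determines ixs rows =
  and [ o1 == o2 | (a1, o1) <- rows, (a2, o2) <- rows
                 , project a1 == project a2 ]
  where project attrs = map (attrs !!) ixs

-- All minimal determining subsets, smallest first. Brute force
-- over all subsets of the n attribute positions: fine for a toy,
-- hopeless for 20 attributes.
reductions :: Eq o => Int -> [([String], o)] -> [[Int]]
reductions n rows = filter minimal ok
  where
    ok = filter (`determines` rows)
                (sortOn length (subsequences [0 .. n - 1]))
    minimal s = not (any (\t -> length t < length s
                             && all (`elem` s) t) ok)
```

For a toy table where the first attribute alone decides edibility, `reductions` returns just `[[0]]`.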
See http://home.telfort.nl/sp969709/emp/empug.html for more, including two white papers and downloads.
|Report by:||Tiago Miguel Laureano Alves|
SdfMetz supports grammar engineering by calculating grammar metrics and other analyses. It currently supports four different grammar formalisms (SDF, DMS, Antlr, and Bison), for which it calculates size, complexity, structural, and ambiguity metrics. Output is a textual report or Comma Separated Value format. The additional analyses implemented are visualisation, showing the non-singleton levels of the grammar, and printing the grammar graph in DOT format. The definitions of all except the ambiguity and the NPath metrics were taken from the paper A metrics suite for grammar-based software by James F. Power and Brian A. Malloy. The ambiguity metrics were defined by the tool author, exploiting specific aspects of SDF grammars, and the NPath metric definition was taken from the paper NPATH: a measure of execution path complexity and its applications.
Efforts are underway to develop functionalities to compute quality profiles based on histograms. Furthermore, more metrics will be added, and a web-interface is planned.
The tool was initially developed in the context of the IKF-P project (Information Knowledge Fusion, http://ikf.sidereus.pt/) to develop a grammar for ISO VDM-SL.
The web site of SdfMetz (http://wiki.di.uminho.pt/wiki/bin/view/PURe/SdfMetz) includes tables of metric values for a series of SDF grammars as computed by SdfMetz. The tool is distributed as part of the UMinho Haskell Libraries and Tools.
In this project, audio signals are processed using pure Haskell code. The highlights are:
We have checked three approaches, where the last one is the most promising.
A combination of the last two approaches seems to be a good choice so far. However, maintaining all code versions for comparison purposes led to much code duplication in the meantime.
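As a taste of the list-based representation (the simplest of the approaches compared, with invented names), a one-pole low-pass filter over a sample stream is a single scan:

```haskell
-- Sketch of list-based signal processing: a one-pole low-pass
-- filter. The constant k (between 0 and 1) moves each output a
-- fraction of the way from the previous output to the new input;
-- larger k means a higher cutoff frequency.
lowPass :: Double -> [Double] -> [Double]
lowPass k = scanl1 (\acc x -> acc + k * (x - acc))
```

The other representations trade this simplicity for better fusion and performance characteristics.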
The library comes with basic Cabal support and requires the Numeric Prelude framework (→5.5.4) of revised numeric type classes.
hmp3 is a curses-based mp3 player frontend to mpg321 and mpg123. It is written in Haskell. It is designed to be simple, fast, and robust. It is very stable. hmp3 has been updated to version 1.5.1, and is available from Hackage.
Glome is a rendering engine for 3-D graphics, based on ray tracing. It was originally written in OCaml, but has since been ported (except for a few features) to Haskell, and most future development is likely to happen in Haskell.
It supports shadows and reflections, and its base primitives include triangles, disks, boxes, cylinders, cones, spheres, and planes. More complex primitives can be made by grouping primitives, taking the boolean difference or intersection of primitives, or by making transformed instances.
Input and output capabilities are limited. Input is accepted as NFF-format files, or scenes may also be hard-coded in Haskell. Output is via an OpenGL window.
Rendering speed is reasonably fast, but a little too slow for interactive graphics. Glome uses a Bounding Interval Hierarchy internally to reduce the number of ray-intersection tests, so that, in general, rendering time increases logarithmically with scene complexity rather than linearly.
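The primitive operation underneath all of this is the ray–intersection test. A hedged sketch of the sphere case in plain Haskell (not Glome's actual code): solve the quadratic for the hit distances and return the nearest positive one.

```haskell
-- Sketch of a ray-sphere intersection test. Names are invented.
type Vec = (Double, Double, Double)

sub :: Vec -> Vec -> Vec
sub (a, b, c) (x, y, z) = (a - x, b - y, c - z)

dot :: Vec -> Vec -> Double
dot (a, b, c) (x, y, z) = a * x + b * y + c * z

-- raySphere origin direction centre radius: the direction is
-- assumed to be unit length, so the quadratic's leading
-- coefficient is 1. Returns the nearest positive hit distance.
raySphere :: Vec -> Vec -> Vec -> Double -> Maybe Double
raySphere o d centre r
  | disc < 0  = Nothing          -- ray misses the sphere
  | t1 > eps  = Just t1
  | t2 > eps  = Just t2
  | otherwise = Nothing          -- sphere is behind the origin
  where
    oc   = sub o centre
    b    = 2 * dot d oc
    c    = dot oc oc - r * r
    disc = b * b - 4 * c
    sq   = sqrt disc
    t1   = (-b - sq) / 2
    t2   = (-b + sq) / 2
    eps  = 1e-9
```

A bounding hierarchy such as Glome's Bounding Interval Hierarchy exists precisely to avoid running tests like this against every primitive in the scene.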
The easyVision project is a collection of libraries for fast prototyping of simple computer vision and image processing applications. We take advantage of Haskell’s expressive power without significant performance loss, since most of the computationally expensive work is done by optimised libraries: HOpenGL, hmatrix (→5.2.5), Intel’s IPP, and MPlayer.
The interface to most image processing functions is now purely functional, based on a first approach to automatic generation of IPP wrappers. Other recent developments include new experimental modules for Kalman filtering and geometric algebra, and more illustrative examples.
The Galculator is a prototype of a proof assistant based on the algebra of Galois connections. When combined with the pointfree transform and tactics such as the indirect equality principle, Galois connections offer a very powerful, generic device to tackle the complexity of proofs.
The prototype of Galculator is being developed as part of an ongoing PhD project. It is still experimental and things tend to change quickly, but we are close to releasing the first usable version.
The source code is available from a public SVN repository accessible from the project homepage. After reaching its first version it will also be available from Hackage.
Currently, we are working on the automatic derivation of the so-called “free-theorems” of polymorphic functions (→3.3.2) and their application to proofs. Moreover, more complex constructions of Galois connections are also being studied. Finally, we plan to integrate the Galculator with a theorem prover, namely Coq.
|Report by:||Denis Bueno|
|Status:||First release imminent, repository available|
funsat (mnemonic: functional SAT solver) is a modern satisfiability solver in Haskell, intended to be competitive with state-of-the-art solvers (which are mostly written in C/C++). The strategy is to draw on many ideas from the literature and implement them in a way that is functional, testable, and difficult to accomplish concisely in a lower-level language. Currently the emphasis is on techniques for solving structured, rather than randomised, instances.
Funsat can solve many structured instances from satlib (http://www.cs.ubc.ca/~hoos/SATLIB/benchm.html) including PARITY (16 series), BF, blocksworld, and logistics. Many are solved in a few seconds.
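For readers new to SAT solving, the core algorithm family can be sketched as a toy DPLL-style solver over DIMACS-style literals (non-zero Ints, negative meaning a negated variable). This only illustrates the technique; funsat's optimised implementation, with learning and modern heuristics, is far more sophisticated.

```haskell
-- Toy DPLL-style SAT solver sketch; names are invented.
-- Literals are DIMACS-style non-zero Ints: -3 means "not x3".
type Clause  = [Int]
type Formula = [Clause]

-- Assign a literal true: drop satisfied clauses, shrink the rest.
-- Nothing signals a conflict (an empty clause appeared).
assign :: Int -> Formula -> Maybe Formula
assign l f
  | any null f' = Nothing
  | otherwise   = Just f'
  where f' = [ filter (/= negate l) c | c <- f, l `notElem` c ]

-- Branch on the first literal of the first clause, backtracking
-- to its negation on failure. Returns a (partial) satisfying
-- assignment, or Nothing when the formula is unsatisfiable.
solve :: Formula -> Maybe [Int]
solve []          = Just []      -- no clauses left: satisfied
solve ([]:_)      = Nothing      -- empty clause: unsatisfiable
solve f@((l:_):_) = try l `orElse` try (negate l)
  where
    try lit    = (lit :) <$> (solve =<< assign lit f)
    orElse a b = maybe b Just a
```

Real solvers add unit propagation, clause learning, and careful branching heuristics on top of this skeleton; that gap is exactly what funsat's implementation fills.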
The code in its current state is available as a git repository:
$ git clone http://churn.ath.cx/funsat
The immediate priority is an initial release, which should happen shortly after the end of May.
Sat-micro-hs is a Haskell port of the OCaml satisfiability (SAT) solver described in “SAT-Micro: petit mais costaud!” (“SAT-Micro: small but strong!”, see below). The paper describes a minimal solver with the flavour of a modern SAT solver, without the robustness necessary for solving hard SAT instances. This port is intended for those interested in SAT generally, as well as any interested in the paper specifically, but who do not read French.
The code is available from Hackage at http://hackage.haskell.org/cgi-bin/hackage-scripts/package/sat-micro-hs.
Sylvain Conchon, Johannes Kanig, and Stéphane Lescuyer. “SAT-Micro: petit mais costaud!” In Dix-neuvièmes Journées Francophones des Langages Applicatifs, Étretat, France, 2008. INRIA. Available online at http://www.lri.fr/~conchon/publis/conchon-jfla08.ps.
Saoithin (pronounced “Swee-heen”) is a GUI-based 2nd-order predicate logic proof assistant. The motivation for its development is the author’s need for support in doing proofs within the so-called “Unifying Theories of Programming” paradigm (UTP). This requires support for 2nd-order logic and equational reasoning, and reflects a desire to avoid re-encoding the theorems into some different logical form. It also provides proof transcripts whose style makes it easier to check their correctness.
Saoithin is implemented in GHC 6.4 and wxHaskell 0.9.4, has been tested on a range of Windows platforms (98/XP/Vista), and should in principle work on Linux/Mac OS X. A first public release of the software in some form is anticipated in early 2008.
Over the last years, we have developed a number of tools implementing different termination analyses and making declarative debugging techniques available for Term Rewriting Systems. We have also implemented a small subset of the Maude/OBJ languages, with special emphasis on the use of simple programmable strategies for controlling program execution and new commands enabling powerful execution modes.
The tools have been developed at the Technical University of Valencia (UPV) as part of a number of research projects. The following people are (or have been) involved in the development of these tools: Beatriz Alarcón, María Alpuente, Demis Ballis (Università di Udine), Santiago Escobar, Moreno Falaschi (Università di Siena), Javier García-Vivó, Raúl Gutiérrez, José Iborra, Salvador Lucas, Rafael Navarro, Eloy Romero, Pascal Sotin (Université de Rennes).
The previous work led to the following tools:
Standalone versions of the tool are available for different platforms (Linux, Mac OS X, Windows). A web-based interface (developed in HAppS) is also available:
Improve the existing tools in a number of different ways and investigate mechanisms (XML, .NET, …) to plug them to other client/server applications (e.g., compilers or complementary tools).
|Report by:||Carlos Areces|
|Participants:||Daniel Gorin, Guillaume Hoffmann|
“Hybrid Logic” is a loose term covering a number of logical systems living somewhere between modal and classical logic. For more information on these languages, see http://hylo.loria.fr.
The Talaris group at Loria, Nancy, France (http://talaris.loria.fr) and the GLyC group at the Computer Science Department of the University of Buenos Aires, Argentina (http://www.glyc.dc.uba.ar/) are developing a suite of tools for automated reasoning for hybrid logics, available at http://hylo.loria.fr/intohylo/. Most of them are (successfully) written in Haskell. See HyLoRes (→6.5.7), HTab (→6.5.8), and HGen (→6.5.9).
|Report by:||Carlos Areces|
|Participants:||Daniel Gorin, Guillaume Hoffmann|
HyLoRes is an automated theorem prover for hybrid logics (→6.5.6) based on a resolution calculus. It is sound and complete for a very expressive (but undecidable) hybrid logic, and it implements termination strategies for certain important decidable fragments. The project started in 2002, and has been evolving since then. It is currently being extended to handle even more expressive logics (including, in particular, temporal logics). In the near future, we will investigate algorithms for model generation.
The source code is available. It is distributed under the terms of the Gnu GPL.
|Report by:||Carlos Areces|
|Participants:||Daniel Gorin, Guillaume Hoffmann|
HTab is an automated theorem prover for hybrid logics (→6.5.6) based on a tableau calculus. The goal is to implement a terminating tableau algorithm for the basic hybrid logic and for the basic logic extended with the universal modality. It is currently in early development. It will be tunable with various optimisations.
The source code is available. It is distributed under the terms of the Gnu GPL.
|Report by:||Carlos Areces|
|Participants:||Daniel Gorin, Guillaume Hoffmann|
HGen is a random CNF (conjunctive normal form) generator of formulas for different hybrid logics. It is highly parametrised to obtain tests of different complexity for the different languages. It has been extensively used in the development of HyLoRes (→6.5.7) and HTab (→6.5.8).
The source code is available. It is distributed under the terms of the Gnu GPL.
Coconut (COde CONstructing User Tool) is a special-purpose compiler project aiming to provide a coherent tool bench for the development of high-performance, high-assurance scientific computation, and to cover the full range of development activity from mathematical modelling to verification.
Development of the integrated tool bench is in (GHC-)Haskell, and currently proceeding from the bottom up. As code generation target, we are for now focusing on support for the Cell BE, and in particular the special-purpose SPU compute engines of which there are eight on a single Cell BE chip.
A type-indexed embedded DSL for declarative assembly language includes complex SIMD-isation patterns which are easier to encapsulate in the DSL and apply across 30 functions than they would be to implement even for a single function in C. To support rapid prototyping, Coconut includes instruction semantics sufficient to simulate SIMD instruction execution within Haskell. This significantly reduces the time spent on developing new patterns and exploring edge cases for mixed fixed/floating point computations by debugging and unit testing right in GHCi.
The central internal representation is the “code graph”, which is used to represent both data-flow and control-flow graphs, with separate levels of nesting for non-concurrent and concurrent control flow. For scheduling simple loop bodies programmed in the DSL, we use our Explicitly Staged Software Pipelining (ExSSP) algorithm on the data-flow code graph representation.
Our implementation of single-precision special functions (sin, sinh, asin, …, sqrt, cbrt, exp, log, lgamma, …) for the SPU is distributed as MASS in the Cell BE SDK 3.0 in both generated C (for inlining) and long vector functions scheduled by Coconut. In comparison with a state-of-the-art hand-tuned C implementation of these library functions using in-line assembly in the form of intrinsic functions and scheduled by the compiler spu-xlc, the Coconut-generated and -scheduled implementations are roughly four times faster; in many cases we know that our implementations are optimal.
We are currently making good progress on novel control flow patterns and scheduling algorithms to support them, a virtual machine model for multicore architectures, and verification strategies for SIMD-isation and multicore synchronisation, in an effort to bring the level of usability we have achieved with the DSL for SIMD-isation also to multicore parallelism.
|Report by:||Mario Blažević|
|Status:||experimental, actively developed|
Streaming Component Combinators are an experiment in modelling dataflow architecture using composable streaming components. All components are categorised into a small set of component types. A number of components can be composed into a compound component using a component combinator. For example, two transducer components can be composed using a pipe operator into another transducer; one splitter and two transducers can be composed using an if combinator into a single compound transducer. Components are implemented as coroutines, and the data flow among them is synchronous.
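The combinator idea can be sketched in a few lines of Haskell. This is an illustrative toy, not the actual SCC API: the names Transducer, Splitter, pipe, and ifC are invented here, and a real synchronous implementation would interleave outputs in input order rather than grouping the two branches as this sketch does.

```haskell
-- Toy sketch of streaming component combinators (not the SCC API).
newtype Transducer a b = Transducer { runT :: [a] -> [b] }
newtype Splitter a     = Splitter  { testS :: a -> Bool }

-- Two transducers composed with a pipe yield another transducer.
pipe :: Transducer a b -> Transducer b c -> Transducer a c
pipe (Transducer f) (Transducer g) = Transducer (g . f)

-- One splitter and two transducers compose into a single transducer:
-- items satisfying the splitter go through the first branch,
-- all others through the second.
ifC :: Splitter a -> Transducer a b -> Transducer a b -> Transducer a b
ifC (Splitter p) (Transducer t) (Transducer e) =
  Transducer (\xs -> t (filter p xs) ++ e (filter (not . p) xs))

main :: IO ()
main = print (runT (ifC (Splitter even)
                        (Transducer (map (* 10)))
                        (Transducer (map negate)))
                   [1 .. 6 :: Int])
```

Even inputs are scaled, odd ones negated, and the results of both branches are concatenated.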
There are two ways to use SCC: as an embedded language in Haskell, or as a set of commands in a command-line shell. The latter provides its own parser and a rudimentary type checker, but otherwise relies on the former to do the real work.
The original work was done in the OmniMark programming language. Haskell was the language of choice for the second implementation because its strong typing automatically makes the embedded language strongly typed, and because its purity forces the implementation to expose the underlying semantics.
The currently planned future work includes extending the set of primitive components and component combinators, and improving the type-checking and scripting abilities of the shell interface.
The latest stable version of SCC is available from Hackage.
|Report by:||Jennifer Streb|
|Participants:||Garrin Kimmell, Nicolas Frisby, Mark Snyder, Philip Weaver, Perry Alexander|
|Status:||beta, actively maintained|
Raskell is a Haskell-based analysis and interpretation environment for specifications written using the system-level design language Rosetta. The goal of Rosetta is to compose heterogeneous specifications into a single semantic environment. Rosetta provides modelling support for different design domains employing semantics and syntax appropriate for each. Therefore, individual specifications are written using semantics and vocabulary appropriate for their domains. Information is then composed across these domains by defining interactions between them.
The heart of Raskell is a collection of composable interpreters that support type checking, evaluation, and abstract interpretation of Rosetta specifications. Algebra combinators allow semantic algebras for the same constructs, but for different semantics, to be easily combined. This facilitates further reuse of semantic definitions. Comonads are used to structure a denotation of temporal Rosetta specifications. We are also investigating the use of comonads to capture other models of computation as supported by Rosetta domains. Using abstract interpretation, we can transform specifications between semantic domains without sacrificing soundness. This allows for analysis of interactions between two specifications written in different semantic domains. Raskell also includes a Parsec-based Rosetta parser that generates both recursive and non-recursive AST structures.
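The idea behind algebra combinators can be illustrated with a standard fold-over-a-functor sketch. This is generic recursion-scheme code, not Raskell's actual API: two semantic algebras over the same syntax functor are paired so that a single traversal yields both interpretations.

```haskell
-- Generic illustration of combining semantic algebras (not Raskell code).
newtype Fix f = In (f (Fix f))

data ExprF r = Lit Int | Add r r   -- a tiny syntax functor

instance Functor ExprF where
  fmap _ (Lit n)   = Lit n
  fmap f (Add x y) = Add (f x) (f y)

-- The fold that runs any algebra over the syntax tree.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg (In t) = alg (fmap (cata alg) t)

evalAlg :: ExprF Int -> Int        -- evaluation semantics
evalAlg (Lit n)   = n
evalAlg (Add x y) = x + y

showAlg :: ExprF String -> String  -- pretty-printing semantics
showAlg (Lit n)   = show n
showAlg (Add x y) = "(" ++ x ++ " + " ++ y ++ ")"

-- The combinator: pair two algebras into one that computes both results.
pairAlg :: Functor f => (f a -> a) -> (f b -> b) -> f (a, b) -> (a, b)
pairAlg f g t = (f (fmap fst t), g (fmap snd t))

main :: IO ()
main = print (cata (pairAlg evalAlg showAlg)
                   (In (Add (In (Lit 1)) (In (Lit 2)))))
```

Running the paired algebra over `1 + 2` produces both the value and the rendering in one pass.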
The Raskell environment is available for download at the links below. It is continually being updated, so we recommend checking back frequently for updates. To build the Rosetta parser and type checker, you must also install InterpreterLib (→5.4.7) and algc (a preprocessor for functorial boilerplate), both available at the third link listed below.
<alex at ittc.ku.edu>
|Report by:||Tiago Miguel Laureano Alves|
VooDooM supports understanding and re-engineering of VDM-SL specifications.
Understanding is accomplished through the extraction and derivation of different kinds of graphs, such as type dependency, function dependency, and strongly connected components graphs. These graphs can be subjected to both visualisation (by export to DOT format) and metrication (generating a CSV or text report).
Re-engineering is supported through the application of transformation rules to the datatypes to obtain an equivalent relational representation. The relational representation can be exported as VDM-SL datatypes (inserted back into the original specification) and/or SQL table definitions (can be fed to a relational DBMS).
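As a toy illustration of the flavour of such a transformation (not VooDooM's actual rewrite rules, and with invented names), a flat record type can be mapped to an SQL table definition:

```haskell
-- Hypothetical sketch: mapping a record-like datatype to an SQL table.
import Data.List (intercalate)

data Ty    = TNat | TString          -- a tiny type universe
data Field = Field String Ty         -- a named record field

-- Render a record type as a CREATE TABLE statement.
toSQL :: String -> [Field] -> String
toSQL name fs =
  "CREATE TABLE " ++ name ++ " (" ++ intercalate ", " (map col fs) ++ ");"
  where col (Field n TNat)    = n ++ " INTEGER"
        col (Field n TString) = n ++ " VARCHAR"

main :: IO ()
main = putStrLn (toSQL "person" [Field "name" TString, Field "age" TNat])
```

The real tool handles the full recursive structure of VDM-SL datatypes; this sketch only shows the base case.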
The first VooDooM prototype, supporting re-engineering, was developed in a student project by Tiago Alves and Paulo Silva. The prototype was further enhanced and continued as an open source project (http://voodoom.sourceforge.net/) in the context of the IKF-P project (Information Knowledge Fusion, http://ikf.sidereus.pt/) by Tiago Alves and finally in the context of a MSc thesis project.
It is planned that the re-engineering functionality of VooDooM will be replaced by the one that is being developed for the 2LT project (→5.6.5), which will add XML and Haskell generation.
VooDooM is available from http://voodoom.sourceforge.net/. The implementation of VooDooM makes ample use of strategic programming, using Strafunski, and is described in Strategic Term Rewriting and Its Application to a VDM-SL to SQL Conversion (Alves et al., Formal Methods 2005) and in the MSc thesis VooDooM: Support for understanding and re-engineering of VDM-SL specifications.
The survey “Realization of Natural-Language Interfaces Using Lazy Functional Programming” was published in ACM Computing Surveys, Volume 38, Issue 4, Article 11, in December 2006. It was among the Top 10 downloads from ACM Magazines and Surveys for February and March of 2007. The survey is now used at a few universities as required reading for courses on computational linguistics.
The survey currently contains 168 references to relevant publications. The survey will be updated as more information becomes available. Please send information on recent publications in this area to <rfrost at cogeco.ca>
A draft of the survey is available at:
GenI is a surface realiser for Tree Adjoining Grammars. Surface realisation can be seen as the last stage in a natural language generation pipeline. GenI in particular takes an FB-LTAG grammar and an input semantics (a conjunction of first order terms), and produces the set of sentences associated with the input semantics by the grammar. It features a surface realisation library, several optimisations, batch generation mode, and a graphical debugger written in wxHaskell. It was developed within the TALARIS project and is free software licensed under the GNU GPL.
GenI has recently been updated to compile on GHC 6.8.2. It is also available on Hackage, and can be installed via cabal-install. We also now have a mailing list at http://websympa.loria.fr/wwsympa/info/geni-users.
The Haskell bioinformatics library supports working with nucleotide and protein sequences and associated data. File format support includes sequences in Fasta (with associated quality information), TwoBit, and PHD formats, BLAST XML output, and ACE alignment files.
The standard alignment algorithms (and some non-standard ones) are provided, as well as sequence indexing, complexity calculation, protein translation, etc.
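As a flavour of the kind of primitive such a library provides (this is an illustrative function with an invented name, not the library's actual API): computing the reverse complement of a nucleotide sequence.

```haskell
-- Illustrative only: reverse complement of a DNA sequence.
revComp :: String -> String
revComp = reverse . map comp
  where comp 'A' = 'T'
        comp 'T' = 'A'
        comp 'C' = 'G'
        comp 'G' = 'C'
        comp c   = c    -- leave ambiguity codes untouched

main :: IO ()
main = putStrLn (revComp "ATGC")
```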
The library is considered in development (meaning things will be added, some functionality may not be as complete or well documented as one would wish, and so on), but central parts should be fairly well documented and come with a QuickCheck test and benchmarking suite.
The library abstracts functionality that is used in a handful of applications, including:
Everything is GPLed and available as Darcs repos, at http://malde.org/~ketil/biohaskell/.
Inductive Programming (IP): The learning of general hypotheses from given data.
The project aims (i) to use Haskell to examine, from a programming point of view, what the products of artificial intelligence (AI), data mining, and machine learning are, and (ii) to do data analysis with them.
IP 1.2 now contains estimators, from given weighted and unweighted data, to the Poisson and Geometric distributions over non-negative integer variables, and Student’s t-Distribution over continuous variables. The new (and the earlier) distributions may be used as components to the learners (estimators) of structured models such as unsupervised classifications (mixture models), classification- (decision-, regression-) trees and other function-models (regressions), mixed Bayesian networks, and segmentation models. IP’s modules have also been slightly reorganised.
A small prototype module of numerical/scientific functions, in Haskell, has been added to IP 1.2, to support the implementation of Student’s t-Distribution in the first instance.
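As a hint of what such an estimator involves (illustrative code with an invented name, not the IP module's API): the maximum-likelihood estimate of a Poisson distribution's rate from weighted count data is simply the weighted mean of the observations.

```haskell
-- Illustrative sketch: weighted maximum-likelihood estimate of a
-- Poisson rate from (weight, count) pairs.
poissonMLE :: [(Double, Int)] -> Double
poissonMLE wxs = sum [ w * fromIntegral x | (w, x) <- wxs ]
               / sum [ w | (w, _) <- wxs ]

main :: IO ()
main = print (poissonMLE [(1, 2), (1, 4), (2, 3)])
```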
Prototype code is available (GPL) at the URL below.
Planned are continuing extensions, applications to real data-sets, and comparisons against other learners.
lambdabot is an IRC robot with a plugin architecture, and persistent state support. Plugins include a Haskell evaluator, lambda calculus interpreter, unlambda interpreter, pointfree programming, dictd client, fortune cookies, Google search, online help, and more.
lambdabot 4.0 has been released, and is available from Hackage. Cale Gibbard has also kindly taken over maintenance of the bot.
At this moment, FreeArc is the best practical archiver in the world, providing the maximum speed/compression ratio (http://www.maximumcompression.com/data/summary_mf2.php).
Besides this, FreeArc provides a lot of features, including solid archives with fast updates, tunable compression algorithms, support for external compressors, automatic selection of compression algorithm depending on file type, data encryption and recovery, Win32 and Linux versions, tunable sorting and grouping of files, and FAR/Total Commander MultiArc support.
Such an ambitious goal was accomplished by bringing together Haskell and C++: speed-critical parts (compression, encryption) are written in C++, while everything else benefits from the fast development and high reliability that Haskell provides. I should also note that, compared to other archivers (traditionally written in C++), FreeArc provides smarter archive-management algorithms, which is again due to the high level of the Haskell programming paradigm.
The program sources are open, so you can borrow from them:
Moreover, the program includes a few more modules that you may reuse in your own programs under the BSD3 license:
The program is extensively commented in Russian, so for Russian-speaking Haskellers it may be an invaluable source for learning “practical Haskell”.
<Bulat.Ziganshin at gmail.com>
Roguestar is a science fiction themed roguelike (turn-based, chessboard-tiled, role playing) game written in Haskell. Roguestar uses OpenGL for graphics.
Roguestar 0.2 was announced on May 9 2008, and featured a simple combat system.
RSAGL, the RogueStar Animation and Graphics Library, includes a domain-specific monad for 3D modelling of arbitrary parametric surfaces, as well as an animation monad and arrow, which is more or less like Yampa implemented as a stack of arrow transformers. RSAGL was specifically designed for roguestar, but every effort has been made (including the choice of license) to make it accessible to other projects that might benefit from it.
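The underlying idea of modelling parametric surfaces can be sketched as follows (an illustration only, not the RSAGL API): a surface is a function from the unit square to points in 3-space, which can then be sampled into a mesh.

```haskell
-- Illustrative sketch of parametric surface modelling (not RSAGL code).
type Point3  = (Double, Double, Double)
type Surface = Double -> Double -> Point3   -- (u, v) in [0,1] x [0,1]

-- A sphere of radius r as a parametric surface.
sphere :: Double -> Surface
sphere r u v = ( r * sin theta * cos phi
               , r * sin theta * sin phi
               , r * cos theta )
  where theta = pi * u
        phi   = 2 * pi * v

-- Sample an (n+1)-by-(n+1) grid of points from a surface.
mesh :: Int -> Surface -> [[Point3]]
mesh n s = [ [ s (step i) (step j) | j <- [0 .. n] ] | i <- [0 .. n] ]
  where step k = fromIntegral k / fromIntegral n

main :: IO ()
main = print (length (concat (mesh 8 (sphere 1))))
```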
Roguestar is licensed under the Affero General Public License. RSAGL is licensed under a permissive license.
Well-Typed is a company which provides consulting services to users of Haskell. The company was incorporated in March 2008, with Björn Bringert, Duncan Coutts, and Ian Lynagh making up the founding partners.
We offer a broad range of services, from initial project design and training to development, maintenance, and performance tuning, and everything in between.
For more information, please take a look at our website, or drop us an e-mail at <info at well-typed.com>.
Clifford Beshers, David Fox, and Jeremy Shaw have formed SeeReason Partners, LLC. Our plan is to deliver services over the internet, using Haskell to build our applications whenever possible. We are looking at a variety of niche domains, including teaching primary mathematics skills and art appraisal, seeking to create sites that support social networking, learning, and digital asset management.
Often such projects employ large teams of developers and artists to hand-craft a look and feel. We want to exploit functional programming to generate content, or at least distribute the labour, more efficiently. We are exploring both AJAX technologies and Macromedia Flash. We are working on a Haskell to Flash compiler, as well as libraries to construct applications with Adobe’s Flex UI toolkit.
Formerly core members of the operating systems group at Linspire, Inc., we continue to maintain the tools for managing a Debian Linux distribution that we developed there. Source code for these tools can be found at our public source code repository http://src.seereason.org/. We plan to use these tools to provide current archives of Haskell related packages.
We can be reached at <(cliff,david,jeremy) at seereason.com> and on #haskell (→1.2) respectively as thetallguy, dsfox, and stepcut.
Ansemond LLC is at http://www.ansemond.com.
GMAG, the quantitative modelling group at Credit Suisse, has been using Haskell for various projects since the beginning of 2006, with the twin aims of improving the productivity of modellers and making it easier for other people within the bank to use GMAG models.
Many of GMAG’s models use Excel combined with C++ add-in code to carry out complex numerical computations and to manipulate data structures. This combination allows modellers to combine the flexibility of Excel with the performance of compiled code, but there are significant drawbacks: Excel does not support higher-order functions and has a rather limited and idiosyncratic type system. It is also extremely difficult to make reusable components out of spreadsheets or subject them to meaningful version control.
Because Excel is (in the main) a side-effect free environment, functional programming is in many ways a natural fit, and we have been using Haskell in various ways to replace or augment the spreadsheet environment.
Our past projects include:
Our main current project is Paradise, a domain-specific language embedded in Haskell for implementing reusable components that can be compiled into multiple target forms. It has been under development for over 18 months now, and over that time the team working on it has grown to several people.
Paradise’s first target form was Excel spreadsheets, and that backend is now relatively mature; our main focus at the moment is generating .NET components which can be run standalone or plugged into a bank-wide system. In future we may target yet more diverse platforms such as web browsers.
Several modellers have now been exposed directly to Haskell by using Paradise, and they have generally picked it up fairly quickly. All new recruits are now introduced to Haskell as part of our internal training program.
CUFP 2006 talk about Credit Suisse:
Barclays Capital’s Quantitative Analytics group is using Haskell to develop an embedded domain-specific functional language (called “FPF”) which is used to specify exotic equity derivatives. These derivatives, which are naturally best described in terms of mathematical functions, and constructed compositionally, map well to being expressed in an embedded functional language. This language is now regularly being used by people who had no previous functional language experience.
The FPF description then serves as the core description of the trade structure, with different interpretations being used to perform operations from generating pricing instructions for the bank’s risk systems through to generating trade input forms and reports. The system thus automates the introduction of new products, replacing the previous approach of manually extending each subsystem to cope with each new trade type. It has dramatically reduced the turnaround time to make a new trade production-ready.
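The general technique (one datatype of trade structures, with several interpretations defined over it) can be sketched as follows; the names and constructors below are invented for illustration and bear no relation to the actual FPF design:

```haskell
-- Hypothetical sketch of a compositional trade-description language.
data Contract
  = Zero                          -- the empty contract
  | One String                    -- one unit of a currency
  | Scale Double Contract         -- a scaled contract
  | Both Contract Contract        -- receive both legs

-- Interpretation 1: a toy valuation, given unit prices per currency.
value :: (String -> Double) -> Contract -> Double
value _ Zero         = 0
value p (One ccy)    = p ccy
value p (Scale k c)  = k * value p c
value p (Both c1 c2) = value p c1 + value p c2

-- Interpretation 2: a report of the currencies a trade involves.
currencies :: Contract -> [String]
currencies Zero         = []
currencies (One ccy)    = [ccy]
currencies (Scale _ c)  = currencies c
currencies (Both c1 c2) = currencies c1 ++ currencies c2

trade :: Contract
trade = Both (Scale 100 (One "GBP")) (Scale 75 (One "USD"))

main :: IO ()
main = do
    print (value rates trade)
    print (currencies trade)
  where rates c = if c == "GBP" then 1.25 else 1.0
```

One description, many interpretations: the same `trade` value drives both the valuation and the currency report, which is the essence of the approach described above.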
We have found Haskell to be a great language in which to implement an embedded functional language, and to be very effective as a language for rapid development.
We have been working on the system for a little over a year, and it has been in production use for most of that time, with new features and interpretations being added incrementally over that time.
We are hiring. Please contact Simon Frankau <Simon.Frankau at barcap.com> for more details.
Bluespec, Inc. provides tools for chip design, modelling, and verification (ASICs and FPGAs), inspired by Haskell and Term Rewriting Systems. Bluespec also uses Haskell to implement many of its tools (over 100K lines of Haskell). Bluespec’s products include synthesis, simulation, and other tools for Bluespec SystemVerilog (BSV).
Bluespec’s customers are using BSV’s high expressivity and full synthesisability to move into hardware many functions previously thought to be feasible only in software (complex algorithms, testbenches, and models).
BSV is based on the following semantic model: hardware behaviour is expressed using Rewrite Rules, and inter-module communication is expressed using Rule-based Interface Methods (which allow rules to be composed from fragments that span module boundaries). Because rules are atomic transactions, they eliminate a majority of the “timing errors” and “race conditions” that plague current hardware design using existing RTL languages like Verilog or VHDL. Rules also enable powerful reasoning about the functional correctness of systems. In other words, the concurrency model provided by rules is much more powerful and abstract than the low-level concurrency models provided by Verilog, VHDL, and SystemC.
BSV incorporates Haskell-style polymorphism and overloading (typeclasses) into SystemVerilog’s type system. BSV also treats modules, interfaces, rules, functions, etc. as first-class objects, permitting very powerful static elaboration (including recursion).
Bluespec tools synthesise source code into clocked synchronous hardware descriptions (in Verilog RTL) that can be simulated or further synthesised to netlists using industry-standard tools. This automates the generation of control logic to manage complex concurrent state update, a major source of errors in current design methodology, where this logic must be manually coded by the designer.
The powerful Haskell-like static elaboration in BSV is control adaptive, i.e., a parameterised design can elaborate into different microarchitectures which have different resource conflicts, but the typically complex control logic necessary to manage these conflicts is synthesised automatically based on the atomic transaction semantics — this is key to BSV’s high level of abstraction.
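The rule-based semantic model can be sketched in Haskell (this is an illustration of the semantics, not Bluespec syntax): a rule is a guarded atomic state update, and execution repeatedly fires any enabled rule until none applies. The classic introductory example is GCD over a pair of registers.

```haskell
-- Sketch of rule-based semantics: a rule is a guarded atomic update.
type Rule s = s -> Maybe s    -- Nothing when the rule's guard is false

-- Fire the first enabled rule, if any.
step :: [Rule s] -> s -> Maybe s
step rules s = case [ s' | r <- rules, Just s' <- [r s] ] of
                 (s' : _) -> Just s'
                 []       -> Nothing

-- Run rules to quiescence.
runRules :: [Rule s] -> s -> s
runRules rules s = maybe s (runRules rules) (step rules s)

-- Euclid's GCD as two rewrite rules over a pair of registers.
gcdRules :: [Rule (Int, Int)]
gcdRules =
  [ \(a, b) -> if a > b && b > 0 then Just (a - b, b) else Nothing
  , \(a, b) -> if b > a           then Just (b, a)     else Nothing ]

main :: IO ()
main = print (runRules gcdRules (15, 6))
```

Because each rule is atomic, reasoning about the final state never has to consider partially applied updates, which is the property the text above credits with eliminating timing errors and race conditions.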
Bluespec participates in standards committees like IEEE P1800 (SystemVerilog) and IEEE P1666 (SystemC), where it tries to encourage adoption of the declarative programming ideas in BSV. One success has been the adoption of Bluespec’s proposals for “tagged unions (algebraic types) and pattern matching” in the current IEEE SystemVerilog standard.
A series of small illustrative examples: http://www.bluespec.com/wiki/SmallExamples
Bringing Declarative Programming into a Commercial Tool for Developing Integrated Circuits, Rishiyur Nikhil, Commercial Users of Functional Programming (CUFP), September 2006, slides of presentation at http://www.galois.com/cufp/
MIT courseware, “Complex Digital Systems”:
A fun example with lots of functional-programming features — BluDACu, a parameterised Bluespec hardware implementation of Sudoku:
Galois is an employee-owned software development company based in Beaverton, Oregon, U.S.A. Galois started in late 1999 with the stated purpose of using functional languages to solve industrial problems. These days, we emphasise the needs of our clients and their problem domains over the techniques, and the slogan of the Commercial Users of Functional Programming Workshop (see http://cufp.functionalprogramming.com/) exemplifies our approach: Functional programming as a means, not an end.
Galois develops software under contract, and every project (bar three) that we have ever done has used Haskell. The exceptions used ACL2, Poly-ML, SML-NJ, and OCaml, so functional programming languages and formal methods are clearly our “secret sauce”. We deliver applications and tools to clients in industry and the U.S. government. Some diverse examples: Cryptol, a domain-specific language for cryptography (with an interpreter and a compiler, with multiple targets, including FPGAs); a GUI debugger for a specialised microprocessor; a specialised, high assurance, cross-domain web and file server, and wiki for use in secure environments; and numerous smaller research projects that focus on taking cutting-edge ideas from the programming language and formal methods community and applying them to real world problems.
Web-based technologies are increasingly important to our clients, and we believe Haskell has a key role to play in the production of reliable, secure web software. The culture of correctness Haskell encourages is ideally suited to web programming, where issues of security, authentication, privacy, and protection of resources abound. In particular, Haskell’s type system makes possible strong static guarantees about access to resources, critical to building reliable web applications.
To help push further the adoption of Haskell in the domain of web programming, Galois released a suite of Haskell libraries, including:
Continuing our deep involvement in the Haskell community, Galois was happy to sponsor the two Haskell hackathons held in the past year, Hac 07 II, in Freiburg, Germany, and Hac4 in Gothenburg, Sweden. Galois also sponsored the second BarCamp Portland, held in early May 2008.
The School of Computer Science at the University of Nottingham has recently formed the Functional Programming Laboratory, a new research group focused on all aspects of the theory and practice of functional programming, together with related topics such as type theory, category theory, and quantum programming. The laboratory is led jointly by Thorsten Altenkirch and Graham Hutton, and comprises Neil Ghani and Henrik Nilsson as academic members of staff, and currently 3 research assistants and 10 PhD students. The group’s weekly meetings, which started in 2004, are recorded on the frequently cited FP lunch blog (http://sneezy.cs.nott.ac.uk/fplunch/weblog/), giving a lively picture of ongoing research at Nottingham. With a total of 20 people in the area, we have a spectrum of interests:
The FP lunch blog also contains summaries of recent talks.
In the afternoon there is an hour-long seminar. We are always keen on speakers in any related areas: do get in touch with Neil Ghani <nxg at cs.nott.ac.uk> if you would like to visit our group. See you there!
|Report by:||David Sabel|
Haskell has a call-by-name semantics, but all efficient implementations of Haskell use call-by-need evaluation, avoiding multiple evaluations of the same expression. We showed equivalence of call-by-name and call-by-need for a tiny deterministic letrec-calculus and also the correctness of an unrestricted copy-reduction in both calculi. We also proved that our method scales up to extended letrec-calculi with case and constructors, as well as letrec-calculi with a parallel-or operator.
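A small GHC example illustrates the difference in question; Debug.Trace makes the single shared evaluation visible.

```haskell
-- Under call-by-name, the argument would be re-evaluated at each use;
-- call-by-need (what GHC implements) shares the single computation,
-- so "evaluating" is traced only once.
import Debug.Trace (trace)

expensive :: Int
expensive = trace "evaluating" (sum [1 .. 1000])

twice :: Int -> Int
twice x = x + x    -- two uses of x, but one evaluation under call-by-need

main :: IO ()
main = print (twice expensive)   -- traces "evaluating" once, prints 1001000
```

The cited work shows that, for the calculi considered, the two evaluation strategies nevertheless agree on observable results.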
We introduced the calculus FUNDIO, which proposes a non-standard way to combine lazy functional languages with Input/Output using non-deterministic constructs. Program equivalence is based on the operational semantics including the input/output behaviour of reduction sequences. We proved correctness of a considerable set of program transformations, in particular of several optimisations of evaluation, including strictness optimisations.
We also analysed the program transformations used in GHC w.r.t. the FUNDIO semantics. After turning off a few transformations which are not FUNDIO-correct, we obtained a FUNDIO-compatible modification of GHC, called HasFuse. This compiler correctly compiles Haskell programs which make use of unsafePerformIO in the common (safe) sense, since problematic optimisations are turned off or performed more restrictively. HasFuse can also compile programs which use unsafePerformIO in arbitrary contexts, where the semantics is given by FUNDIO. This makes it possible to combine unsafePerformIO with monadic I/O in Haskell, with the guarantee that I/O operations will not be duplicated unexpectedly.
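The kind of problem addressed here can be seen in a small example. Whether the side effect below is duplicated depends on the compiler's transformations; under stock GHC the NOINLINE pragma is the customary precaution, while FUNDIO-correct transformations rule the duplication out by construction.

```haskell
-- If the compiler duplicated the shared expression `input`, the
-- message could be printed twice; with sharing preserved it is
-- printed exactly once.
import System.IO.Unsafe (unsafePerformIO)

input :: Int
input = unsafePerformIO (putStrLn "side effect!" >> return 21)
{-# NOINLINE input #-}   -- the usual precaution against duplication

main :: IO ()
main = print (input + input)   -- with sharing: one message, result 42
```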
We investigated hardware descriptions in a functional language, i.e., Haskell programs extended by a parallel-or (por), where the non-deterministic operator por is implemented using Concurrent Haskell. As semantic model we use a call-by-need lambda calculus extended with letrec, case, constructors, and in particular with parallel-or. Ongoing research is devoted to prove correctness of circuit transformations, also including latches and combinational cycles, on the level of the high-level functional language descriptions.
In order to achieve more inference rules for equality in call-by-need lambda-calculi, Matthias Mann has established a soundness (w.r.t. contextual equivalence) proof for mutual similarity in a non-deterministic call-by-need lambda calculus. Moreover, we have shown that this approach scales up well to more expressive call-by-need non-deterministic lambda calculi, i.e., similarity can be used as a co-induction-based proof tool for establishing contextual preorder in a large class of untyped higher-order call-by-need calculi, in particular calculi with constructors, case, let, and non-deterministic choice.
For non-deterministic call-by-need calculi with letrec, known approaches to prove that simulation implies contextual equivalence are inapplicable. A recent result obtained in collaboration with Elena Machkasova is correctness of a variation of simulation for checking and proving contextual equivalence in an extended non-deterministic call-by-need lambda-calculus with letrec. The basic technique for the simulation as well as the correctness proof is called pre-evaluation, which computes a set of answers for every closed expression. If simulation succeeds in finite computation depth, then it is guaranteed to show contextual preorder of expressions. Further research is to adapt and extend the methods to an appropriately defined simulation, and to investigate an extension of the tools and methods to a combination of may- and must-convergence.
For modelling concurrent evaluation of functional programs, we investigated an extended call-by-need lambda-calculus with McCarthy’s non-deterministic amb-operator. We introduced an observational equivalence based on may- and must-termination w.r.t. a fair small step reduction semantics. We proved correctness of several program transformations, in particular partial evaluation using deterministic reductions. We developed some nontrivial proof techniques including a standardisation theorem and a weak form of finite simulation for proving program equivalences. With these tools it appears promising to show correctness of further program transformations. As an implementation model we developed an abstract machine for lazy evaluation of concurrent computations, implemented a variant of this machine in Haskell, and proved correctness of this machine with respect to the calculus extended with amb.
The algorithm for strictness analysis using abstract reduction has been implemented at least twice: once by Nöcker in C for Concurrent Clean, and again by Schütz in Haskell in 1994. In 2005 we proved correctness of the algorithm by using a call-by-need lambda-calculus as a semantic basis. Most implementations of strictness analysis use set constants like ⊤ (all expressions) or ⊥ (expressions that have no weak head normal form). We have shown that the subset relationship problem for co-inductively defined set constants is in DEXPTIME.
We are a group of staff and students with shared interests in functional programming. While our work is not limited to Haskell (in particular, our interest in Erlang has been growing), Haskell provides a major focus and common language for teaching and research. We are seeking PhD students for funded research projects.
Our members pursue a variety of Haskell-related projects, many of which are reported in other sections of this report. Nik Sultana’s MSc thesis on formal verification of Haskell refactorings has recently been accepted. Chris Brown continues extending HaRe, the Haskell Refactorer (→4.4.5). Thomas Davie and Olaf Chitil continued development of the Haskell tracer Hat (→4.3.5) and its theoretical foundations, studying in particular the representation of functional values as finite maps. This summer a student will work with Olaf Chitil on improving Heat and making a public release. Heat is a deliberately simple IDE for teaching Haskell that has been used at Kent for three years. Keith Hanna is continuing work on Vital, a document-centred programming environment for Haskell, and on Pivotal, a GHC-based implementation of a similar environment. The Kent Systems Research Group is developing an occam compiler in Haskell (Tock). Neil Brown has created a Haskell library (“Communicating Haskell Processes”) based on the Communicating Sequential Processes calculus.
|Report by:||Andrew Butterfield|
|Participants:||Glenn Strong, Hugh Gibbons, Edsko de Vries|
The Foundations and Methods Group focuses on formal methods, category theory, and functional programming as the obvious implementation method. A sub-group focuses on the use, semantics, and development of functional languages, covering such areas as:
|Report by:||Christian Maeder|
|Participants:||Mihai Codescu, Dominik Lücke, Christoph Lüth, Christian Maeder, Achim Mahnke, Till Mossakowski, Lutz Schröder|
The activities of our group centre on formal methods and the Common Algebraic Specification Language (CASL).
We are using Haskell to develop the Heterogeneous tool set (Hets), which consists of parsers (using Parsec), static analysers, and proof tools for languages from the CASL family, such as CASL itself, HasCASL, CoCASL, CspCASL, and ModalCASL, and additionally OMDoc and Haskell. HasCASL is a general-purpose higher order language which is in particular suited for the specification and development of functional programs; Hets also contains a translation from an executable HasCASL subset to Haskell. There is a prototypical translation of a subset of Haskell to Isabelle/HOL and HOLCF.
Another project using Haskell is the Proof General Kit, which is designing and implementing a component-based framework for interactive theorem proving. The central middleware of the toolkit is implemented in Haskell. The project is the successor of the highly successful Emacs-based Proof General interface. It is a cooperation of David Aspinall from the University of Edinburgh and Christoph Lüth from Bremen.
The Coalgebraic Logic Satisfiability Solver CoLoSS is being implemented jointly at DFKI-Lab Bremen and at the Department of Computing, Imperial College London. The tool is generic over representations of the syntax and semantics of certain modal logics; it uses the Haskell class mechanism, including multi-parameter type classes with functional dependencies, extensively to handle the generic aspects.
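The style of genericity described above can be sketched with a multi-parameter type class whose functional dependency lets the logic determine its formula representation. The class and instance names below are invented for illustration and are not CoLoSS's actual interface.

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
-- A toy sketch of a multi-parameter type class with a functional
-- dependency: the logic tag 'l' determines its formula type 'f'.
-- Names are hypothetical, not taken from CoLoSS.

class Logic l f | l -> f where
  neg :: l -> f -> f

-- A trivial "classical" logic whose formulas are plain Booleans.
data Classical = Classical

instance Logic Classical Bool where
  neg _ = not

main :: IO ()
main = print (neg Classical True)
```

The dependency `l -> f` means that once the logic is fixed, the compiler can infer the formula type, so generic algorithms can be written once against the class and reused across logics.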
Other extensions, libraries, and tools of the Glasgow Haskell Compiler that we exploit include concurrency, existential and dynamic types, Template Haskell, DriFT, Haddock (→4.2.1), Programmatica, Shellac (→5.8.3), HaXml (→5.10.3), and hxt (→5.10.2). We also maintain old sources such as bindings to uDrawGraph (formerly daVinci) and Tcl/Tk, Shared Annotated Terms (ATerms), and haifa-lite.
A grant has provided us with 6 new quad-processor machines, which we are currently integrating into our existing Linux/Mosix cluster. When the integration is complete, we will be comparing the performance and behaviour of the Brooklyn College version of GpH (→3.1.2) and the SMP facility of the latest release of GHC (→2.1).
In the area of applications, we are working on two AI projects, three-dimensional tic-tac-toe (noughts and crosses), and an extended version of the Sudoku puzzle. We have also begun work on a parallel implementation of Skibinski’s quantum simulator, which we intend to use to study Grover’s fast search algorithm.
Report by: Murray Gross <magross at its.brooklyn.cuny.edu>
SCIEnce (http://www.symbolic-computation.org/) is a 3.2M euros, 5-year project that brings together major developers of symbolic computing systems, including Maple, GAP, MuPAD, and Kant with the world-leading Centre for Research in Symbolic Computation at RISC-Linz, Austria.
It makes essential use of functional programming technology in the form of the GRID-GUM functional programming system for the Grid, which is built on the Glasgow Haskell Compiler. The objective is not the technically infeasible goal of rewriting all these (and more) complex systems in Haskell. Rather, we use GRID-GUM to link components built from each of the symbolic systems to form a coherent heterogeneous whole. In this way, we hope to achieve what is currently a pious dream for conventional Grid technology, and obtain a significant user base both for GRID-GUM and for Haskell. We are, of course, taking full advantage of Haskell’s abilities to compose and link software components at a very high level of abstraction.
A full paper has appeared in the draft proceedings of the 2007 Symposium on Trends in Functional Programming (TFP 2007), New York, April 2007. A revised version is currently being prepared for submission to the post-symposium proceedings.
The Bay Area Functional Programmers group held their inaugural meeting in September, giving programmers in the San Francisco Bay Area who use or are interested in functional programming languages such as Haskell, OCaml, SML, Scala, and Erlang a place to meet, discuss, and learn together. We followed up the first informal meeting with an October talk by Alex Jacobson on HAppS (http://happs.org/). November saw David Pollak present the Scala-based web framework lift (http://liftweb.net). We will finish our series on functional web frameworks with Yariv Sadan presenting the Erlang framework ErlyWeb (http://erlyweb.org/) in December. Videos and slides are available for all the talks at the BayFP blog, as well as information on how to join the mailing list: http://bayfp.org/blog.