Fast Haskell Parser
John D. Earle
JohnDEarle at cox.net
Thu Mar 11 10:33:02 EST 2010
Thank you very much for the link Thomas. It does give some clarity on the
matter. None of the combinator parsers performed well, including Attoparsec.
This is apparent when you look at the numbers next to the bar graph. C took
roughly half as long as Attoparsec. Attoparsec, however, is sufficiently
close to be a candidate. The others are outside the ballpark. The numbers
do not reflect how Happy would perform, however. The blog did mention Happy
in passing as something to avoid, for reasons unknown.
Happy seems to be the best choice when performance and simplicity are needed.
When performance at all costs is needed, C is the best choice. My goal is not
to write something in C, however. It is to be written in Haskell. So what
about the performance of hand-rolled Haskell as opposed to hand-rolled C?
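To make the hand-rolled option concrete, here is a minimal sketch of what a combinator-free parser looks like in plain Haskell, using only base. The function name `parseInt` and its shape are illustrative, not taken from any existing code:

```haskell
import Data.Char (isDigit, digitToInt)
import Data.List (foldl')

-- Hand-rolled parser sketch: read a decimal integer from the front of a
-- String, returning the value and the unconsumed remainder. No combinator
-- library involved; control flow is explicit pattern matching.
parseInt :: String -> Maybe (Int, String)
parseInt s = case span isDigit s of
  ([], _)     -> Nothing                    -- no leading digits: fail
  (ds, rest)  -> Just (foldl' step 0 ds, rest)
  where
    step acc c = acc * 10 + digitToInt c
```

For example, `parseInt "123abc"` yields `Just (123, "abc")`, while `parseInt "abc"` yields `Nothing`. A real hand-rolled parser would work over ByteString for speed, but the control structure is the same.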
From: "Thomas Schilling" <nominolo at googlemail.com>
Sent: Thursday, 11 March 2010 07:48
To: "John D. Earle" <JohnDEarle at cox.net>
Cc: "cvs-ghc" <cvs-ghc at haskell.org>
Subject: Re: Fast Haskell Parser
> No, it's not lazy vs. strict. Parsec's "try" switches the default
> LL(1) parser to an LL(N) parser. Happy is always LALR(1), which is
> somewhere in the middle. ReadP is always LL(N) but is efficient by
> using a breadth-first search instead of depth-first search. The
> efficiency of ReadP depends on how ambiguous the grammar is, i.e., how
> wide the search tree will be on average. You might look into
> "attoparsec" on hackage -- apparently it can be 100x faster than
> Parsec: http://www.serpentine.com/blog/
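The `try` behaviour Thomas describes can be seen in a small Parsec sketch (assuming the parsec package is available; the parser names are illustrative). Without `try`, a branch that consumes input before failing commits the parser, so an overlapping alternative is never attempted:

```haskell
import Text.Parsec
import Text.Parsec.String (Parser)

-- Without 'try': on input "foo", string "for" consumes 'f' and 'o' before
-- failing on the third character, so (<|>) never attempts the second branch.
broken :: Parser String
broken = string "for" <|> string "foo"

-- With 'try': the failed branch rewinds its input, restoring LL(N)
-- behaviour for these two overlapping keywords.
fixed :: Parser String
fixed = try (string "for") <|> string "foo"
```

Here `parse broken "" "foo"` fails, while `parse fixed "" "foo"` succeeds with `"foo"`. Deciding where such `try`s are required is exactly the difficulty Simon Marlow mentions below.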
> On 11 March 2010 14:27, John D. Earle <JohnDEarle at cox.net> wrote:
>> Simon Marlow: Personally I find Parsec and the other parser combinator
>> libraries quite difficult to use when it comes to deciding where to put
>> 'try'; ReadP is the exception here, because it does general backtracking
>> and doesn't make you decide where to use 'try'.
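ReadP lives in base as Text.ParserCombinators.ReadP, so its backtracking behaviour is easy to check; a minimal sketch (the name `keyword` is illustrative):

```haskell
import Text.ParserCombinators.ReadP

-- ReadP keeps all alternatives live and searches them breadth-first, so
-- overlapping prefixes need no explicit 'try'; the grammar is written
-- directly and backtracking is automatic.
keyword :: ReadP String
keyword = string "for" +++ string "foo"
```

Running `readP_to_S keyword "foo"` includes the complete parse `("foo", "")`, with no commitment to the failed "for" branch. The cost, as Thomas notes, grows with how ambiguous the grammar is.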
>> John D. Earle: I bet this has something to do with lazy versus strict
>> evaluation. ReadP alleviates the problem by doing the work for you. Lazy
>> evaluation fits nicely with the aims of high level language design. Lazy
>> evaluation gives more opportunities for a compiler to optimize the code
>> using non-domain specific optimizations. At some point in the compilation
>> process lazy becomes strict. When designing domain specific optimizations
>> you are working on the low level details at around the lazy-strict
>> transition boundary, however. Part of the problem appears to be that I do
>> not understand how to selectively carry out strict evaluation in Haskell.
>> Lazy evaluation is the default.
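On selectively carrying out strict evaluation: Haskell does provide this directly, via `seq`, `($!)`, and GHC's bang patterns. A small sketch of the standard strict-accumulator idiom (the function name is illustrative):

```haskell
{-# LANGUAGE BangPatterns #-}

-- A lazy accumulator would build a chain of unevaluated (+) thunks.
-- The bang pattern on 'acc' forces the sum at every step instead,
-- keeping the loop in constant space.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []       = acc
    go !acc (x : xs) = go (acc + x) xs
```

Equivalently, `acc `seq` go (acc + x) xs` forces the accumulator without the extension. This is the sense in which lazy becomes strict selectively, at exactly the points the programmer marks.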
> Push the envelope. Watch it bend.