[Haskell-cafe] Re: [Haskell] [Fwd: Re: Computer Language Shootout]

apfelmus at quantentunnel.de apfelmus at quantentunnel.de
Thu Jan 25 17:34:34 EST 2007


Neil Mitchell wrote:
> That would be very neat. Another neat trick would be generalising
> optimisations so that there are fewer and more independent passes,
> this would make it easier to understand (and is what I was working on
> for Yhc).

Well, it's in the nature of repeatedly applying local reductions that
you neither really know what's going on nor truly understand how to
improve it. One particular example is the GHC rules mechanism: it's
much better than nothing, but it still often fails to fuse yet another
list.
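
For a taste of the mechanism, here is the classic map/map rule as a
library author would write it (a sketch; GHC's real list fusion is
phrased in terms of build and foldr instead):

   {-# RULES
   "map/map" forall f g xs. map f (map g xs) = map (f . g) xs
     #-}

Whether such a rule ever fires depends on what the simplifier has
already inlined and rewritten, which is exactly the nondeterminism I
mean.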

I think that one can only achieve deterministic optimization by
carefully choosing and exploiting the type system of the intermediate
language. For instance, both short cut fusion and strictness analysis
can be expressed as type inference. If you have an intermediate
language where things like boxing and the forcing of thunks are
explicit and typed, you have a chance to eliminate such expensive
constructs by type inference and conventional inlining magic.
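
To make the short cut fusion remark concrete: the foldr/build rule
from GHC.Exts is only sound because of the rank-2 type of build, i.e.
the type system is already carrying the proof obligation:

   {-# LANGUAGE RankNTypes #-}

   build :: (forall b. (a -> b -> b) -> b -> b) -> [a]
   build g = g (:) []

   {-# RULES
   "foldr/build" forall k z (g :: forall b. (a -> b -> b) -> b -> b).
                 foldr k z (build g) = g k z
     #-}

Any g of that type can only construct its result from the two
arguments it is given, so replacing (:) and [] by k and z is safe by
parametricity alone.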

One point is that local optimization is hard to extend across the
boundaries imposed by recursion and the fixed point combinator. But I
can imagine that a core language which makes non-totality (lifted
types) manifest in the types, like System F with a fixed point
combinator

   fix :: Pointed a => (a -> a) -> a

is the key to crossing this boundary.
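
To illustrate, Pointed would single out exactly the lifted types (a
hypothetical class, sketched only to show the idea):

   class Pointed a where
       bottom :: a    -- the least element that every lifted type has

   -- fix is only available at lifted types; the constraint does no
   -- computation, it restricts where general recursion may occur
   fix :: Pointed a => (a -> a) -> a
   fix f = f (fix f)

Unlifted types would simply not be instances, so the type checker
itself would keep recursion from crossing into code that must be
total.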

> Yhc has intermediate code that is substantially more Haskell like, and
> with the command:
> 
> loadCore "file.ycr" >>= writeFile "file.html" . coreHtml
> 
> You can generate an active Html document that lets you explore the
> Core in a more interactive way - including hyperlinks for function
> names, syntax highlighting etc.
> 
> e.g.: http://www.cs.york.ac.uk/fp/yhc/Roman.html
> 
> All of these things make playing with Yhc Core much more fun than
> playing with GHC Core. There is absolutely no reason you couldn't add
> all these things to GHC Core, then perhaps you'd save some time when
> it does come to the "examine core" level.

Wow, the core looks really cool! One look and you see it all. I would
even rename the local variables to single letters like a, b, c,
because the cryptic numbers are quite hard to track; this is something
coreHtml could do itself. Also, I'd add more color, like making
punctuation marks grey, but that's a matter of personal taste.
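
The renaming needs little more than an infinite supply of short names
to zip against the original variables (a sketch, assuming coreHtml can
substitute names while walking the core):

   -- a, b, ..., z, aa, ab, ...: an infinite supply of short names
   names :: [String]
   names = [ [c] | c <- ['a'..'z'] ]
        ++ [ n ++ [c] | n <- names, c <- ['a'..'z'] ]

   -- pair each cryptic original with a fresh short name
   renaming :: [String] -> [(String, String)]
   renaming vars = zip vars names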


Concerning the shootout, it deserves much praise, for it is certainly
no easy task to set it up for so many languages and keep it running.
But I'm not very fond of the benchmarks themselves.

The goal seems to be to measure how fast different languages can
execute a hard-wired algorithm that prescribes memory management and
data layout. While this is a valid goal, I don't think it is a worthy
one: it does not get at the interesting question, namely how fast the
natural algorithms of each language are. It merely compares highly
artificial implementations of the same algorithm.

Random examples are the nsieve ("count the prime numbers from 2 to M
[...] create a sequence of M boolean flags") and the k-nucleotide
("define a procedure/function to update a hashtable of k-nucleotide
keys") benchmarks. Both boolean flags and hash tables are completely
alien to Haskell: the former would naturally be a lazy list of primes,
the latter a generalized trie.
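
For instance, the natural Haskell rendering of the sieve is the
corecursive list (famously not the true sieve of Eratosthenes, but the
formulation one actually writes first):

   primes :: [Int]
   primes = sieve [2..]
     where sieve (p:xs) = p : sieve [ x | x <- xs, x `mod` p /= 0 ]

and the k-nucleotide counts fit a generalized trie whose shape follows
the key type (a sketch of the data layout only; the field names are
mine):

   data Trie = Trie { count          :: Int
                    , ta, tc, tg, tt :: Maybe Trie }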

Of course, disallowing unboxed arrays will certainly move Haskell down
the ranking. But what do we gain from unlimited array necromancy? Do
we get a fair account of how fast and short day-to-day Haskell really
is, or of how to tweak the compilers to make day-to-day Haskell an
even more pleasant experience? I think not. (Do not misunderstand me:
ByteStrings are fine, for it is the purely functional style that
counts.)

And sorry, but using the number of gzipped bytes to compare code
length is just ridiculous.


Regards,
apfelmus


