Haskell puzzles!
Daan Leijen
daan@cs.uu.nl
Tue, 19 Mar 2002 15:55:58 +0100
Hi all,
I have put together some interesting Haskell puzzles!
Not many people were able to solve all three puzzles
so don't be discouraged if you don't know all the answers.
Have fun,
Daan.
-----------------------------------------------------
- All three puzzles are "Haskell 98"; you can solve
them using the Haskell 98 manual and they should be
implementation-independent.
- "equal" means that two evaluated expressions denote
the same semantic value; i.e. when typed at the
Hugs prompt, I will see the same answer.
- All the answers are at the end of this mail.
-----------------------------------------------------
1) Are e1 and e2 equal?
> f (x:xs) y = x
> g (x:xs) = \y -> x
>
> e1 = seq (f []) 1
> e2 = seq (g []) 1
-----------------------------------------------------
2) Are e1 and e2 equal?
Is one definition preferred over the other?
(And what about "h x = nfib 30" ?)
> f = const (nfib 30)
> g = \x -> nfib 30
>
> e1 = f 1 + f 2
> e2 = g 1 + g 2
-----------------------------------------------------
3) Are e1 and e2 equal?
And what if I typed them on the Hugs prompt?
> f = \x -> x + x
> g x = x + x
>
> e1 = f 1.0 + f 2.0
> e2 = g 1.0 + g 2.0
-----------------------------------------------------
Answers at the end of this email.....
Answers:
-----------------------------------------------------
1)
answer:
"e1" equals the value 1, while "e2" is undefined.
explanation:
This is due to the translation of pattern matching.
The functions are desugared into:
> f x' y = case x' of (x:xs) -> x
> g x' = case x' of (x:xs) -> (\y -> x)
Now, the expression "f []" is a partial application
and thus already in weak head normal form, while "g []"
forces the case expression and the pattern match fails.
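This can be checked in a runnable sketch (not part of the original
mail; the use of GHC's Control.Exception is an assumption, needed
only to observe the failed match from within the program):

```haskell
import Control.Exception (SomeException, evaluate, try)

f (x:xs) y = x        -- desugars to: f x' y = case x' of (x:xs) -> x
g (x:xs) = \y -> x    -- desugars to: g x' = case x' of (x:xs) -> \y -> x

main :: IO ()
main = do
  -- "f []" is a partial application, already in WHNF, so seq succeeds:
  r1 <- try (evaluate (seq (f []) (1 :: Int))) :: IO (Either SomeException Int)
  putStrLn (either (const "undefined") show r1)
  -- "g []" forces the case expression; the pattern match fails:
  r2 <- try (evaluate (seq (g []) (1 :: Int))) :: IO (Either SomeException Int)
  putStrLn (either (const "undefined") show r2)
```

Running it prints "1" for "e1" and "undefined" for "e2".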
opinion:
I think that the pattern matching translation rules
are wrong here and we shouldn't push the "case" through
the lambdas. However, the current rules are potentially
more efficient since all arguments can be collected
before matching.
-----------------------------------------------------
2)
answer:
Yes, "e1" and "e2" are equal. However, the "f" definition
is preferred since it potentially shares the computation
of "nfib 30". ("e1" will be almost twice as fast as "e2"
on most systems).
explanation:
The Haskell language is only defined to be "non-strict".
This means that there are no guarantees about sharing. However,
all current Haskell systems use lazy semantics. This means
that both "f" and "g" are CAFs (constant applicative forms).
When "g" is evaluated, the lambda "\x -> nfib 30" is shared, but
on each call "nfib 30" is re-evaluated. When "f" is evaluated, the
application node "const (nfib 30)" is shared; once it is forced it
is updated, so "nfib 30" is evaluated only once. Smart compilers
like GHC, however, will inline definitions like "g", and after common
subexpression elimination we end up with an efficient version as well.
The same would hold for a full laziness transformation.
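The difference in sharing can be made visible with Debug.Trace (a
sketch, not part of the original mail; "nfib 30" is replaced by a
cheap stand-in, and the trace counts hold for GHC without
optimisation -- with -O the compiler may transform either definition):

```haskell
import Debug.Trace (trace)

-- cheap stand-in for "nfib 30", so the (re)computation is observable
costly :: Integer
costly = 42

f :: a -> Integer
f = const (trace "computing!" costly)  -- shared application node

g :: a -> Integer
g = \x -> trace "computing!" costly    -- body rebuilt on every call

main :: IO ()
main = do
  print (f 1 + f 2)  -- "computing!" typically appears once: the thunk is updated
  print (g 1 + g 2)  -- "computing!" typically appears twice at -O0
```

Both lines print 84; only the number of "computing!" messages
(on stderr) differs.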
Note that "h x = nfib 30" behaves exactly like "g", except that its
type is more general: "h" is a function binding, so the monomorphism
restriction and defaulting do not apply to it:
> f,g :: a -> Integer
> h :: Num b => a -> b
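A small module illustrates these types (a sketch; this "nfib" is an
assumed doubly recursive definition, kept small here):

```haskell
nfib :: Num b => Integer -> b
nfib n | n < 2     = 1
       | otherwise = nfib (n-1) + nfib (n-2) + 1

f = const (nfib 3)  -- simple pattern binding: restricted, defaults to a -> Integer
g = \x -> nfib 3    -- simple pattern binding: restricted, defaults to a -> Integer
h x = nfib 3        -- function binding: not restricted, h :: Num b => a -> b

main :: IO ()
main = do
  print (f ())            -- monomorphic result: 5
  print (h () :: Double)  -- "h" can still be used at Double: 5.0
```

Asking for "f () :: Double" instead would be a type error, since "f"
was already fixed at "a -> Integer".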
opinion:
I believe that the operational distinction between both definitions
is fundamentally important and should be expressible by the user.
-----------------------------------------------------
3)
answer:
Yes, "e1" and "e2" are equal (6.0). However, when "e1" and "e2"
are used outside the scope of "f" and "g", for example when typed
on the Hugs prompt, "e1" is not well-typed.
explanation:
The type of "g" is "Num a => a -> a". One would expect the same
type for "f" but unfortunately, the monomorphism restriction prevents
this and defaulting tries to resolve the instance. Based on the usage
of "f", it will default to "Double" and we get the type "Double -> Double"
for "f". However, if we use it from the Hugs prompt, there is
no use site of "f" and defaulting resorts to the "default defaulting"
rules, setting the type of "f" to "Integer -> Integer"; the expression
"f 1.0" then becomes ill-typed -- that is, unless I use a "default"
declaration in the module, as in:
> default (Double)
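Putting it together in a module (a sketch, not part of the original
mail): inside the module the use sites fix the defaulted type of "f"
at Double, so both expressions compile and agree. (Note that recent
GHCi versions disable the monomorphism restriction at the prompt, so
reproducing the Hugs behaviour there may need
":set -XMonomorphismRestriction".)

```haskell
f = \x -> x + x     -- monomorphism restriction: monomorphic, fixed by its uses
g x = x + x         -- function binding: g :: Num a => a -> a

e1 = f 1.0 + f 2.0  -- the literals force f :: Double -> Double via defaulting
e2 = g 1.0 + g 2.0

main :: IO ()
main = print (e1, e2)  -- (6.0,6.0)
```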
opinion:
Clearly, as puzzle 2 shows, we want to have a distinction between
shared values and functions. So we at least need to be warned when
the compiler adds a dictionary argument. I would say that an
"overloading warning" is more appropriate than a "monomorphism restriction".
Anyway, the real culprit here is the horrible defaulting mechanism.
I believe that defaulting has no place in Haskell modules and should
probably only be used on expressions in a top-level interpreter.