'any' and 'all' compared with the rest of the Report

Jan-Willem Maessen jmaessen@mit.edu
Fri, 26 Jan 2001 12:26:17 -0500


Marko Schuetz <MarkoSchuetz@web.de> starts an explanation of bottom
vs. error with an assumption which I think is dangerous:
> Assuming we want 
> 
> \forall s : bottom \le s
> 
> including s = error, then we should also have
> error \not\le bottom.

He uses this, and an argument based on currying, to show that strict
functions ought to force their arguments left to right.

This seems to me to be another instance of an age-old debate in
programming language design/semantics: If we can in practice observe
evaluation order, should we therefore specify that evaluation order?
This is a debate that's raged for quite a while among users of
imperative languages.  Some languages (Scheme comes to mind) very
clearly state that this behavior is left unspecified.
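
To make "observe evaluation order" concrete (my example, not part of
the original exchange): both operands below are bottom, yet which
message the program prints depends on which operand is forced first.

bothBottom :: Int
bothBottom = error "left" + error "right"

-- An implementation forcing left-to-right reports "left"; one
-- forcing right-to-left reports "right".  If error and
-- nontermination are both simply bottom, either run denotes the
-- same value, so we need not specify the order.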

Fortunately, for the Haskell programmer the debate is considerably
simpler, as only the behavior of "wrong" programs is affected.  I am
more than willing to be a little ambiguous about the error behavior of
programs, by considering "bottom" and "error" to be one and the same.
As I noted in my last message, this allows some optimizations which
would otherwise not be allowed.  Here's the worker-wrapper
optimization at work; I'll use explicit unboxing to make the
evaluation clear.

forever x = forever x  -- this definition is in scope throughout.

addToForever :: Int -> Int
addToForever b = forever () + b

main = print (addToForever (error "Bottom is naughty!"))

==
-- expanding the definition of (+) on Int: I# is the boxed-Int
-- constructor, and +# is primitive addition on unboxed Int#

addToForever b =
  case forever () of
  I# a# ->
    case b of 
    I# b# -> I# (a# +# b#)

==
-- At this point strictness analysis reveals that addToForever
-- is strict in its argument.  As a result, we perform the worker-
-- wrapper transformation:

addToForever b = 
  case b of
  I# b# -> addToForever_worker b#

addToForever_worker b# =
  let b = I# b# 
  in  case forever () of
      I# a# ->
        case b of 
        I# b# -> I# (a# +# b#)

==
-- The semantics have changed: the wrapper now evaluates b before
-- forever ().
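
Spelling out the observable difference (my note, using the
definitions above): originally main loops forever, since forever ()
is forced first; after the transformation the wrapper forces the
argument first, so

addToForever (error "Bottom is naughty!")
==
case error "Bottom is naughty!" of    -- the error wins: the program
I# b# -> addToForever_worker b#       -- halts with "Bottom is naughty!"

If error and nontermination are both identified with bottom, the two
behaviors denote the same value, and the transformation stays sound.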

I've experimented with versions of Haskell where order of evaluation
did matter.  It was a giant albatross around our neck---there is
simply no way to cleanly optimize programs in such a setting without
doing things like termination analysis to justify the nice old
transformations.  If you rely on precise results from an analysis to
enable transformation, you all-too-frequently miss obvious
opportunities.  For this reason I *very explicitly* chose not to make
such distinctions in my work.

In general, making too many semantic distinctions weakens the power of
algebraic semantics.  Two levels---bottom and everything else---seems
about the limit of acceptable complexity.  If you look at the work on
free theorems you quickly discover that even having bottom in the
language makes life a good deal more difficult, and really we'd like
to have completely flat domains.
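
A standard illustration of the difficulty (my example, in the style
of Wadler-style free theorems): the free theorem for any
f :: [a] -> [a] promises map g . f = f . map g, but once bottom is in
the language it holds only for strict g.

f :: [a] -> [a]
f xs = undefined : xs   -- perfectly polymorphic, but conses a bottom

g :: Int -> Int
g _ = 0                 -- non-strict: g undefined = 0

-- map g (f [])  =  [g undefined]  =  [0]
-- f (map g [])  =  [undefined]
-- The two sides differ at the head, so the equation acquires a
-- strictness side condition the moment bottom enters the language.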

I'd go so far as to say that this simplicity also gives us some
prayer of explaining our algebraic semantics to the programmer.  A
complex algebra becomes too bulky to reason about when things act
strangely.  Bottom is making things hard enough.

-Jan-Willem Maessen

PS - Again, we don't try to recover from errors.  This is where the
comparison with IEEE arithmetic breaks down: NaNs are specifically
designed so you can _test_ for them and take action.  I'll also point
out that infinities are _not_ exceptional values; they're semantically
"at the same level" as regular floats---so the following comparison is
a bit disingenuous:
> If error \equiv bottom and you extend, say, Int with NaNs, how do you
> implement arithmetic such that Infinity + Infinity \equiv Infinity and
> Infinity/Infinity \equiv Invalid Operation?
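
For what it's worth, this is how an IEEE implementation of Haskell's
Double already behaves (a sketch of my own, not from Marko's mail):

inf :: Double
inf = 1 / 0                  -- IEEE Infinity: an ordinary value

main :: IO ()
main = do
  print (inf + inf)          -- Infinity + Infinity = Infinity
  print (isNaN (inf / inf))  -- True: "Invalid Operation" produces a
                             -- NaN you can test for, not a bottom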