# [Haskell-cafe] help understanding lazy evaluation

Stefan O'Rear stefanor at cox.net
Thu Aug 23 04:23:52 EDT 2007

```
On Thu, Aug 23, 2007 at 10:00:00AM +0200, Xavier Noria wrote:
> You people rock. Responses were really helpful and I understand how the
> computation goes now.
>
> I see I need to reprogram my eyes to expect lazy evaluation in places where
> I am used to one-shot results. I see lazy evaluation is all around in
> Haskell builtins.
>
> From a formal point of view how does this work?

As is usual for mathematical things, there are many equivalent
definitions.  My two favorites are:

1. Normal order reduction

In the λ-calculus, lazy evaluation can be defined as normal order
reduction: always reduce the leftmost-outermost redex first.  This is
the evaluation method (unique up to always giving the same answers)
that terminates whenever *any* evaluation order terminates.  That this
works is a rather subtle theorem (essentially the standardization
theorem), not a trivial definition, so it can be hard to use.
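-- [An illustration of my own, not from the original mail.]  Because
-- normal order reduction only evaluates what the answer demands, a
-- program can terminate even when an argument would diverge under
-- call-by-value:

-- 'fst' discards the second component, so the error thunk is never forced.
safeFirst :: Int
safeFirst = fst (42, error "never forced")

-- Only five cells of the infinite list are ever constructed.
firstFive :: [Int]
firstFive = take 5 (cycle [1, 2, 3])   -- [1,2,3,1,2]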

2. Complete partial ordering

Add a value ⊥ to every type, recursively (so Bool contains ⊥, True,
False; while [()] contains ⊥, [], (⊥:⊥), (():⊥), (⊥:[]), ...)

The semantics for case contain a new rule:

case ⊥ of { ... }  ===>  ⊥

Define a partial order ≤ by:

⊥ ≤ x                  for all x
(C a b c) ≤ (D e f g)  if C = D and a ≤ e, b ≤ f, c ≤ g

It is easily verified that this makes every type a complete partial
order with bottom element ⊥.

It can also be verified that every function definable using these
operations is monotone (a ≤ b implies f a ≤ f b), and in fact
continuous, which is what the fixed-point construction below needs.
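-- [A sketch of my own, not from the original mail.]  In Haskell, ⊥ is
-- what 'undefined' denotes.  Monotonicity shows up concretely: a
-- function that never inspects a subterm gives the same answer whether
-- that subterm is ⊥ or fully defined.  'length' walks only the list
-- spine, so it treats (⊥:⊥:[]) exactly like (():():[]):

spineOnly :: Int
spineOnly = length (undefined : undefined : [])   -- 2; elements stay unforced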

Now, define:

let x = y in z

as:

(λx. z) (fix (λx. y))

where fix :: (α -> α) -> α is the operation that returns the least
fixed point of its argument, which by Kleene's fixed-point theorem is
guaranteed to exist (and, being the *least* fixed point, to be unique).

In many ways this is much less obvious than even the previous one, but
it happens to be very useful.  Strictness analysis is trivial to
understand once you grok the application of Kleene's Theorem to lazy
programs.
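-- [A sketch of my own; a fix of this type also ships in Data.Function.]
-- The least fixed point is the limit of the chain ⊥ ≤ f ⊥ ≤ f (f ⊥) ≤ ...,
-- and in Haskell it can be written directly with a lazy 'let':

fix' :: (a -> a) -> a
fix' f = let x = f x in x

-- Recursion recovered via fix', as in the 'let' desugaring above:
factorial :: Integer -> Integer
factorial = fix' (\rec n -> if n == 0 then 1 else n * rec (n - 1))

-- An infinite structure as a least fixed point: ones = 1 : ones.
ones :: [Int]
ones = fix' (1 :)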

> The specifications in
>
>   http://www.haskell.org/onlinereport/standard-prelude.html
>
> include the lazy aspect of the definitions albeit they do not require that
> exact implementation, right?

Indeed, you've caught on to an important technical distinction.

Lazy:  Always evaluating left-outermost-first.

Non-strict: Behaving as if it were lazy.

Haskell's semantics are non-strict, but there is no requirement for
laziness.  All optimizing Haskell compilers are non-lazy: they are free
to change the evaluation order as long as the non-strict semantics is
preserved.
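-- [My illustration, not from the original mail.]  Non-strictness
-- constrains *answers*, not the evaluation order.  Any conforming
-- implementation, lazy or not, must produce these results, yet it may
-- evaluate an argument early whenever analysis proves that this cannot
-- change the answer:

ignoresArg :: Int
ignoresArg = const 1 undefined                               -- must be 1

shortCircuit :: Bool
shortCircuit = False && error "right operand never needed"   -- must be False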

> On the other hand, where does Haskell 98 say == is lazy on lists?

Chapter 8, Standard Prelude.  The code, when interpreted using a
non-strict semantics, implements a non-strict ==.
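-- [A paraphrase of my own; the Report's derived Eq instance for lists
-- behaves like this.]  Read non-strictly, the comparison stops at the
-- first mismatch, so even an infinite list can compare unequal in
-- finite time:

eqList :: Eq a => [a] -> [a] -> Bool
eqList []     []     = True
eqList (x:xs) (y:ys) = x == y && eqList xs ys
eqList _      _      = False

finiteAnswer :: Bool
finiteAnswer = eqList (repeat 1) [1, 2]   -- False at index 1, in finite time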

Stefan
```
