Prelude.catch vs. Exception.catch

Simon Marlow <simonmar@microsoft.com>
Tue, 14 May 2002 11:57:47 +0100


> At 2002-05-14 02:58, Simon Marlow wrote:
>
> >I must admit I can't think of any compelling reasons for the change,
> >other than the fact that this is functionality that we don't have at
> >the moment, and therefore might be useful.  Opinions?
>
> I need a function that does this:
>
>     evaluate :: a -> IO a
>     evaluate _|_ = fail something
>     evaluate a = return a
>
> The idea is that you can take something that might be bottom and safely
> handle it in the IO monad, with the "bottomness" gone. This is what
> Exception.evaluate currently does, and I think that's correct.

Ok, I'll change the definition of evaluate to reflect the slightly
different semantics.

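For concreteness, here's a small sketch of the intended use, written
against the Exception module as discussed here (safeHead is just a
made-up example):

    import qualified Exception

    -- Force a possibly-bottom value inside the IO monad, so that the
    -- "bottomness" becomes an ordinary catchable exception.
    safeHead :: [a] -> IO (Maybe a)
    safeHead xs =
        fmap Just (Exception.evaluate (head xs))
          `Exception.catch` \_ -> return Nothing

    -- safeHead []      returns Nothing (head [] is _|_)
    -- safeHead [1,2,3] returns Just 1
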
It turns out that the compiler "bug" is really just the compiler being a
bit loose with the IO monad: it freely translates the original
definition of evaluate, written with 'seq', into the slightly less
strict version by pushing the 'seq' through the state lambda of the IO
monad.  (This only happens for the IO monad; strictly speaking it's a
deviation from the semantics, but it has important performance benefits
for IO code.)

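To make the two strictnesses concrete, this is roughly the shape of the
definitions in question (a sketch, not the actual library source):

    -- With the 'seq' outside the IO action, the action itself is
    -- bottom whenever its argument is, i.e. evaluate _|_ = _|_.
    evaluate :: a -> IO a
    evaluate x = x `seq` return x

    -- Pushing the 'seq' under the IO state lambda gives, schematically,
    --
    --     evaluate x = IO (\s -> x `seq` (# s, x #))
    --
    -- which is a well-defined action for any x: the argument is only
    -- forced when the action is run, so evaluate _|_ is an action that
    -- fails rather than _|_ itself, which is what the spec above asks for.
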
> I think the behaviour of Exception.catch is wrong. I think it should
> behave like this:
>
>     catch _|_ cc = _|_
>     catch (failure ex) cc = cc ex
>     catch (success a) cc = success a
>
> ...whereas it actually behaves like this:
>
>     catch _|_ cc = cc something
>     catch (failure ex) cc = cc ex
>     catch (success a) cc = success a

So your catch can be defined in terms of the current catch like so:

	catch' a h = a `seq` (catch a h)

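To spell out the observable difference, a rough sketch (made-up main;
qualified import for the Exception module as above):

    import qualified Exception

    -- Your catch, defined on top of the current one.
    catch' a h = a `seq` Exception.catch a h

    main :: IO ()
    main = do
      -- Current behaviour: even a bottom action is caught,
      -- i.e. catch _|_ cc = cc something.
      Exception.catch undefined (\_ -> putStrLn "caught")
      -- Under the strict reading, catch' _|_ cc = _|_, so here the error
      -- should escape the handler (though, as noted above, GHC may still
      -- float the 'seq' under the IO state lambda):
      -- catch' undefined (\_ -> putStrLn "not reached")
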
What's the motivation for this change?

Cheers,
	Simon