# [Haskell-cafe] The mother of all functors/monads/categories

Max Bolingbroke batterseapower at hotmail.com
Sun Jun 27 13:28:01 EDT 2010

```
On 27 June 2010 13:16, Sjoerd Visscher <sjoerd at w3future.com> wrote:
> Allowing Functor i also makes defining Thingy directly (without going through Monoidal) easy:
>
> newtype Thingy i a = Thingy { runThingy :: forall b. i (a -> b) -> i b }

Great stuff! I particularly like your <*> definition, because it
mirrors the composition law for Applicative nicely:
-- pure id <*> v = v                            -- Identity
-- pure (.) <*> u <*> v <*> w = u <*> (v <*> w) -- Composition
-- pure f <*> pure x = pure (f x)               -- Homomorphism
-- u <*> pure y = pure ($ y) <*> u              -- Interchange
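(For readers who don't have the earlier mail to hand, an Applicative
instance of this shape -- my reconstruction, not necessarily Sjoerd's
exact code -- would be:)

{-# LANGUAGE RankNTypes #-}

-- Sjoerd's definition, repeated so this snippet stands alone:
newtype Thingy i a = Thingy { runThingy :: forall b. i (a -> b) -> i b }

instance Functor i => Functor (Thingy i) where
  fmap f m = Thingy (\k -> runThingy m (fmap (. f) k))

instance Functor i => Applicative (Thingy i) where
  pure x    = Thingy (\k -> fmap ($ x) k)
  mf <*> mx = Thingy (\k -> runThingy mx (runThingy mf (fmap (.) k)))

The fmap (.) k in <*> is exactly where the Composition law shows up: the
continuation is pre-composed with (.) before mf and mx each take their turn.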

I think it's very important that we use Yoneda here. The reason is
that all the examples I have so far express some sort of normalisation
property:
1) Functors can be normalised by fusing together fmap compositions
into single fmaps
2) Categories can be normalised by reassociating (.) applications into
a canonical form, and by using the identity laws to discard compositions with id.
This is achieved in Wotsit by using the associativity and id property
of (.) for functions.
3) Monads can be normalised using the monad laws to reassociate (>>=)
rightwards and using left and right identity to discard parts of the
computation. The resulting form seems to correspond to the "monadic
normal form" from the literature.

Applicatives also have a normalisation property. I first saw this in
Malcolm Wallace's Flatpack presentation at Fun In The Afternoon:
http://sneezy.cs.nott.ac.uk/fun/feb-10/. Unfortunately the slides are
not online. The normalisation rules are something like this, each
corresponding to an applicative law:

v                 ==> pure id <*> v              -- Identity
u <*> (v <*> w)   ==> pure (.) <*> u <*> v <*> w -- Composition
pure f <*> pure x ==> pure (f x)                 -- Homomorphism
u <*> pure y      ==> pure ($ y) <*> u           -- Interchange

Some critical side conditions are needed here and there to ensure
termination. But the idea as I remember it was to rewrite all
applicative expressions to a form (pure f <*> u1 <*> ... <*> un) where
f is a pure function and each ui is a side-effecting computation which
is not of the form (v <*> w) or (pure f) -- i.e. it uses some
non-Applicative combinators to get its side effects.
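To make that concrete, here's a tiny worked example using the writer
applicative ((,) String), so there is exactly one genuine effect; the
steps are just the rules above read left to right:

--   pure (+1) <*> (pure (*2) <*> ("eff", 3))
-- Composition:          ==> pure (.) <*> pure (+1) <*> pure (*2) <*> ("eff", 3)
-- Homomorphism (twice): ==> pure ((+1) . (*2)) <*> ("eff", 3)

original, normalised :: (String, Int)
original   = pure (+1) <*> (pure (*2) <*> ("eff", 3))
normalised = pure ((+1) . (*2)) <*> ("eff", 3)

Both evaluate to ("eff", 7), but the normalised form is in the canonical
shape: a single pure function applied to the one effectful argument.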

Somehow the "mother of all applicatives" should encode this
normalisation property. If we didn't use Yoneda in our current
definition, then my intuition tells me that we wouldn't be able to get
guaranteed fusion of adjacent pure f <*> pure x computations, so the
normalisation property corresponding to the modified Thingy would be
weaker.

I'm going to try automatically deriving an NBE algorithm for Moggi's
monadic metalanguage from the Codensity monad - with luck it will
correspond to the one-pass algorithm of Danvy. If this is possible it
will further strengthen the connection between "mothers" and NBE. By
repeating this exercise for Applicative and Thingy we should get a
beautiful algorithm for finding applicative normal forms.

Exciting stuff! Will report back if I make progress :-)

Max
```