Jacques Carette carette at mcmaster.ca
Fri Jan 28 17:48:13 EST 2005

[I have now subscribed to haskell-cafe]

Henning Thielemann <lemming at henning-thielemann.de> wrote:
> This seems to be related to what I wrote yesterday

Yes, very much.  Except that rather than trying to tell mathematicians that they are *wrong*, I am trying to see which
of their notations I can explain (in a typed way).  There will be some 'leftovers', where the notation is simply bad.

> I've collected some examples of abuse and bad mathematical notation:
>  http://www.math.uni-bremen.de/~thielema/Research/notation.pdf

Some of what you point out there is bad notation.  Other bits point to some misunderstanding of the issues.

> f(x) \in L(\R)
>    where f \in L(\R) is meant
>
> F(x) = \int f(x) \dif x
>    where x shouldn't be visible outside the integral

First, mathematicians like to write f(x) to indicate clearly that they are denoting a function.  This is equivalent to
writing down (f \in _ -> _) with domain and range omitted.

Second, every mathematician knows that \int f(x) \dif x == \int f(y) \dif y (ie alpha conversion), so that combined
with the previous convention, there is no confusion in writing F(x) = \int f(x) \dif x.  It is just as well-defined as
(\x.x x) (\x.x x)
which requires alpha-conversion too for proper understanding.
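The alpha-conversion point can be made concrete in Haskell (a small sketch of mine; the names f1 and f2 are not from
the discussion):

```haskell
-- Alpha-equivalent lambda terms denote the same function:
-- the name of the bound variable is irrelevant.
f1, f2 :: Integer -> Integer
f1 = \x -> x * x
f2 = \y -> y * y
-- f1 and f2 agree at every point of their domain.
```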

You also write
> O(n)
>    which should be O(\n -> n) (a remark by Simon Thompson in
>                                The Craft of Functional Programming)
but the only reason for this is that computer scientists don't like open terms.  Since the argument to O must be a
univariate function with range the Reals, whatever is written there must *denote* such a function.  The term
"n" can denote only one such function, \n -> n.  So the mathematician's notation is in fact much more pleasing.
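To illustrate, here is a hedged sketch of a *typed* O in Haskell: bigO takes a function, never a bare open term, and
"g \in O(f)" is approximated by searching for witnessing constants over a finite range.  The bounds are arbitrary
assumptions of mine; this is an illustration, not a definition of asymptotics.

```haskell
-- A typed reading of big-O: its argument is a function \n -> n,
-- not the open term "n".  Membership g \in O(f) is witnessed by
-- constants c and n0; here we merely *approximate* the check on
-- a finite range, so this is a sketch, not a real definition.
bigO :: (Integer -> Integer) -> (Integer -> Integer) -> Bool
bigO f g = any witness [ (c, n0) | c <- [1 .. 10], n0 <- [0 .. 10] ]
  where
    witness (c, n0) = all (\n -> g n <= c * f n) [n0 .. 100]

-- the mathematician's O(n), read as O(\n -> n)
linear :: Integer -> Integer
linear = \n -> n
```

On this finite range, bigO linear (\n -> 3*n + 5) succeeds while bigO linear (\n -> n*n) fails, as one would hope.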

However, you have to remember one crucial fact: a set of typographical symbols is meant to *denote* a value; the
symbols are not values in themselves.  So there is always a "denotes" function around, applied implicitly.

What is definitely confusing is the use of = with O notation.  The denotation there is much more complex - and borders
on incorrect.

> a < b < c
>    which is a short-cut of a < b \land b < c

That is plain confusion between the concepts of "denotation" and "value".  Where < is a denotation of a binary
function a x a -> bool, _ < _ < _ is a mixfix denotation of a constraint, which could be denoted in a long-winded
fashion by
p a b c = a<b and b<c
but more accurately by
p a c = \b -> b \in ]a,c[
where I am using mathematical notation for the body above.
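Both denotations are directly expressible in Haskell (a sketch; the names are mine):

```haskell
-- the long-winded ternary predicate
p :: Ord a => a -> a -> a -> Bool
p a b c = a < b && b < c

-- the more accurate reading: a < _ < c denotes the characteristic
-- function of the open interval ]a,c[
inOpen :: Ord a => a -> a -> (a -> Bool)
inOpen a c = \b -> a < b && b < c
```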

In your "notation.pdf" (link above), you have some other mis-interpretations.  On p.10 you seem to think that
Mathematica is a lazy language, when it is in fact an eager language; so your interpretation "does not make sense".
Not that your observation is incorrect.
Not that your observation is incorrect.  In Maple, there are two functions, eval(expr, x=pt) and subs(x=pt, expr)
which do "similar" things.  But subs is pure textual substitution (ie the CS thing), whereas 2-argument eval means
"evaluate the function that \x -> expr denotes at the point pt" (ie the math thing).  The interesting thing is that
"the function that \x -> expr denotes" is allowed to remove (removable) singularities in its "denotation" map.
However,
> subs(x=2,'diff'(ln(x),x)) ;
diff(ln(2),2)
where the ' quotes mean to delay evaluation of the underlying function.  On the other hand
> eval('diff'(ln(x),x),x=2) ;
eval('diff'(ln(x),x),x=2)
because it makes no sense to evaluate an open term which introduces a (temporary) binding for one of its variables.
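The subs/eval contrast can be sketched in Haskell over a toy expression type (the type and the names are my own, and
this deliberately ignores delayed evaluation):

```haskell
-- A toy expression type, to contrast textual substitution ("subs")
-- with evaluation of the denoted function at a point ("eval").
data Expr = Var String | Lit Double | Add Expr Expr | Mul Expr Expr
  deriving (Show, Eq)

-- subs: pure textual substitution (the CS thing)
subs :: String -> Expr -> Expr -> Expr
subs x r e = case e of
  Var y | y == x -> r
  Add a b        -> Add (subs x r a) (subs x r b)
  Mul a b        -> Mul (subs x r a) (subs x r b)
  _              -> e

-- evalAt: evaluate the function that \x -> e denotes at a point
-- (the math thing); a term left open yields Nothing.
evalAt :: String -> Double -> Expr -> Maybe Double
evalAt x v = go
  where
    go (Lit d)   = Just d
    go (Var y)   = if y == x then Just v else Nothing
    go (Add a b) = (+) <$> go a <*> go b
    go (Mul a b) = (*) <$> go a <*> go b
```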

Note that without access to terms, it is not possible to write a function like diff (or derive as you phrase it, or D
as Mathematica calls it).  The mathematician's diff looks like it has signature diff: function -> function, but in
fact it is more often treated as having signature diff: function_denotation -> function_denotation.  See a post by
Oleg in December on the haskell list for how it is possible (with type classes) in Haskell to automatically pair
up a function and its function_denotation.
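A minimal sketch of that distinction, over a toy term type of my own devising: diff works on the denotation (the
term), while a separate denote function recovers the function the term stands for.

```haskell
-- diff: function_denotation -> function_denotation.  It needs
-- access to the term; a bare Double -> Double would not suffice.
data Term = X | Lit Double | Add Term Term | Mul Term Term

diff :: Term -> Term
diff X         = Lit 1
diff (Lit _)   = Lit 0
diff (Add a b) = Add (diff a) (diff b)
diff (Mul a b) = Add (Mul (diff a) b) (Mul a (diff b))  -- product rule

-- the pairing: from a denotation to the function it denotes
denote :: Term -> (Double -> Double)
denote X         v = v
denote (Lit d)   _ = d
denote (Add a b) v = denote a v + denote b v
denote (Mul a b) v = denote a v * denote b v
```

For instance, denote (diff (Mul X X)) is the derivative of \x -> x*x as an actual function.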

You also seem to assume that set theory is the only axiomatization of mathematics that counts (on p.31).  I do not see
functions A -> B as somehow being subsets of powerset(A x B).  That to me is one possible 'implementation' of
functions.  This identification is just as faulty as the one you point out on p.14 of the naturals not "really"
being a subset of the rationals.  In both cases, there is a natural embedding taking place, but it is not the
identity.
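The implementation-versus-abstraction point can itself be sketched in Haskell: a finite function can be implemented
as its graph, a subset of A x B, but moving between that and a Haskell function A -> B is an embedding, not the
identity.  (The names here are mine.)

```haskell
import qualified Data.Map as Map

-- One possible *implementation* of functions: finite graphs,
-- i.e. subsets of A x B.  Converting between a Graph and a
-- Haskell function is an embedding, not the identity -- just
-- as N embeds into Q without literally being a subset of it.
type Graph a b = Map.Map a b

toGraph :: Ord a => [a] -> (a -> b) -> Graph a b
toGraph dom f = Map.fromList [ (x, f x) | x <- dom ]

applyG :: Ord a => Graph a b -> a -> Maybe b
applyG g x = Map.lookup x g
```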

You also have the signatures of a number of functions not quite right.  "Indefinite" integration does not map
functions to functions, but functions to equivalence classes of functions.  Fourier transforms (and other integral
transforms) map into functionals, not functions.

I hope your audience (for things like slide 35) was made of computer scientists - it is so condescending to
thousands of mathematicians that it is amazing you were not immediately booted out!

On p.37, you have polynomials backwards.  Polynomials are formal objects, but they (uniquely) denote a function.  So
polynomials CANNOT be evaluated, but they denote a unique function which can.
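This reading of polynomials is easy to make precise (a sketch, with my own names): the polynomial is a formal
coefficient list, and a separate denotation map produces the unique function it denotes.

```haskell
-- A polynomial as a formal object: coefficients, lowest degree first.
newtype Poly = Poly [Double]

-- The polynomial itself cannot be evaluated; the function it
-- denotes can (computed here via Horner's rule).
denotePoly :: Poly -> (Double -> Double)
denotePoly (Poly cs) x = foldr (\c acc -> c + x * acc) 0 cs
```

For example, denotePoly (Poly [1, 2, 3]) is the function \x -> 1 + 2*x + 3*x^2.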

On p.51, where you speak of "hidden" quantifiers, you yourself omit the \exists quantifiers that are implicit in
the last 2 lines -- why?

The above was meant to be constructive -- I apologize if it has come across otherwise.  This is an under-studied area,
and you should continue to look at it.  But you should try a little harder not to assume that thousands of
mathematicians over a couple of hundred years (at least) are that easily "wrong".  Redundant, sub-optimal, sure.

Jacques