Last call generalised (was Re: tail recursion, (was Re: Need some help please))

b.i.mills@massey.ac.nz
Fri, 29 Aug 2003 09:04:08 +1200


Yo,

> copyList (x:xs) = x : copyList xs
> 
> is surely not tail-recursive in the traditional sense, but I think
> that most Haskell programmers  take it for granted that it runs in
> constant stack space.

I'm not sure how this is a continuation of the safetail problem,
and there are others who know a lot more about the specific
guts of GHC and Hugs, but in that vein I have a general
comment on what I see as the principle at work here.

I brought up the same issue some time back about >>. That is,
in func = f x >> func, we have the problem that >> is a function,
so func is not a last call. (David Bergman emphasised that this is
better viewed as "last call" than "tail recursion".) But if this
were C, the function would be func = {f x ; func}, in which func
is then the last call. But the Haskell version of this is
func = do { f x ; func }, which is just syntactic sugar for the
first form, so again, no "last call". Haskell compilers do
have optimisations related to this, but I just want to make
a generic comment.
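
To pin down what I mean, here is the situation written out in
full (a minimal sketch of my own; f, func and the argument 0 are
just placeholders, not anything from the earlier thread):

f :: Int -> IO ()
f x = print x

func :: IO ()
func = f 0 >> func        -- the outermost call is (>>), not func

-- The do-notation version desugars to exactly the line above,
-- so syntactically it is no more of a "last call":
func' :: IO ()
func' = do { f 0; func' }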

In procedural programming, the idea of the last call is that
there is an operator ; that is often thought of as separating
statements. I would argue that it is rather joining commands,
with a transfer of state. That is, in C, { cmdA ; cmdB ; } means
do cmdA with the current machine state, and then do cmdB, passing
it the resulting state from cmdA. Thus the Haskell interpretation
of ; as >> or >>= in monad terms is really just a natural 
statement of what is going on. Viva Haskell! But the point is
that this means that "last call" is actually relative to the
composition operation. So, traditional procedural last call
optimisation is just last-call-wrt-join, or something like that.
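
To make that reading concrete, here is a small sketch (purely
illustrative; the Cmd type and the names are made up for this
message, and this is not how GHC or Hugs model it): a command is
a state-passing function, and joining two commands means running
the first and handing its resulting state to the second. This is,
of course, just (>>) for a state monad.

newtype Cmd s a = Cmd { runCmd :: s -> (a, s) }

joinCmds :: Cmd s a -> Cmd s b -> Cmd s b
joinCmds (Cmd cmdA) (Cmd cmdB) = Cmd $ \s0 ->
  let (_, s1) = cmdA s0   -- do cmdA with the current state
  in  cmdB s1             -- then do cmdB with the state cmdA left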

Thus, the situation for optimisation is 

f x = joinOp x (f (g x))

where joinOp has the characteristic that it does not really
need to know what the second argument is in order to proceed
with the first part of the computation. E.g., using >>=, we can
finish with the LHS completely before going to the RHS, and
likewise with copyList (x:xs) = x : copyList xs we can deal
with the (x :) completely and never have to come back to it.
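
As a concrete check of the copyList case (again a sketch; the []
clause, take and the infinite input are my additions, only there
to make it run), laziness delivers each (:) cell to the consumer
before the recursive call is ever forced:

copyList :: [a] -> [a]
copyList []     = []
copyList (x:xs) = x : copyList xs

-- (:) is lazy in its second argument, so no stack of pending
-- returns builds up; this works even on an infinite input:
main :: IO ()
main = print (take 5 (copyList [1 ..]))   -- prints [1,2,3,4,5]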

Again, I expect my comments are unorthodox, but I certainly
feel that there is a point here to be made. The situation 
for "last call" optimisation is a bit more general than 
simply that case of temporal-catenation of commands. It 
has much more to do with the logical relation between the
arguments to a function.

Regards,

Bruce.