The madness of implicit parameters: cured?

Wolfgang Lux wlux@uni-muenster.de
Mon, 4 Aug 2003 12:10:46 +0200


Ben Rudiak-Gould wrote:

> [...]
> The final straw was:
>
>     Prelude> let ?x = 1 in let g = ?x in let ?x = 2 in g
>     1
>     Prelude> let ?x = 1 in let g () = ?x in let ?x = 2 in g ()
>     2
>
> This is insanity. I can't possibly use a language feature which
> behaves in such a non-orthogonal way.

Well, this is not insanity (only a little bit). In the first example you
define a *value* g, i.e., g is bound to the value of ?x in its current
environment (though this value is not yet evaluated due to lazy
evaluation), whereas in the second example you define a function. The
real insanity here is that Haskell -- in contrast to Clean -- offers no
way to distinguish function bindings from value bindings, and therefore
you cannot define nullary functions (except by some judicious use of
type signatures). This is the heart of the monomorphism restriction
mentioned by somebody else on this list (and discussed here regularly
:-).
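
For the record, the escape hatch via type signatures looks roughly like
this in GHC (just a sketch, assuming the ImplicitParams extension; the
name viaSignature is mine, only for illustration):

   {-# LANGUAGE ImplicitParams #-}

   -- Giving g an explicit signature that mentions ?x keeps the binding
   -- overloaded, so the implicit parameter is looked up where g is
   -- *used*, not where it is defined -- just like the nullary function
   -- g () in your second example.
   viaSignature :: Int
   viaSignature =
     let ?x = (1 :: Int)
     in let g :: (?x :: Int) => Int
            g = ?x
        in let ?x = 2 in g    -- evaluates to 2, not 1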

> Now the interesting part: I think I've managed to fix these problems.
> I'm afraid that my solution will turn out to be just as unimplementable
> as my original file I/O proposal, and that's very likely in this case
> since I'm far from grokking Haskell's type system. So I'm going to
> present my idea and let the gurus on this list tell me where I went
> wrong. Here we go.
>
> [...]
>
> Now introduce the idea of "explicit named parameters" to Haskell. This
> requires three extensions: a new kind of abstraction, a new kind of
> application, and a way of representing the resulting types.

This looks quite similar to the labeled parameters in Objective Caml.
However, Objective Caml's solution seems to be more general. For
instance, you can pass labeled parameters in arbitrary order and you
can have default values for optional arguments.

> [...]
>
> Why are the semantics so much clearer? I think the fundamental problem
> with the existing semantics is the presence of an implicit parameter
> environment, from which values are scooped and plugged into functions
> at hard-to-predict times.

If you keep the distinction between values and functions in mind, I do
not think that it is hard to predict when an implicit parameter is
substituted (granting the problem, inherent in every kind of dynamic
scoping, that it is hard to predict which value gets substituted :-).

> By substituting a notation which clearly means "I
> want this implicit parameter of this function bound to this value right
> now, and if you can't do it I want a static type error", we avoid this
> ambiguity.

IMHO, this problem would be solved much more easily by introducing a new
syntax to distinguish value bindings from (nullary) function bindings,
as has already been asked for repeatedly on this list in the context of
the monomorphism restriction. Personally, I'd suggest using
let x <- e in ... to introduce a value binding (as it is quite similar
to the bindings introduced in a do-statement) and let x = e to introduce
a nullary function. (I prefer <- over the := that John Hughes and others
suggested some time ago because we don't lose an operator name.) Thus,
your example

   let ?x = 1 in let g = ?x in let ?x = 2 in g

will behave as you expected, viz. evaluate to 2, whereas

   let ?x = 1 in let g <- ?x in let ?x = 2 in g

will return 1.
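
If it helps, here is how the two readings map onto what GHC accepts
today (again just a sketch, assuming the ImplicitParams extension and
the monomorphism restriction in effect, as it is by default in compiled
modules; the names captured and resolvedLate are mine):

   {-# LANGUAGE ImplicitParams #-}

   -- Corresponds to the proposed  let g <- ?x : a monomorphic value
   -- binding fixes ?x where g is defined.
   captured :: Int
   captured =
     let ?x = (1 :: Int)
     in let g = ?x
        in let ?x = (2 :: Int) in g       -- 1

   -- Corresponds to the proposed  let g = ?x : a real (here unary)
   -- function resolves ?x at each call site.
   resolvedLate :: Int
   resolvedLate =
     let ?x = (1 :: Int)
     in let g () = ?x
        in let ?x = (2 :: Int) in g ()    -- 2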

Regards
Wolfgang