Monomorphism, monomorphism...

Hannah Schroeter uk1o@rz.uni-karlsruhe.de
Mon, 8 Oct 2001 11:35:48 +0200


Hello!

On Sun, Oct 07, 2001 at 11:29:09AM +0000, Marcin 'Qrczak' Kowalczyk wrote:
> [...]

> >    Shouldn't the compiler be able to limit the recomputations to
> >    one per instance of the class?

> It would require very unusual implementation. Currently the compiler
> doesn't need to build dictionaries indexed by types at runtime and
> even doesn't need to recognize which types are "the same" at runtime.

Now, with the typical dictionary implementation of type classes,
this wouldn't really be too difficult.

foo :: Num a => a
foo = 2*3*5*7
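
For concreteness, by "the typical dictionary implementation" I mean the
usual translation where a Num constraint becomes an explicit record of
the class methods, and each method becomes a selector that takes that
record as its first argument. A rough sketch with made-up names -- not
what any particular compiler literally generates:

data NumDict a = NumDict
	{ numMul         :: a -> a -> a
	, numFromInteger :: Integer -> a
	-- , ... the remaining Num methods
	}

fooDesugared :: NumDict a -> a
fooDesugared d =
	numMul d (numMul d (numFromInteger d 2) (numFromInteger d 3))
	         (numMul d (numFromInteger d 5) (numFromInteger d 7))

Below, "(*) dict" and "fromInteger dict" are meant in exactly this
selector-applied-to-dictionary sense.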

foo would then be backed by a hash map whose key is the dictionary
argument and whose value is the deferred (call-by-need) computation of
2*3*5*7 for that particular Num instance. It could be transformed to
something like

createHashMap :: (Hashable key) => value -> IO (HashMap key value)
createHashMap defaultValue = ...

gethash :: HashMap key value -> key -> value

puthash :: HashMap key value -> key -> value -> IO ()

foo_internal :: HashMap ...
foo_internal = unsafePerformIO (createHashMap foo_impl)

foo :: NumDict a -> a
foo dict = (gethash foo_internal dict) dict

-- foo_impl is called at most once per dict: it overwrites its entry
-- in foo_internal with a slot that simply returns the lazily computed
-- value for that particular instance, and then returns that value.
foo_impl :: NumDict a -> a
foo_impl dict =
	let
		value = (*) dict (fromInteger dict 2) ...
		hashSlot _ = value
	in unsafePerformIO $ do
		puthash foo_internal dict hashSlot
		return value
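
(If one actually wrote it this way in GHC, foo_internal would also need
to be protected from inlining -- the usual caveat for global state
created with unsafePerformIO -- i.e. something like

{-# NOINLINE foo_internal #-}

so that the table really is created only once.)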

Yes, it adds some overhead, but with a fast hash map that shouldn't be
infeasible. Other programming languages do a hash lookup for every
function call and recover performance through dynamic code generation
or other means.

And "we" could still specialize cases where foo is applied in
known type contexts (e.g. when the context of one particular usage
definitely suggests "Integer", the compiler could even statically
evaluate the whole of foo).
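
(For what it's worth, GHC already lets one request such specialization
by hand; if I remember the pragma syntax correctly, something like

{-# SPECIALIZE foo :: Integer #-}

asks the compiler to generate and use a monomorphic copy of foo at
type Integer.)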

> >    I guess I'm biased here by my knowledge of templates in C++,
> >    which can be used to implement something very similar to type
> >    classes in Haskell.

> C++ templates require having source available in each place a template
> is used.

No. The standard specifies "export"ed templates; it's just that almost
no implementation supports that part of the ISO standard.

With exported templates, the headers contain only the declarations,
marked appropriately with the keyword "export"; the definitions live
in .cc files, just as for non-template functions.

> I agree that it would be useful for Haskell compilers to
> compile some instances that way when optimization is turned on,
> but the traditional way to implement classes uses a single compiled
> version which is passed various dictionaries at runtime.

Isn't there some experimental thing that implements type classes
by complete specialization of all code that uses polymorphic
values?

> It works because Haskell's rules of overloading are defined more
> "semantically". C++ templates are closer to syntactic macros.

Yep. C++ templates are a castrated version of Lisp macros :-)
But at least C++ templates have pattern matching ([partial]
specialization!) and are Turing complete, albeit with a rather
strange mode of expression.

> The C++ way wouldn't work for Haskell in general because sometimes
> the depth of instances created is not bounded statically,

Can you name such a case in standard H'98?
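
(The only candidate I can think of would be polymorphic recursion over
a nested datatype, which is legal H'98 once the type signature is
given, e.g.

data Nested a = Leaf a | Layer (Nested [a])

render :: Show a => Nested a -> String
render (Leaf x)  = show x
render (Layer n) = render n   -- recursive call needs Show [a]

where the instances needed -- Show a, Show [a], Show [[a]], ... --
depend on how deeply the runtime value happens to be nested.)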

> unlike C++
> templates which are instantiated at compile time. And it doesn't
> work with extensions like existential quantification, where types
> which differ in instance dictionaries can be packed into values of
> a single type, so the instance to use *must* be chosen dynamically
> (data flow can't be statically determined in general).

Right. A collection of existentially typed values is similar to what
you get in a typical OO language with abstract class Base, class
Derived1 : Base, class Derived2 : Base, and a Collection[Base] holding
values of the different derived classes.
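
(In Haskell-with-extensions terms, the analogue would be something like

data Showable = forall a. Show a => MkShowable a

stuff :: [Showable]
stuff = [MkShowable (42 :: Int), MkShowable "hello", MkShowable True]

display :: Showable -> String
display (MkShowable x) = show x  -- the Show dictionary comes out of the value

using the forall-in-data-declarations extension that GHC and Hugs
provide; which show gets called is decided per element at run time.)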

> It can be only an optimization (and perhaps it's sometimes more code
> bloat than optimization), it doesn't fit as the primary model IMHO.

Seems so, at least when language extensions come into play.

> [...]

Kind regards,

Hannah.