Superclass inference (was: termination for FDs and ATs)

Simon Peyton-Jones simonpj at
Fri May 5 04:25:10 EDT 2006

| Superclass implication is reversed when performing type inference.
| In the same way that instance reduction is reversed.
| Here's the example again.
|  class C a
|  class F a where
|    type T a
|  instance F [a] where
|    type T [a] = [[[a]]]
|  class C (T a) => D a
|          ^^^^^
|     type function appears in superclass context
|  instance D [a] => C [[a]] -- satisfies Ross Paterson's Termination
| Consider
|                   D [a]
| -->_superclass    C (T [a]), D [a]
| -->_type function C [[[a]]], D [a]
| -->_instance      D [[a]], D [a]
| and so on
| Of course, you'll argue that you apply some other inference methods.

Probably!  Let me explain what GHC does today, and see if that helps.
GHC tries to solve the following problem:

	Input: a set of "given" constraints G
		a set of "wanted" constraints W

	Output: evidence that W can be deduced from G + instance decls

During solving, GHC maintains two sets: the "given" set GG and the
"wanted" set WW.

GG is maintained closed under superclasses; that is, when adding a
constraint (C t) to GG, I also add all of C's superclasses.  GG is
initialised from G, closing under superclasses.

WW is initialised to W, but is *not* closed under superclasses.

Then we repeatedly pick a member of WW
	a) if it's in GG, we are done
	b) otherwise use an instance decl to simplify it, 
		putting the new constraints back in WW
	c) if neither applies, error.
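The loop above can be sketched in a few lines of Haskell.  Everything
here is a toy model, not GHC's implementation: a constraint is just a
class name paired with a rendered type argument, and the superclass
and instance tables are hard-wired for a couple of familiar classes.

```haskell
import qualified Data.Set as Set

-- Toy constraint: a class name applied to a (rendered) type argument.
data Con = Con String String deriving (Eq, Ord, Show)

-- Hypothetical superclass table, e.g.  class Eq a => Ord a
superclasses :: Con -> [Con]
superclasses (Con "Ord" t) = [Con "Eq" t]
superclasses _             = []

-- Close a set of givens under superclasses (iterate to a fixpoint).
closeSuper :: Set.Set Con -> Set.Set Con
closeSuper gg
  | gg' == gg = gg
  | otherwise = closeSuper gg'
  where
    gg' = Set.union gg
            (Set.fromList (concatMap superclasses (Set.toList gg)))

-- Hypothetical instance table, e.g.  instance Eq a => Eq [a]
byInstance :: Con -> Maybe [Con]
byInstance (Con "Eq" s)
  | not (null s), head s == '[', last s == ']'
  = Just [Con "Eq" (init (tail s))]          -- instance Eq a => Eq [a]
byInstance (Con "Eq" "Int") = Just []        -- instance Eq Int
byInstance _                = Nothing

-- The checking loop: (a) the wanted is among the closed givens;
-- (b) simplify it via an instance decl; (c) otherwise report it.
solve :: Set.Set Con -> [Con] -> Either Con ()
solve _  []     = Right ()
solve gg (w:ws)
  | w `Set.member` gg        = solve gg ws           -- case (a)
  | Just ws' <- byInstance w = solve gg (ws' ++ ws)  -- case (b)
  | otherwise                = Left w                -- case (c)
```

For instance, with given {Ord a} closed under superclasses, the
wanted (Eq [a]) simplifies to (Eq a), which is discharged by the
superclass (Eq a) of (Ord a).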

In the pure inference case, the algorithm is slightly different:
	Input: a set of "wanted" constraints W
	Output: a minimal set of "given" constraints G,
		  plus evidence for how to deduce W from G

GG starts empty.  The algorithm is the same as before, but in case (c)
we add the constraint to GG (plus its superclasses).  At the end,
extract G from GG, keeping everything that was not added as a
superclass-closure extra.
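The inference variant can be sketched the same way.  Again this is a
toy model (hard-wired, hypothetical superclass and instance tables
for Eq/Ord), not GHC's code; the interesting line is case (c), where
the unsolved wanted is recorded in the inferred context G and added
to GG along with its superclasses.

```haskell
import qualified Data.Set as Set

-- Toy constraint: a class name applied to a (rendered) type argument.
data Con = Con String String deriving (Eq, Ord, Show)

-- Hypothetical superclass table:  class Eq a => Ord a
superclasses :: Con -> [Con]
superclasses (Con "Ord" t) = [Con "Eq" t]
superclasses _             = []

-- Hypothetical instance table:  instance Eq a => Eq [a]
byInstance :: Con -> Maybe [Con]
byInstance (Con "Eq" s)
  | not (null s), head s == '[', last s == ']'
  = Just [Con "Eq" (init (tail s))]
byInstance _ = Nothing

-- Inference: GG starts empty; case (c) adds the wanted to GG (with
-- its superclasses) and records it in G.  The superclass extras go
-- into GG only, so they never show up in the result.
infer :: [Con] -> [Con]
infer = go Set.empty []
  where
    go _  g []     = reverse g
    go gg g (w:ws)
      | w `Set.member` gg        = go gg g ws
      | Just ws' <- byInstance w = go gg g (ws' ++ ws)
      | otherwise                = go (Set.union (Set.insert w gg)
                                        (Set.fromList (superclasses w)))
                                      (w : g) ws
```

So inferring from the wanteds [Ord a, Eq [a]] yields just the context
[Ord a]: the wanted (Eq [a]) reduces to (Eq a), which is a
superclass-closure extra of (Ord a) and hence is discharged without
appearing in the inferred G.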


The point of explaining this is to say that in your example, (D [a]) is
part of W, so GHC won't add D [a]'s superclasses in the first place, and
the problem you describe doesn't arise.  I don't know whether that is
luck, good judgement, or perhaps wrong.  


In the presence of ATs, I think that when adding a member to GG, or to
WW, we need to normalise the types wrt AT rewrites (a confluent
terminating process) first.  But that should do it.  (Of course, I have
proved nothing.)
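As an illustration (not GHC code) of that normalisation step, here is
the example's single rewrite rule, type T [a] = [[[a]]], applied
bottom-up over a toy type language:

```haskell
-- Toy type language: variables, list types, and the type function T
-- from the example above.
data Ty = Var String | List Ty | T Ty deriving (Eq, Show)

-- Normalise wrt the single AT rewrite  type T [a] = [[[a]]],
-- applied bottom-up; for a confluent, terminating rewrite system
-- this reaches a unique normal form.
normalise :: Ty -> Ty
normalise (Var v)  = Var v
normalise (List t) = List (normalise t)
normalise (T t)    =
  case normalise t of
    List u -> List (List (List u))  -- T [u]  -->  [[[u]]]
    u      -> T u                   -- stuck: argument not a list type
```

So (T [a]) normalises to [[[a]]], and nested applications such as
(T (T [a])) are fully normalised before being compared with members
of GG or WW.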


More information about the Haskell-prime mailing list