<blockquote>Classical first-order logic has the pleasant property that a formula can be reduced to an elegant ''implicative normal form'' through a series of syntactic simplifications. Using these transformations as a vehicle, this article demonstrates how to use Haskell's type system, specifically a variation on Swierstra's "Data Types à la Carte" method, to enforce the structural correctness property that these constructors are, in fact, eliminated by each stage of the transformation.
</blockquote>
<blockquote><haskell class="haskell">{-# LANGUAGE RankNTypes,TypeOperators,PatternSignatures #-}
{-# LANGUAGE UndecidableInstances,IncoherentInstances #-}
{-# LANGUAGE MultiParamTypeClasses,TypeSynonymInstances #-}
{-# LANGUAGE FlexibleContexts,FlexibleInstances #-}

import Text.PrettyPrint.HughesPJ
import Control.Monad.State
import Prelude hiding (or,and,not)</haskell>
{LANGUAGE pragma and module imports}
</blockquote>
 
== First-Order Logic ==

Consider the optimistic statement "Every person has a heart." If we were asked to write this formally in a logic or philosophy class, we might write the following formula of classical first-order logic:
   
<math>\forall p.\, Person(p) \Rightarrow \exists h.\, Heart(h) \wedge Has(p,h) </math>
   
If asked to write the same property for testing by QuickCheck {quickcheck}, we might instead produce this code:
   
<haskell class="haskell">heartFact :: Person -> Bool
heartFact p = has p (heart p)
  where heart :: Person -> Heart
        ...</haskell>
These look rather different. Ignoring how some of the predicates moved into our types, there are two other transformations involved. First, the universally quantified <math>p</math> has been made a parameter, essentially making it a free variable of the formula. Second, the existentially quantified <math>h</math> has been replaced by a function |heart| that, for any person, returns their heart. How did we know to encode things this way? We have performed first-order quantifier elimination in our heads!
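
For concreteness, here is one way the property can be made self-contained and runnable; the |Person|, |Heart|, |has|, and |heart| definitions below are hypothetical stand-ins invented for illustration, not part of this article's development.

<haskell class="haskell">import Test.QuickCheck

-- Hypothetical toy domain, just so the property compiles and runs.
newtype Heart  = Heart  Int deriving Show
newtype Person = Person Int deriving Show

instance Arbitrary Person where
  arbitrary = fmap Person arbitrary

-- In this toy model every person trivially has their own heart.
heart :: Person -> Heart
heart (Person n) = Heart n

has :: Person -> Heart -> Bool
has (Person n) (Heart m) = n == m

heartFact :: Person -> Bool
heartFact p = has p (heart p)

main :: IO ()
main = quickCheck heartFact   -- should report that the property passed
</haskell>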
   
This idea has an elegant instantiation for classical first-order logic which (along with some other simple transformations) yields a useful normal form for any formula. This article is a tour of the normalization process, implemented in Haskell, using a number of Haskell programming tricks. We will begin with just a couple of formal definitions, but quickly move on to "all code, all the time."
   
First, we need the primitive set of terms <math>t</math>, which are either variables <math>x</math> or function symbols <math>f</math> applied to a list of terms (constants are functions of zero arguments).

<math>t ::= x ~\mid~ f(t_1, \cdots, t_n)</math>
 
 
Next, we add atomic predicates <math>P</math> over terms, and the logical constructions to combine atomic predicates. Since we are talking about classical logic, many constructs have duals, so they are presented side-by-side.

<math>\begin{array}{rclcl}
\phi & ::= & P(t_1, \cdots, t_n) \\
     & \mid & \neg\phi \\
     & \mid & \phi_1 \Rightarrow \phi_2 \\
     & \mid & \top & \mid & \bot \\
     & \mid & \phi_1 \vee \phi_2 & \mid & \phi_1 \wedge \phi_2 \\
     & \mid & \exists x.\, \phi & \mid & \forall x.\, \phi
\end{array}</math>
 
<math>implicative~normal~f\!orm ~ ::= ~ \bigwedge \left[\bigwedge t^* \Rightarrow \bigvee t^*\right]^* </math>
   
The normal form may be very large compared to the input formula, but it is convenient for many purposes, such as using Prolog's resolution procedure or an SMT (Satisfiability Modulo Theories) solver. The following process for normalizing a formula is described by Russell and Norvig {Russell2003} in seven steps:
 
# Eliminate implications.
# Move negations inwards.
# Standardize variable names.
# Eliminate existential quantification, reaching Skolem normal form.
# Eliminate universal quantification, reaching prenex normal form.
# Distribute boolean connectives, reaching conjunctive normal form.
# Gather negated atoms, reaching implicative normal form.
 
Keeping in mind the pattern of systematically simplifying the syntax of a formula, let us consider some Haskell data structures for representing first-order logic.
 
Experienced Haskellers may translate the above definitions into the following Haskell data types immediately upon reading them:
   
<haskell class="haskell">data Term = Const String [Term]
          | Var String</haskell>
We will reuse the constructor names from |FOL| later, though, so this is not part of the code for the demonstration.
   
<haskell class="haskell">data FOL = Impl FOL FOL
         | Atom String [Term] | Not FOL
         | TT | FF
         | Or FOL FOL | And FOL FOL
         | Exists String FOL | Forall String FOL</haskell>
To make things more interesting, let us work with the formula representing the statement "If there is a person that eats every food, then there is no food that no one eats."
   
 
<math>\begin{array}{ll}
& (\exists p.\, Person(p) \wedge \forall f.\, Food(f) \Rightarrow Eats(p, f)) \\
\Rightarrow & \neg (\exists f.\, Food(f) \wedge \neg (\exists p.\, Person(p) \wedge Eats(p, f)))
\end{array}</math>
   
<haskell class="haskell">foodFact =
  (Impl
     (Exists "p"
        (And (Atom "Person" [Var "p"])
             (Forall "f"
                (Impl (Atom "Food" [Var "f"])
                      (Atom "Eats" [Var "p", Var "f"])))))
     (Not (Exists "f"
             (And (Atom "Food" [Var "f"])
                  (Not (Exists "p"
                          (And (Atom "Person" [Var "p"])
                               (Atom "Eats" [Var "p", Var "f"]))))))))</haskell>
 
=== Higher-Order Abstract Syntax ===
   
While the above encoding is natural to write down, it has drawbacks for actual work. The first thing to notice is that we are using the |String| type to represent variables, and may have to carefully manage scoping. But what do variables range over? Terms. And Haskell already has variables that range over the data type |Term|, so we can re-use Haskell's implementation; this technique is known as higher-order abstract syntax (HOAS).
   
<haskell class="haskell">data FOL = Impl FOL FOL
         | Atom String [Term] | Not FOL
         | TT | FF
         | Or FOL FOL | And FOL FOL
         | Exists (Term -> FOL) | Forall (Term -> FOL)</haskell>
In a HOAS encoding, the binders of the object language (the quantifiers of first-order logic) are implemented using the binders of the metalanguage (Haskell). For example, where in the previous encoding we would represent <math>\exists x.\, P(x)</math> as |Exists "x" (Const "P" [Var "x"])|, we now represent it with |Exists (\x -> Const "P" [x])|. And our example becomes:
   
<haskell class="haskell">foodFact =
  (Impl
     (Exists $ \p ->
        (And (Atom "Person" [p])
             (Forall $ \f ->
                (Impl (Atom "Food" [f])
                      (Atom "Eats" [p, f])))))
     (Not (Exists $ \f ->
             (And (Atom "Food" [f])
                  (Not (Exists $ \p ->
                          (And (Atom "Person" [p])
                               (Atom "Eats" [p, f]))))))))</haskell>
Since the variables |p| and |f| have taken the place of the |String| variable names, Haskell's binding structure now ensures that we cannot construct a first-order logic formula with unbound variables, unless we use the |Var| constructor, which is still present because we will need it later. Another important benefit is that the type now expresses that the variables range over the |Term| data type, while before it was up to us to properly interpret the |String| variable names.
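
As a small illustration of that claim (hypothetical names, not from the article), the metalanguage's scoping rules do the bookkeeping for us:

<haskell class="haskell">-- Well-scoped: x and y are ordinary Haskell variables ranging over Term.
scoped :: FOL
scoped = Forall (\x -> Exists (\y -> Atom "R" [x, y]))

-- By contrast, something like
--   illScoped = Exists (\y -> Atom "R" [x, y])
-- is rejected by GHC, because x is simply not in scope.
</haskell>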
   
<blockquote>Modify the code of this article so that the |Var| constructor is not introduced until it is required in stage 5.
</blockquote>
 
=== Data Types à la Carte ===
   
But even using this improved encoding, all our transformations will be of type |FOL -> FOL|. Because this type does not express the structure of the computation very precisely, we must rely on human inspection to ensure that each stage is written correctly. More importantly, we are not making manifest the requirement of certain stages that the prior stages' transformations have been performed. For example, our elimination of universal quantification is only a correct transformation when existentials have already been eliminated. A good goal for refining our type structure is to describe our data with types that reflect which connectives may be present.
   
Swierstra proposes a technique {dtalc} whereby a variant data type is built up by mixing and matching constructors of different functors using their ''coproduct'' |(:+:)|, which is the "smallest" functor containing both of its arguments.
   
<haskell class="haskell">data (f :+: g) a = Inl (f a) | Inr (g a)
infixr 6 :+:

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap f (Inl x) = Inl (fmap f x)
  fmap f (Inr x) = Inr (fmap f x)</haskell>
The |:+:| constructor is like |Either| but it operates on functors. This difference is crucial – if |f| and |g| represent two constructors that we wish to combine into a larger ''recursive'' data type, then the type parameter |a| represents the type of their subformulae.
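
As a quick illustration (using the standard |Maybe| and list functors, which play no role in the rest of the article), a value of a coproduct is a value of either summand, tagged with |Inl| or |Inr|:

<haskell class="haskell">exampleL, exampleR :: (Maybe :+: []) Int
exampleL = Inl (Just 3)
exampleR = Inr [1, 2, 3]

-- fmap acts on whichever side is present:
--   fmap (+1) exampleL  ==>  Inl (Just 4)
--   fmap (+1) exampleR  ==>  Inr [2, 3, 4]
</haskell>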
   
To work conveniently with coproducts, we define a type class |:<:| that implements subtyping by explicitly providing an injection from one of the constructors to the larger coproduct data type. There are some technical aspects to making sure current Haskell implementations can figure out the needed instances of |:<:|, but in this example we need only Swierstra's original subsumption instances, found in Figure fig:Subsumption. For your own use of the technique, discussion on Phil Wadler's blog {wadler-dtalc} and the Haskell-Cafe mailing list {haskell-cafe-dtalc} may be helpful.
   
<blockquote><haskell class="haskell">class (Functor sub, Functor sup) => sub :<: sup where
  inj :: sub a -> sup a

instance Functor f => (:<:) f f where
  inj = id

instance (Functor f, Functor g) => (:<:) f (f :+: g) where
  inj = Inl

instance (Functor f, Functor g, Functor h, (f :<: g))
  => (:<:) f (h :+: g) where
  inj = Inr . inj</haskell>
{Subsumption instances} (fig:Subsumption)
</blockquote>
 
If the above seems a bit abstract or confusing, it will become very clear when we put it into practice. Let us immediately do so by encoding the constructors of first-order logic in this modular fashion.
   
<haskell class="haskell">data TT a = TT
data FF a = FF
data Atom a = Atom String [Term]
data Not a = Not a
data Or a = Or a a
data And a = And a a
data Impl a = Impl a a
data Exists a = Exists (Term -> a)
data Forall a = Forall (Term -> a)</haskell>
Each constructor is parameterized by a type |a| of subformulae; |TT|, |FF|, and |Atom| do not have any subformulae so they ignore their parameter. Logical operations such as |And| have two subformulae. Correspondingly, the |And| constructor takes two arguments of type |a|.
   
The compound functor |Input| is now the specification of which constructors may appear in a first-order logic formula.
   
<haskell class="haskell">type Input = TT :+: FF :+: Atom
         :+: Not :+: Or :+: And :+: Impl
         :+: Exists :+: Forall</haskell>

The final step is to "tie the knot" with the following |Formula| data type, which generates a recursive formula over whatever constructors are present in its functor argument |f|.
   
<haskell class="haskell">data Formula f = In { out :: f (Formula f) }</haskell>
If you have not seen this trick before, that definition may be hard to read and understand. Consider the types of |In| and |out|.
   
<haskell class="haskell">In  :: f (Formula f) -> Formula f
out :: Formula f -> f (Formula f)</haskell>
Observe that |In . out == out . In == id|. This pair of inverses allows us to "roll" and "unroll" one layer of a formula in order to operate on the outermost constructor. Haskell does this same thing when you pattern-match against "normal" recursive data types. Like Haskell, we want to hide this rolling and unrolling. To hide the rolling, we define some helper constructors, found in Figure fig:FOLboilerplate, that inject a constructor into an arbitrary supertype, and then apply |In| to yield a |Formula|.
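
For instance (a hypothetical helper, not used later), unrolling one layer with |out| is all it takes to inspect the outermost constructor of a |Formula Input|; the |TT| constructor lives in the leftmost summand of the coproduct, so it shows up as |Inl TT|:

<haskell class="haskell">-- Is the outermost constructor of this formula TT?
isTop :: Formula Input -> Bool
isTop phi = case out phi of
              Inl TT -> True
              _      -> False
</haskell>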
   
To hide the unrolling, we use the fact that a fixpoint of a functor comes with a fold operation, or ''catamorphism'', which we will use to traverse a formula's syntax. The function |foldFormula| takes as a parameter an ''algebra'' of the functor |f|. Intuitively, |algebra| tells us how to fold "one layer" of a formula, assuming all subformulae have already been processed. The fixpoint then provides the recursive structure of the computation once and for all.
   
<haskell class="haskell">foldFormula :: Functor f => (f a -> a) -> Formula f -> a
foldFormula algebra = algebra . fmap (foldFormula algebra) . out</haskell>
We are already reaping some of the benefit of our "à la carte" technique: The boilerplate |Functor| instances in Figure fig:FOLboilerplate are not much larger than the code of |foldFormula| would have been, and they are defined modularly! Unlike a monolithic |foldFormula| implementation, this one function will work no matter which constructors are present. If the definition of |foldFormula| is unfamiliar, it is worth imagining a |Formula f| flowing through the three stages: First, |out| unrolls the formula one layer, then |fmap| recursively folds over all the subformulae. Finally, the results of the recursion are combined by |algebra|.
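
To make the flow concrete, here is a small algebra (a hypothetical example, assuming the |Functor| instances and smart constructors from Figure fig:FOLboilerplate below) that counts the atomic predicates in a formula; quantifier bodies are entered by applying them to a dummy term:

<haskell class="haskell">class Functor f => CountAtoms f where
  countAlg :: f Int -> Int

instance CountAtoms TT     where countAlg TT            = 0
instance CountAtoms FF     where countAlg FF            = 0
instance CountAtoms Atom   where countAlg (Atom _ _)    = 1
instance CountAtoms Not    where countAlg (Not n)       = n
instance CountAtoms Or     where countAlg (Or n1 n2)    = n1 + n2
instance CountAtoms And    where countAlg (And n1 n2)   = n1 + n2
instance CountAtoms Impl   where countAlg (Impl n1 n2)  = n1 + n2
instance CountAtoms Exists where countAlg (Exists body) = body (Var "_")
instance CountAtoms Forall where countAlg (Forall body) = body (Var "_")

instance (CountAtoms f, CountAtoms g) => CountAtoms (f :+: g) where
  countAlg (Inl phi) = countAlg phi
  countAlg (Inr phi) = countAlg phi

countAtoms :: Formula Input -> Int
countAtoms = foldFormula countAlg
</haskell>

For the |foodFact| formula defined shortly, |countAtoms foodFact| should evaluate to 6.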
   
<blockquote><haskell class="haskell">instance Functor TT where fmap _ _ = TT
instance Functor FF where fmap _ _ = FF
instance Functor Atom where fmap _ (Atom p ts) = Atom p ts
instance Functor Not where fmap f (Not phi) = Not (f phi)
instance Functor Or where fmap f (Or phi1 phi2) = Or (f phi1) (f phi2)
instance Functor And where fmap f (And phi1 phi2) = And (f phi1) (f phi2)
instance Functor Impl where fmap f (Impl phi1 phi2) = Impl (f phi1) (f phi2)
instance Functor Forall where fmap f (Forall phi) = Forall (f . phi)
instance Functor Exists where fmap f (Exists phi) = Exists (f . phi)

inject :: (g :<: f) => g (Formula f) -> Formula f
inject = In . inj

tt :: (TT :<: f) => Formula f
tt = inject TT

ff :: (FF :<: f) => Formula f
ff = inject FF

atom :: (Atom :<: f) => String -> [Term] -> Formula f
atom p ts = inject (Atom p ts)

not :: (Not :<: f) => Formula f -> Formula f
not = inject . Not

or :: (Or :<: f) => Formula f -> Formula f -> Formula f
or phi1 phi2 = inject (Or phi1 phi2)

and :: (And :<: f) => Formula f -> Formula f -> Formula f
and phi1 phi2 = inject (And phi1 phi2)

impl :: (Impl :<: f) => Formula f -> Formula f -> Formula f
impl phi1 phi2 = inject (Impl phi1 phi2)

forall :: (Forall :<: f) => (Term -> Formula f) -> Formula f
forall = inject . Forall

exists :: (Exists :<: f) => (Term -> Formula f) -> Formula f
exists = inject . Exists</haskell>
{Boilerplate for First-Order Logic Constructors} (fig:FOLboilerplate)
</blockquote>
 
Here is what our running example looks like with this encoding:
   
<haskell class="haskell">foodFact :: Formula Input
foodFact = (exists $ \p -> atom "Person" [p]
              `and` (forall $ \f -> atom "Food" [f]
                        `impl` atom "Eats" [p,f]))
           `impl`
           (not (exists $ \f -> atom "Food" [f]
                    `and` (not (exists $ \p -> atom "Person" [p]
                                   `and` atom "Eats" [p,f]))))</haskell>
A TeX pretty-printer is included as an appendix to this article. To make things readable, though, I'll doctor its output into a nice table, and remove extraneous parentheses. But I won't rewrite the variable names, since variables and binding are a key aspect of managing formulae. By convention, the printer uses <math>c</math> for existentially quantified variables and <math>x</math> for universally quantified variables.
   
<haskell>*Main> texprint foodFact</haskell>
 
<math>\begin{array}{ll}
& (\exists c_1.\, Person(c_1) \wedge \forall x_2.\, Food(x_2) \Rightarrow Eats(c_1, x_2)) \\
\Rightarrow & \neg (\exists c_3.\, Food(c_3) \wedge \neg (\exists c_4.\, Person(c_4) \wedge Eats(c_4, c_3)))
\end{array}</math>
   
== Stage 1 – Eliminate Implications ==
   
The first transformation is an easy one, in which we "desugar" <math>\phi_1 \Rightarrow \phi_2</math> into <math>\neg \phi_1 \vee \phi_2</math>. The high-level overview is given by the type and body of |elimImp|.
   
<haskell class="haskell">type Stage1 = TT :+: FF :+: Atom :+: Not :+: Or :+: And :+: Exists :+: Forall

elimImp :: Formula Input -> Formula Stage1
elimImp = foldFormula elimImpAlg</haskell>
We take a formula containing all the constructors of first-order logic, and return a formula built without use of |Impl|. The way that |elimImp| does this is by folding the algebras |elimImpAlg| for each constructor over the recursive structure of a formula.
   
The function |elimImpAlg| we provide by making each constructor an instance of the |ElimImp| type class. This class specifies for a given constructor how to eliminate implications – for most constructors this is just the identity function, though we must rebuild an identical term to alter its type. Perhaps there is a way to use generic programming to eliminate the uninteresting cases.
   
<haskell class="haskell">class Functor f => ElimImp f where
  elimImpAlg :: f (Formula Stage1) -> Formula Stage1

instance ElimImp Impl where elimImpAlg (Impl phi1 phi2) = (not phi1) `or` phi2
instance ElimImp TT where elimImpAlg TT = tt
instance ElimImp FF where elimImpAlg FF = ff
instance ElimImp Atom where elimImpAlg (Atom p ts) = atom p ts
instance ElimImp Not where elimImpAlg (Not phi) = not phi
instance ElimImp Or where elimImpAlg (Or phi1 phi2) = phi1 `or` phi2
instance ElimImp And where elimImpAlg (And phi1 phi2) = phi1 `and` phi2
instance ElimImp Exists where elimImpAlg (Exists phi) = exists phi
instance ElimImp Forall where elimImpAlg (Forall phi) = forall phi</haskell>
We extend |ElimImp| in the natural way over coproducts, so that whenever all our constructors are members of the type class, then their coproduct is as well.
   
<haskell class="haskell">instance (ElimImp f, ElimImp g) => ElimImp (f :+: g) where
  elimImpAlg (Inr phi) = elimImpAlg phi
  elimImpAlg (Inl phi) = elimImpAlg phi</haskell>
 
Our running example is now
   
<haskell>*Main> texprint . elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
& \neg (\exists c_1.\, Person(c_1) \wedge \forall x_2.\, \neg Food(x_2) \vee Eats(c_1, x_2)) \\
\vee & \neg (\exists c_3.\, Food(c_3) \wedge \neg (\exists c_4.\, Person(c_4) \wedge Eats(c_4, c_3)))
\end{array}</math>
   
<blockquote>Design a solution where only the |Impl| case of |elimImpAlg| needs to be written.
</blockquote>
== Stage 2 – Move Negation Inwards ==
   
Now that implications are gone, we are left with entirely symmetrical constructions, and can always push negations in or out using duality:

<math>\begin{array}{rcl}
\neg(\neg \phi) & \Leftrightarrow & \phi \\
\neg(\phi_1 \wedge \phi_2) & \Leftrightarrow & \neg\phi_1 \vee \neg\phi_2 \\
\neg(\phi_1 \vee \phi_2) & \Leftrightarrow & \neg\phi_1 \wedge \neg\phi_2 \\
\neg(\exists x.\, \phi) & \Leftrightarrow & \forall x.\, \neg\phi \\
\neg(\forall x.\, \phi) & \Leftrightarrow & \exists x.\, \neg\phi
\end{array}</math>
 
 
Our eventual goal is to move negation all the way inward so it is only applied to atomic predicates. To express this structure in our types, we define a new constructor for negated atomic predicates as well as the type for the output of Stage 2:
   
<haskell class="haskell">data NAtom a = NAtom String [Term]

instance Functor NAtom where fmap f (NAtom p ts) = NAtom p ts

natom :: (NAtom :<: f) => String -> [Term] -> Formula f
natom p ts = inject (NAtom p ts)

type Stage2 = TT :+: FF :+: Atom
          :+: NAtom
          :+: Or :+: And
          :+: Exists :+: Forall</haskell>
 
One could imagine implementing duality with a multi-parameter type class that records exactly the dual of each constructor, as
   
<haskell class="haskell">class (Functor f, Functor g) => Dual f g where
  dual :: f a -> g a</haskell>
 
Unfortunately, this leads to a situation where our subtyping must use the commutativity of coproducts, which it is incapable of doing as written. For this article, let us just define an algebra to dualize a whole formula at a time.
   
<haskell class="haskell">dualize :: Formula Stage2 -> Formula Stage2
dualize = foldFormula dualAlg

class Functor f => Dualize f where
  dualAlg :: f (Formula Stage2) -> Formula Stage2

instance Dualize TT where dualAlg TT = ff
instance Dualize FF where dualAlg FF = tt
instance Dualize Atom where dualAlg (Atom p ts) = natom p ts
instance Dualize NAtom where dualAlg (NAtom p ts) = atom p ts
instance Dualize Or where dualAlg (Or phi1 phi2) = phi1 `and` phi2
instance Dualize And where dualAlg (And phi1 phi2) = phi1 `or` phi2
instance Dualize Exists where dualAlg (Exists phi) = forall phi
instance Dualize Forall where dualAlg (Forall phi) = exists phi

instance (Dualize f, Dualize g) => Dualize (f :+: g) where
  dualAlg (Inl phi) = dualAlg phi
  dualAlg (Inr phi) = dualAlg phi</haskell>
Now perhaps the pattern of these transformations is becoming clear. It is remarkably painless, involving just a little type class syntax as overhead, to write these functor algebras. The definition of |pushNotInwards| is another straightforward fold, with a helper type class |PushNot|. I've separated the instance for |Not| since it is the only one that does anything.
   
<haskell class="haskell">class Functor f => PushNot f where
  pushNotAlg :: f (Formula Stage2) -> Formula Stage2

instance PushNot Not where pushNotAlg (Not phi) = dualize phi
instance PushNot TT where pushNotAlg TT = tt
instance PushNot FF where pushNotAlg FF = ff
instance PushNot Atom where pushNotAlg (Atom p ts) = atom p ts
instance PushNot Or where pushNotAlg (Or phi1 phi2) = phi1 `or` phi2
instance PushNot And where pushNotAlg (And phi1 phi2) = phi1 `and` phi2
instance PushNot Exists where pushNotAlg (Exists phi) = exists phi
instance PushNot Forall where pushNotAlg (Forall phi) = forall phi

instance (PushNot f, PushNot g) => PushNot (f :+: g) where
  pushNotAlg (Inr phi) = pushNotAlg phi
  pushNotAlg (Inl phi) = pushNotAlg phi</haskell>
All we have to do now is define a fold that calls |pushNotAlg|.
   
<haskell class="haskell">pushNotInwards :: Formula Stage1 -> Formula Stage2
pushNotInwards = foldFormula pushNotAlg</haskell>
 
Our running example now becomes:
   
<haskell>*Main> texprint . pushNotInwards . elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
& (\forall x_1.\, \neg Person(x_1) \vee \exists c_2.\, Food(c_2) \wedge \neg Eats(x_1, c_2)) \\
\vee & (\forall x_3.\, \neg Food(x_3) \vee \exists c_4.\, Person(c_4) \wedge Eats(c_4, x_3))
\end{array}</math>
   
<blockquote>Instead of the |NAtom| constructor, define the composition of two functors |f `O` g| and re-write |Stage2 = TT :+: FF :+: Atom :+: (Not `O` Atom) :+: Or :+: And :+: Exists :+: Forall|.
</blockquote>
<blockquote>Encode a form of subtyping that can reason using commutativity of coproducts, and rewrite the |Dualize| algebra using a two-parameter |Dual| type class as described above.
</blockquote>
== Stage 3 – Standardize variable names ==
   
To "standardize" variable names means to choose nonconflicting names for all the variables in a formula. Since we are using higher-order abstract syntax, Haskell is handling name conflicts for now. We can immediately jump to stage 4!
   
== Stage 4 – Skolemization ==
   
It is interesting to arrive at the definition of Skolemization via the Curry-Howard correspondence. You may be familiar with the idea that terms of type |a -> b| are proofs of the proposition "<math>a</math> implies <math>b</math>", assuming |a| and |b| are interpreted as propositions as well. This rests on a notion that a proof of |a -> b| must be some process that can take a proof of |a| and generate a proof of |b|, a very computational notion. Rephrasing the above, a function of type |a -> b| is a guarantee that ''for all'' elements of type |a|, ''there exists'' a corresponding element of type |b|. So a function type expresses an alternation of a universal quantifier with an existential. We will use this to replace all the existential quantifiers with freshly-generated functions. We can, of course, pass a unit type to a function, or a tuple of many arguments, to have as many universal quantifiers as we like.
   
 
Suppose we have <math>\forall x.\, \forall y.\, \exists z.\, P(x,y,z)</math>, then in general there may be many choices for <math>z</math>, given a particular <math>x</math> and <math>y</math>. By the axiom of choice, we can create a function <math>f</math> that associates each <math>\langle x,y \rangle</math> pair with a corresponding <math>z</math> arbitrarily, and then rewrite the above formula as <math>\forall x.\, \forall y.\, P(x, y, f(x,y))</math>. Technically, this formula is only equisatisfiable, but by convention I'm assuming constants to be existentially quantified.
   
 
So we need to traverse the syntax tree gathering free variables and replacing existentially quantified variables with functions of a fresh name. Since we are eliminating a binding construct, we now need to reason about fresh unique names.
   
Today's formulas are small, so let us use a naïve and wasteful splittable unique identifier supply. Our supply is an infinite binary tree, where moving left prepends a |0| to the bit representation of the current counter, while moving right prepends a |1|. Hence, the left and right subtrees are both infinite, nonoverlapping supplies of identifiers. The code for our unique identifier supplies is in Figure fig:unq.
   
Launchbury and Peyton-Jones {launchbury95state} have discussed how to use the |ST| monad to implement a much more sophisticated and space-efficient version of the same idea.
   
<blockquote><haskell class="haskell">type Unique = Int
data UniqueSupply = UniqueSupply Unique UniqueSupply UniqueSupply

initialUniqueSupply :: UniqueSupply
initialUniqueSupply = genSupply 1
  where genSupply n = UniqueSupply n (genSupply (2*n))
                                     (genSupply (2*n+1))

splitUniqueSupply :: UniqueSupply -> (UniqueSupply, UniqueSupply)
splitUniqueSupply (UniqueSupply _ l r) = (l,r)

getUnique :: UniqueSupply -> (Unique, UniqueSupply)
getUnique (UniqueSupply n l r) = (n,l)

type Supply a = State UniqueSupply a

fresh :: Supply Int
fresh = do supply <- get
           let (uniq,rest) = getUnique supply
           put rest
           return uniq

freshes :: Supply UniqueSupply
freshes = do supply <- get
             let (l,r) = splitUniqueSupply supply
             put r
             return l</haskell>
{Unique supplies} (fig:unq)
</blockquote>
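
To get a feel for how the supply threads through a computation, here is a tiny sketch (hypothetical, not part of the article): it draws two identifiers sequentially and then splits off an independent sub-supply.

<haskell class="haskell">demoSupply :: ((Int, Int), Int)
demoSupply = evalState go initialUniqueSupply
  where go = do a   <- fresh      -- a == 1
                b   <- fresh      -- b == 2
                sub <- freshes    -- an infinite supply disjoint from what remains
                let c = evalState fresh sub   -- c == 8, taken from the sub-supply
                return ((a, b), c)
</haskell>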
The helper algebra for Skolemization is more complex than before because a |Formula Stage2| is not directly transformed into a |Formula Stage4| but into a function from its free variables to a new formula. On top of that, the final computation takes place in the |Supply| monad because of the need to generate fresh names for Skolem functions. Correspondingly, we choose the return type of the algebra to be |[Term] -> Supply (Formula Stage4)|. Thankfully, many instances are just boilerplate.

<haskell class="haskell">type Stage4 = TT :+: FF :+: Atom :+: NAtom :+: Or :+: And :+: Forall

class Functor f => Skolem f where
  skolemAlg :: f ([Term] -> Supply (Formula Stage4))
            -> [Term] -> Supply (Formula Stage4)

instance Skolem TT where
  skolemAlg TT xs = return tt
instance Skolem FF where
  skolemAlg FF xs = return ff
instance Skolem Atom where
  skolemAlg (Atom p ts) xs = return (atom p ts)
instance Skolem NAtom where
  skolemAlg (NAtom p ts) xs = return (natom p ts)
instance Skolem Or where
  skolemAlg (Or phi1 phi2) xs = liftM2 or (phi1 xs) (phi2 xs)
instance Skolem And where
  skolemAlg (And phi1 phi2) xs = liftM2 and (phi1 xs) (phi2 xs)

instance (Skolem f, Skolem g) => Skolem (f :+: g) where
  skolemAlg (Inr phi) = skolemAlg phi
  skolemAlg (Inl phi) = skolemAlg phi</haskell>
In the case for a universal quantifier <math>\forall x.\, \phi</math>, any existentials contained within <math>\phi</math> are parameterized by the variable <math>x</math>, so we add <math>x</math> to the list of free variables and Skolemize the body <math>\phi</math>. Implementing this in Haskell, the algebra instance must be a function from |Forall (Term -> [Term] -> Supply (Formula Stage4))| to |[Term] -> Supply (Forall (Term -> Formula Stage4))|, which involves some juggling of the unique supply.
   
<haskell class="haskell">instance Skolem Forall where
  skolemAlg (Forall phi) xs =
    do supply <- freshes
       return (forall $ \x -> evalState (phi x (x:xs)) supply)</haskell>
From the recursive result |phi|, we need to construct a new body for the |forall| constructor that has a ''pure'' body: It must not run in the |Supply| monad. Yet the body must contain only names that do not conflict with those used in the rest of this fold. This is why we need a moderately complex |UniqueSupply| data structure: To break off a disjoint-yet-infinite supply for use by |evalState| in the body of a |forall|, restoring purity to the body by running the |Supply| computation to completion.
   
Finally, the key instance for existentials is actually quite simple – just generate a fresh name and apply the Skolem function to all the arguments |xs|. The application |phi (Const name xs)| is how we express replacement of the existentially bound term with |Const name xs| with higher-order abstract syntax.
   
<haskell class="haskell">instance Skolem Exists where
  skolemAlg (Exists phi) xs =
    do uniq <- fresh
       phi (Const ("Skol" ++ show uniq) xs) xs</haskell>
After folding the Skolemization algebra over a formula, we apply the result of folding |skolemAlg| to the empty list of free variables, since we are assuming the formula is closed. Then the resulting |Supply (Formula Stage4)| computation is run to completion starting with the |initialUniqueSupply|.
   
<haskell class="haskell">skolemize :: Formula Stage2 -> Formula Stage4
skolemize formula = evalState (foldResult []) initialUniqueSupply
  where foldResult :: [Term] -> Supply (Formula Stage4)
        foldResult = foldFormula skolemAlg formula</haskell>
 
And the output is starting to get interesting:
   
<haskell>*Main> texprint . skolemize . pushNotInwards . elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
& (\forall x_1.\, \neg Person(x_1) \vee Food(Skol_2(x_1)) \wedge \neg Eats(x_1, Skol_2(x_1))) \\
\vee & (\forall x_2.\, \neg Food(x_2) \vee Person(Skol_6(x_2)) \wedge Eats(Skol_6(x_2), x_2))
\end{array}</math>
   
In the first line, <math>Skol_2</math> maps a person to a food they don't eat. In the second line, <math>Skol_6</math> maps a food to a person who doesn't eat it.
   
<blockquote>In the definition of |skolemAlg|, we use |liftM2| to thread the |Supply| monad through the boring cases, but the |(->) [Term]| monad is managed manually. Augment the |(->) [Term]| monad to handle the |Forall| and |Exists| cases, and then combine this monad with |Supply| using either |StateT| or the monad coproduct {monad-coproduct}.
</blockquote>
== Stage 5 – Prenex Normal Form ==
   
Now that all the existentials have been eliminated, we can also eliminate the universally quantified variables. A formula is in ''prenex normal form'' when all the quantifiers have been pushed to the outside of other connectives. We have already removed existential quantifiers, so we are dealing only with universal quantifiers. As long as variable names don't conflict, we can freely push them as far out as we like and commute all binding sites. By convention, free variables are universally quantified, so a formula is valid if and only if the body of its prenex form is valid. Though this may sound technical, all we have to do to eliminate universal quantification is choose fresh names for all the variables and forget about their binding sites.
   
<pre>type Stage5 = TT :+: FF :+: Atom :+: NAtom :+: Or :+: And
+
<haskell class="haskell">type Stage5 = TT :+: FF :+: Atom :+: NAtom :+: Or :+: And
   
prenex :: Formula Stage4 -&gt; Formula Stage5
+
prenex :: Formula Stage4 -> Formula Stage5
prenex formula = evalState (foldFormula prenexAlg formula)
+
prenex formula = evalState (foldFormula prenexAlg formula)
 
initialUniqueSupply
 
initialUniqueSupply
   
class Functor f =&gt; Prenex f where
+
class Functor f => Prenex f where
prenexAlg :: f (Supply (Formula Stage5)) -&gt; Supply (Formula Stage5)
+
prenexAlg :: f (Supply (Formula Stage5)) -> Supply (Formula Stage5)
   
instance Prenex Forall where
+
instance Prenex Forall where
prenexAlg (Forall phi) = do uniq &lt;- fresh
+
prenexAlg (Forall phi) = do uniq <- fresh
phi (Var (&quot;x&quot; ++ show uniq))
+
phi (Var ("x" ++ show uniq))
   
instance Prenex TT where
+
instance Prenex TT where
 
prenexAlg TT = return tt
 
prenexAlg TT = return tt
instance Prenex FF where
+
instance Prenex FF where
 
prenexAlg FF = return ff
 
prenexAlg FF = return ff
instance Prenex Atom where
+
instance Prenex Atom where
 
prenexAlg (Atom p ts) = return (atom p ts)
 
prenexAlg (Atom p ts) = return (atom p ts)
instance Prenex NAtom where
+
instance Prenex NAtom where
 
prenexAlg (NAtom p ts) = return (natom p ts)
 
prenexAlg (NAtom p ts) = return (natom p ts)
instance Prenex Or where
+
instance Prenex Or where
 
prenexAlg (Or phi1 phi2) = liftM2 or phi1 phi2
 
prenexAlg (Or phi1 phi2) = liftM2 or phi1 phi2
instance Prenex And where
+
instance Prenex And where
 
prenexAlg (And phi1 phi2) = liftM2 and phi1 phi2
 
prenexAlg (And phi1 phi2) = liftM2 and phi1 phi2
   
instance (Prenex f, Prenex g) =&gt; Prenex (f :+: g) where
+
instance (Prenex f, Prenex g) => Prenex (f :+: g) where
 
prenexAlg (Inl phi) = prenexAlg phi
 
prenexAlg (Inl phi) = prenexAlg phi
prenexAlg (Inr phi) = prenexAlg phi</pre>
+
prenexAlg (Inr phi) = prenexAlg phi</haskell>
 
Since prenex is just forgetting the binders, our example is mostly unchanged.
 
Since prenex is just forgetting the binders, our example is mostly unchanged.
   
<pre>*Main&gt; texprint . prenex . skolemize . pushNotInwards
+
<haskell>*Main> texprint . prenex . skolemize . pushNotInwards
. elimImp $ foodFact</pre>
+
. elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
 
<math>\begin{array}{ll}
 
& (\neg Person(x_1) \vee Food(Skol_2(x_1)) \wedge \neg Eats(x_1, Skol_2(x_1))) \\
 
& (\neg Person(x_1) \vee Food(Skol_2(x_1)) \wedge \neg Eats(x_1, Skol_2(x_1))) \\
Line 427: Line 503:
 
</math>
 
</math>
   
== Stage 6 &ndash; Conjunctive Normal Form ==
+
== Stage 6 Conjunctive Normal Form ==
   
 
Now all we have left is possibly-negated atomic predicates connected by <math>\wedge</math> and <math>\vee</math>. This second-to-last stage distributes these over each other to reach a canonical form with all the conjunctions at the outer layer, and all the disjunctions in the inner layer.
 
Now all we have left is possibly-negated atomic predicates connected by <math>\wedge</math> and <math>\vee</math>. This second-to-last stage distributes these over each other to reach a canonical form with all the conjunctions at the outer layer, and all the disjunctions in the inner layer.
   
At this point, we no longer have a recursive type for formulas, so there&rsquo;s not too much point to re-using the old constructors.
+
At this point, we no longer have a recursive type for formulas, so there�s not too much point to re-using the old constructors.
   
<pre>type Literal = (Atom :+: NAtom) ()
+
<haskell class="haskell">type Literal = (Atom :+: NAtom) ()
 
type Clause = [Literal] -- implicit disjunction
 
type Clause = [Literal] -- implicit disjunction
 
type CNF = [Clause] -- implicit conjunction
 
type CNF = [Clause] -- implicit conjunction
   
(\/) :: Clause -&gt; Clause -&gt; Clause
+
(\/) :: Clause -> Clause -> Clause
 
(\/) = (++)
 
(\/) = (++)
   
(/\) :: CNF -&gt; CNF -&gt; CNF
+
(/\) :: CNF -> CNF -> CNF
(/\) = (++)</pre>
+
(/\) = (++)</haskell>
<pre>cnf :: Formula Stage5 -&gt; CNF
+
<haskell class="haskell">cnf :: Formula Stage5 -> CNF
 
cnf = foldFormula cnfAlg
 
cnf = foldFormula cnfAlg
   
class Functor f =&gt; ToCNF f where
+
class Functor f => ToCNF f where
cnfAlg :: f CNF -&gt; CNF
+
cnfAlg :: f CNF -> CNF
   
 
instance ToCNF TT where
 
instance ToCNF TT where
Line 456: Line 532:
 
instance ToCNF NAtom where
 
instance ToCNF NAtom where
 
cnfAlg (NAtom p ts) = [[inj (NAtom p ts)]]
 
cnfAlg (NAtom p ts) = [[inj (NAtom p ts)]]
instance ToCNF And where
+
instance ToCNF And where
 
cnfAlg (And phi1 phi2) = phi1 /\ phi2
 
cnfAlg (And phi1 phi2) = phi1 /\ phi2
instance ToCNF Or where
+
instance ToCNF Or where
cnfAlg (Or phi1 phi2) = [ a \/ b | a &lt;- phi1, b &lt;- phi2 ]
+
cnfAlg (Or phi1 phi2) = [ a \/ b | a <- phi1, b <- phi2 ]
   
instance (ToCNF f, ToCNF g) =&gt; ToCNF (f :+: g) where
+
instance (ToCNF f, ToCNF g) => ToCNF (f :+: g) where
 
cnfAlg (Inl phi) = cnfAlg phi
 
cnfAlg (Inl phi) = cnfAlg phi
cnfAlg (Inr phi) = cnfAlg phi</pre>
+
cnfAlg (Inr phi) = cnfAlg phi</haskell>
 
And we can now watch our formula get really large and ugly, as our running example illustrates:
 
And we can now watch our formula get really large and ugly, as our running example illustrates:
   
<pre>*Main&gt; texprint . cnf . prenex . skolemize
+
<haskell>*Main> texprint . cnf . prenex . skolemize
. pushNotInwards . elimImp $ foodFact</pre>
+
. pushNotInwards . elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
 
<math>\begin{array}{ll}
 
& (\neg Person(x1) \vee Food(Skol2(x1))\vee \neg Food(x2)\vee Person(Skol6(x2))) \\
 
& (\neg Person(x1) \vee Food(Skol2(x1))\vee \neg Food(x2)\vee Person(Skol6(x2))) \\
Line 476: Line 552:
 
</math>
 
</math>
   
== Stage 7 &ndash; Implicative Normal Form ==
+
== Stage 7 Implicative Normal Form ==
   
There is one more step we can take to remove all those aethetically displeasing negations in the CNF result above, reaching the particularly elegant ''implicative normal form''. We just gather all negated literals and push them to left of an implicit implication arrow, i.e. utilize this equivalence:
+
There is one more step we can take to remove all those aethetically displeasing negations in the |CNF| result above, reaching the particularly elegant ''implicative normal form''. We just gather all negated literals and push them to left of an implicit implication arrow, i.e. utilize this equivalence:
   
  +
<math>\left( \neg t_1 \vee \cdots \vee \neg t_m \vee t_{m+1} \vee \cdots \vee t_n \right)
<blockquote>( t 1 t m t m+1 t n ) ( [t 1 t m] [t m+1 t n] )
 
  +
\Leftrightarrow
</blockquote>
 
  +
\left( [t_1 \wedge \cdots \wedge t_m] \Rightarrow [t_{m+1} \vee \cdots \vee t_n] \right)
<pre>data IClause = IClause -- implicit implication
 
  +
</math>
  +
  +
<haskell class="haskell">data IClause = IClause -- implicit implication
 
[Atom ()] -- implicit conjunction
 
[Atom ()] -- implicit conjunction
 
[Atom ()] -- implicit disjunction
 
[Atom ()] -- implicit disjunction
Line 488: Line 567:
 
type INF = [IClause] -- implicit conjuction
 
type INF = [IClause] -- implicit conjuction
   
inf :: CNF -&gt; INF
+
inf :: CNF -> INF
 
inf formula = map toImpl formula
 
inf formula = map toImpl formula
where toImpl disj = IClause [ Atom p ts | Inr (NAtom p ts) &lt;- disj ]
+
where toImpl disj = IClause [ Atom p ts | Inr (NAtom p ts) <- disj ]
[ t | Inl t@(Atom _ _ ) &lt;- disj ]</pre>
+
[ t | Inl t@(Atom _ _ ) <- disj ]</haskell>
This form is especially useful for a resolution procedure because one always resolves a term on the left of an IClause with a term on the right.
+
This form is especially useful for a resolution procedure because one always resolves a term on the left of an |IClause| with a term on the right.
   
<pre>*Main&gt; texprint . inf . cnf . prenex . skolemize
+
<haskell>*Main> texprint . inf . cnf . prenex . skolemize
. pushNotInwards . elimImp $ foodFact</pre>
+
. pushNotInwards . elimImp $ foodFact</haskell>
 
<math>\begin{array}{ll}
 
<math>\begin{array}{ll}
 
& ([Person(x1) \wedge Food(x2)] \Rightarrow [Food(Skol2(x1)) \vee Person(Skol6(x2))]) \\
 
& ([Person(x1) \wedge Food(x2)] \Rightarrow [Food(Skol2(x1)) \vee Person(Skol6(x2))]) \\
Line 506: Line 585:
 
== Voilà ==
 
== Voilà ==
   
Our running example has already been pushed all the way through, so now we can relax and enjoy writing normalize .
+
Our running example has already been pushed all the way through, so now we can relax and enjoy writing |normalize|.
   
<pre>normalize :: Formula Input -&gt; INF
+
<haskell class="haskell">normalize :: Formula Input -> INF
 
normalize =
 
normalize =
inf . cnf . prenex . skolemize . pushNotInwards . elimImp</pre>
+
inf . cnf . prenex . skolemize . pushNotInwards . elimImp</haskell>
 
== Remarks ==
 
== Remarks ==
   
Freely manipulating coproducts is a great way to make extensible data types as well as to express the structure of your data and computation. Though there is some syntactic overhead, it quickly becomes routine and readable. There does appear to be additional opportunity for scrapping boilerplate code. Ideally, we could elminate both the cases for uninteresting constructors and all the &ldquo;glue&rdquo; instances for the coproduct of two functors. Perhaps given more first-class manipulation of type classes and instances typeclasses we could express that a coproduct has only one reasonable implementation for ''any'' type class that is an implemention of a functor algebra, and never write an algebra instance for (:+:) again.
+
Freely manipulating coproducts is a great way to make extensible data types as well as to express the structure of your data and computation. Though there is some syntactic overhead, it quickly becomes routine and readable. There does appear to be additional opportunity for scrapping boilerplate code. Ideally, we could elminate both the cases for uninteresting constructors and all the �glue� instances for the coproduct of two functors. Perhaps given more first-class manipulation of type classes and instances {typeclasses} we could express that a coproduct has only one reasonable implementation for ''any'' type class that is an implemention of a functor algebra, and never write an algebra instance for |(:+:)| again.
   
Finally, Data Types à la Carte is not the only way to implement coproducts. In Objective Caml, polymorphic variants ocaml-variants serve a similar purpose, allowing free recombination of variant tags. The HList library hlist also provides an encoding of polymorphic variants in Haskell.
+
Finally, Data Types à la Carte is not the only way to implement coproducts. In Objective Caml, polymorphic variants {ocaml-variants} serve a similar purpose, allowing free recombination of variant tags. The HList library {hlist} also provides an encoding of polymorphic variants in Haskell.
   
 
== About the Author ==
 
== About the Author ==
Line 521: Line 600:
 
Kenneth Knowles is a graduate student at the University of California, Santa Cruz, studying type systems, concurrency, and parallel programming. He maintains a blog of mathematical musings in Haskell at http://kennknowles.com/blog
 
Kenneth Knowles is a graduate student at the University of California, Santa Cruz, studying type systems, concurrency, and parallel programming. He maintains a blog of mathematical musings in Haskell at http://kennknowles.com/blog
   
Kenn
+
{Kenn}
   
== Appendix &ndash; Printing ==
+
== Appendix Printing ==
   
We need to lift all the document operators into the freshness monad. I wrote all this starting with a pretty printer, so I just reuse the combinators and spit out TeX (which doesn&rsquo;t need to actually be pretty in source form).
+
We need to lift all the document operators into the freshness monad. I wrote all this starting with a pretty printer, so I just reuse the combinators and spit out TeX (which doesn�t need to actually be pretty in source form).
   
<pre>sepBy str = hsep . punctuate (text str)
+
<haskell class="haskell">sepBy str = hsep . punctuate (text str)
(&lt;++&gt;) = liftM2 (&lt;+&gt;)
+
(<++>) = liftM2 (<+>)
(&lt;--&gt;) = liftM2 (&lt;&gt;)
+
(<-->) = liftM2 (<>)
 
textM = return . text
 
textM = return . text
 
parensM = liftM parens
 
parensM = liftM parens
   
class Functor f =&gt; TeXAlg f where
+
class Functor f => TeXAlg f where
texAlg :: f (Supply Doc) -&gt; Supply Doc
+
texAlg :: f (Supply Doc) -> Supply Doc
   
instance TeXAlg Atom where
+
instance TeXAlg Atom where
 
texAlg (Atom p ts) = return . tex $ Const p ts
 
texAlg (Atom p ts) = return . tex $ Const p ts
   
instance TeXAlg NAtom where
+
instance TeXAlg NAtom where
texAlg (NAtom p ts) = textM &quot;\\neg&quot; &lt;++&gt; (return . tex $ Const p ts)
+
texAlg (NAtom p ts) = textM "\\neg" <++> (return . tex $ Const p ts)
   
instance TeXAlg TT where
+
instance TeXAlg TT where
texAlg _ = textM &quot;TT&quot;
+
texAlg _ = textM "TT"
   
 
instance TeXAlg FF where
 
instance TeXAlg FF where
texAlg _ = textM &quot;FF&quot;
+
texAlg _ = textM "FF"
   
instance TeXAlg Not where
+
instance TeXAlg Not where
texAlg (Not a) = textM &quot;\\neg&quot; &lt;--&gt; parensM a
+
texAlg (Not a) = textM "\\neg" <--> parensM a
   
instance TeXAlg Or where
+
instance TeXAlg Or where
texAlg (Or a b) = parensM a
+
texAlg (Or a b) = parensM a
&lt;++&gt; textM &quot;\\vee&quot;
+
<++> textM "\\vee"
&lt;++&gt; parensM b
+
<++> parensM b
   
instance TeXAlg And where
+
instance TeXAlg And where
texAlg (And a b) = parensM a
+
texAlg (And a b) = parensM a
&lt;++&gt; textM &quot;\\wedge&quot;
+
<++> textM "\\wedge"
&lt;++&gt; parensM b
+
<++> parensM b
   
instance TeXAlg Impl where
+
instance TeXAlg Impl where
texAlg (Impl a b) = parensM a
+
texAlg (Impl a b) = parensM a
&lt;++&gt; textM &quot;\\Rightarrow&quot;
+
<++> textM "\\Rightarrow"
&lt;++&gt; parensM b
+
<++> parensM b
   
instance TeXAlg Forall where
+
instance TeXAlg Forall where
texAlg (Forall t) = do uniq &lt;- fresh
+
texAlg (Forall t) = do uniq <- fresh
let name = &quot;x_{&quot; ++ show uniq ++ &quot;}&quot;
+
let name = "x_{" ++ show uniq ++ "}"
textM &quot;\\forall&quot;
+
textM "\\forall"
&lt;++&gt; textM name
+
<++> textM name
&lt;--&gt; textM &quot;.\\,&quot;
+
<--> textM ".\\,"
&lt;++&gt; parensM (t (Var name))
+
<++> parensM (t (Var name))
   
instance TeXAlg Exists where
+
instance TeXAlg Exists where
texAlg (Exists t) = do uniq &lt;- fresh
+
texAlg (Exists t) = do uniq <- fresh
let name = &quot;c_{&quot; ++ show uniq ++ &quot;}&quot;
+
let name = "c_{" ++ show uniq ++ "}"
textM &quot;\\exists&quot;
+
textM "\\exists"
&lt;++&gt; textM name
+
<++> textM name
&lt;--&gt; textM &quot;.\\,&quot;
+
<--> textM ".\\,"
&lt;++&gt; parensM (t (Var name))</pre>
+
<++> parensM (t (Var name))</haskell>
<pre>instance (TeXAlg f, TeXAlg g) =&gt; TeXAlg (f :+: g) where
+
<haskell class="haskell">instance (TeXAlg f, TeXAlg g) => TeXAlg (f :+: g) where
 
texAlg (Inl x) = texAlg x
 
texAlg (Inl x) = texAlg x
 
texAlg (Inr x) = texAlg x
 
texAlg (Inr x) = texAlg x
   
 
class TeX a where
 
class TeX a where
tex :: a -&gt; Doc
+
tex :: a -> Doc
   
instance TeXAlg f =&gt; TeX (Formula f) where
+
instance TeXAlg f => TeX (Formula f) where
tex formula = evalState
+
tex formula = evalState
(foldFormula texAlg formula)
+
(foldFormula texAlg formula)
 
initialUniqueSupply
 
initialUniqueSupply
   
instance (Functor f, TeXAlg f) =&gt; TeX (f ()) where
+
instance (Functor f, TeXAlg f) => TeX (f ()) where
tex x = evalState
+
tex x = evalState
(texAlg . fmap (const (textM &quot;&quot;)) $ x)
+
(texAlg . fmap (const (textM "")) $ x)
 
initialUniqueSupply
 
initialUniqueSupply
   
 
instance TeX CNF where
 
instance TeX CNF where
tex formula = sepBy &quot;\\wedge&quot;
+
tex formula = sepBy "\\wedge"
$ fmap (parens . sepBy &quot;\\vee&quot; . fmap tex) formula
+
$ fmap (parens . sepBy "\\vee" . fmap tex) formula
   
   
 
instance TeX IClause where
 
instance TeX IClause where
tex (IClause p q) = (brackets $ sepBy &quot;\\wedge&quot; $ fmap tex $ p)
+
tex (IClause p q) = (brackets $ sepBy "\\wedge" $ fmap tex $ p)
&lt;+&gt; text &quot;\\Rightarrow&quot;
+
<+> text "\\Rightarrow"
&lt;+&gt; (brackets $ sepBy &quot;\\vee&quot; $ fmap tex $ q)
+
<+> (brackets $ sepBy "\\vee" $ fmap tex $ q)
   
 
instance TeX INF where
 
instance TeX INF where
tex formula = sepBy &quot;\\wedge&quot; $ fmap (parens . tex) $ formula
+
tex formula = sepBy "\\wedge" $ fmap (parens . tex) $ formula
   
   
Line 615: Line 694:
 
tex (Var x) = text x
 
tex (Var x) = text x
 
tex (Const c []) = text c
 
tex (Const c []) = text c
tex (Const c args) = text c &lt;&gt; parens (sepBy &quot;,&quot; (fmap tex args))
+
tex (Const c args) = text c <> parens (sepBy "," (fmap tex args))
   
texprint :: TeX a =&gt; a -&gt; IO ()
+
texprint :: TeX a => a -> IO ()
texprint = putStrLn . render . tex</pre>
+
texprint = putStrLn . render . tex</haskell>

Latest revision as of 22:09, 20 September 2008

Classical first-order logic has the pleasant property that a formula can be reduced to an elegant implicative normal form through a series of syntactic simplifications. Using these transformations as a vehicle, this article demonstrates how to use Haskell's type system, specifically a variation on Swierstra's "Data Types à la Carte" method, to enforce the structural correctness property that these constructors are, in fact, eliminated by each stage of the transformation.

{-# LANGUAGE RankNTypes,TypeOperators,PatternSignatures #-}
{-# LANGUAGE UndecidableInstances,IncoherentInstances #-}
{-# LANGUAGE MultiParamTypeClasses,TypeSynonymInstances #-}
{-# LANGUAGE FlexibleContexts,FlexibleInstances #-}

import Text.PrettyPrint.HughesPJ
import Control.Monad.State
import Prelude hiding (or,and,not)

{LANGUAGE pragma and module imports}

First-Order Logic

Consider the optimistic statement "Every person has a heart." If we were asked to write this formally in a logic or philosophy class, we might write the following formula of classical first-order logic:

\forall p.\, Person(p) \Rightarrow \exists h.\, Heart(h) \wedge Has(p,h)

If asked to write the same property for testing by QuickCheck {quickcheck}, we might instead produce this code:

  heartFact :: Person -> Bool
  heartFact p = has p (heart p)
     where heart :: Person -> Heart
           ...

These look rather different. Ignoring how some of the predicates moved into our types, there are two other transformations involved. First, the universally quantified p has been made a parameter, essentially making it a free variable of the formula. Second, the existentially quantified h has been replaced by a function |heart| that, for any person, returns their heart. How did we know to encode things this way? We have performed first-order quantifier elimination in our heads!

This idea has an elegant instantiation for classical first-order logic which (along with some other simple transformations) yields a useful normal form for any formula. This article is a tour of the normalization process, implemented in Haskell, using a number of Haskell programming tricks. We will begin with just a couple of formal definitions, but quickly move on to "all code, all the time."

First, we need the primitive set of terms, which are either variables or function symbols applied to a list of terms (constants are functions of zero arguments).

Next, we add atomic predicates over terms, and the logical constructions to combine atomic predicates. Since we are talking about classical logic, many constructs have duals, so they are presented side-by-side.

We will successively convert a closed (no free variables) first-order logic formula into a series of equivalent formulae, eliminating many of the above constructs. Eventually the result will be in implicative normal form, in which the placement of all the logical connectives is strictly dictated, such that it does not even require a recursive specification. Specifically, an implicative normal form is the conjunction of a set of implications, each of which has a conjunction of terms on the left and a disjunction of terms on the right:
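
Schematically (the clause and term counts, and the metavariables t and u, are my own notation), such a normal form is a conjunction of implications:

\bigwedge_{i} \left( [\, t_{i,1} \wedge \cdots \wedge t_{i,m_i} \,] \Rightarrow [\, u_{i,1} \vee \cdots \vee u_{i,n_i} \,] \right)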

The normal form may be very large compared to the input formula, but it is convenient for many purposes, such as using Prolog's resolution procedure or an SMT (Satisfiability Modulo Theories) solver. The following process for normalizing a formula is described by Russell and Norvig {Russell2003} in seven steps:

  1. Eliminate implications.
  2. Move negations inwards.
  3. Standardize variable names.
  4. Eliminate existential quantification, reaching Skolem normal form.
  5. Eliminate universal quantification, reaching prenex formal form.
  6. Distribute boolean connectives, reaching conjunctive normal form.
  7. Gather negated atoms, reaching implicative normal form.

Keeping in mind the pattern of systematically simplifying the syntax of a formula, let us consider some Haskell data structures for representing first-order logic.

A Natural Encoding

Experienced Haskellers may translate the above definitions into the following Haskell data types immediately upon reading them:

data Term  =  Const String [Term]
           |  Var String

We will reuse the constructor names from |FOL| later, though, so this is not part of the code for the demonstration.

data FOL  =  Impl FOL FOL
          |  Atom String [Term]  | Not FOL
          |  TT                  | FF
          |  Or FOL FOL          | And FOL FOL
          |  Exists String FOL   | Forall String FOL

To make things more interesting, let us work with the formula representing the statement "If there is a person that eats every food, then there is no food that no one eats."

foodFact =
  (Impl
    (Exists "p"
      (And  (Atom "Person" [Var "p"])
            (Forall "f"
              (Impl  (Atom "Food" [Var "f"])
                     (Atom "Eats" [Var "p", Var "f"])))))
    (Not (Exists "f"
           (And  (Atom "Food" [Var "f"])
                 (Not (Exists "p"
                        (And  (Atom "Person" [Var "p"])
                              (Atom "Eats" [Var "p", Var "f"]))))))))

Higher-Order Abstract Syntax

While the above encoding is natural to write down, it has drawbacks for actual work. The first thing to notice is that we are using the |String| type to represent variables, and may have to carefully manage scoping. But what do variables range over? Terms. And Haskell already has variables that range over the data type |Term|, so we can re-use Haskell's implementation; this technique is known as higher-order abstract syntax (HOAS).

data FOL  =  Impl FOL FOL
          |  Atom String [Term]    | Not FOL
          |  TT                    | FF
          |  Or  FOL FOL           | And FOL FOL
          |  Exists (Term -> FOL)  | Forall (Term -> FOL)

In a HOAS encoding, the binders of the object language (the quantifiers of first-order logic) are implemented using the binders of the metalanguage (Haskell). For example, where in the previous encoding we would represent \exists x.\, P(x) as |Exists "x" (Const "P" [Var "x"])|, we now represent it with |Exists (\x -> (Const "P" [x]))|. And our example becomes:

foodFact =
  (Impl
    (Exists $ \p ->
      (And  (Atom "Person" [p])
            (Forall $ \f ->
              (Impl  (Atom "Food" [f])
                     (Atom "Eats" [p, f])))))
    (Not (Exists $ \f ->
      (And  (Atom "Food" [f])
            (Not (Exists $ \p ->
                   (And  (Atom "Person" [p])
                          (Atom "Eats" [p, f]))))))))

Since the variables |p| and |f| have taken the place of the |String| variable names, Haskell's binding structure now ensures that we cannot construct a first-order logic formula with unbound variables, unless we use the |Var| constructor, which is still present because we will need it later. Another important benefit is that the type now expresses that the variables range over the |Term| data type, while before it was up to us to properly interpret the |String| variable names.

Modify the code of this article so that the |Var| constructor is not introduced until it is required in stage 5.

Data Types à la Carte

But even using this improved encoding, all our transformations will be of type |FOL -> FOL|. Because this type does not express the structure of the computation very precisely, we must rely on human inspection to ensure that each stage is written correctly. More importantly, we are not making manifest the requirement of certain stages that the prior stages� transformations have been performed. For example, our elimination of universal quantification is only a correct transformation when existentials have already been eliminated. A good goal for refining our type structure is to describe our data with types that reflect which connectives may be present.

Swierstra proposes a technique {dtalc} whereby a variant data type is built up by mixing and matching constructors of different functors using their coproduct |(:+:)|, which is the "smallest" functor containing both of its arguments.

data (f :+: g) a = Inl (f a) | Inr (g a)
infixr 6 :+:

instance (Functor f, Functor g) => Functor (f :+: g) where
  fmap f (Inl x) = Inl (fmap f x)
  fmap f (Inr x) = Inr (fmap f x)

The |:+:| constructor is like |Either| but it operates on functors. This difference is crucial – if |f| and |g| represent two constructors that we wish to combine into a larger recursive data type, then the type parameter |a| represents the type of their subformulae.

To work conveniently with coproducts, we define a type class |:<:| that implements subtyping by explicitly providing an injection from one of the constructors to the larger coproduct data type. There are some technical aspects to making sure current Haskell implementations can figure out the needed instances of |:<:|, but in this example we need only Swierstra's original subsumption instances, found in Figure fig:Subsumption. For your own use of the technique, discussion on Phil Wadler's blog {wadler-dtalc} and the Haskell-Cafe mailing list {haskell-cafe-dtalc} may be helpful.

class (Functor sub, Functor sup) => sub :<: sup where
     inj :: sub a -> sup a

instance Functor f => (:<:) f f where
     inj = id

instance (Functor f, Functor g) => (:<:) f (f :+: g) where
     inj = Inl

instance (Functor f, Functor g, Functor h, (f :<: g))
   => (:<:) f (h :+: g) where
     inj = Inr . inj

{Subsumption instances} (fig:Subsumption)

If the above seems a bit abstract or confusing, it will become very clear when we put it into practice. Let us immediately do so by encoding the constructors of first-order logic in this modular fashion.

data TT      a  = TT
data FF      a  = FF
data Atom    a  = Atom String [Term]
data Not     a  = Not a
data Or      a  = Or a a
data And     a  = And a a
data Impl    a  = Impl a a
data Exists  a  = Exists (Term -> a)
data Forall  a  = Forall (Term -> a)

Each constructor is parameterized by a type |a| of subformulae; |TT|, |FF|, and |Atom| do not have any subformulae so they ignore their parameter. Logical operations such as |And| have two subformulae. Correspondingly, the |And| constructor takes two arguments of type |a|.
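
As a small illustration (the values below are made up and not used elsewhere), the parameter can be instantiated with anything, even another constructor, and |inj| from Figure fig:Subsumption finds the right chain of injections:

twoLayers :: And (Or Bool)
twoLayers = And (Or True False) (Or False True)    -- 'a' instantiated to Or Bool

intoCoproduct :: (TT :+: And :+: Or) Bool
intoCoproduct = inj (And True False)               -- elaborates to Inr (Inl (And True False))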

The compound functor |Input| is now the specification of which constructors may appear in a first-order logic formula.

type Input =  TT :+: FF :+: Atom
              :+: Not :+: Or :+: And :+: Impl
              :+: Exists :+: Forall

The final step is to "tie the knot" with the following |Formula| data type, which generates a recursive formula over whatever constructors are present in its functor argument |f|.

data Formula f = In { out :: f (Formula f) }

If you have not seen this trick before, that definition may be hard to read and understand. Consider the types of |In| and |out|.

In   ::  f (Formula f) -> Formula f
out  ::  Formula f -> f (Formula f)

Observe that |In| and |out| are mutually inverse: |out . In = id| and |In . out = id|. This pair of inverses allows us to "roll" and "unroll" one layer of a formula in order to operate on the outermost constructor. Haskell does this same thing when you pattern-match against "normal" recursive data types. Like Haskell, we want to hide this rolling and unrolling. To hide the rolling, we define some helper constructors, found in Figure fig:FOLboilerplate, that inject a constructor into an arbitrary supertype, and then apply |In| to yield a |Formula|.
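
To see the rolling concretely, here is a throwaway two-constructor formula type of my own, built once by hand and once with the smart constructors defined in Figure fig:FOLboilerplate below:

type Tiny = TT :+: Not

byHand :: Formula Tiny
byHand = In (Inr (Not (In (Inl TT))))   -- every layer rolled explicitly

viaHelpers :: Formula Tiny
viaHelpers = not tt                     -- the same value, with the rolling hidden by inject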

To hide the unrolling, we use the fact that a fixpoint of a functor comes with a fold operation, or catamorphism, which we will use to traverse a formula's syntax. The function |foldFormula| takes as a parameter an algebra of the functor |f|. Intuitively, |algebra| tells us how to fold "one layer" of a formula, assuming all subformulae have already been processed. The fixpoint then provides the recursive structure of the computation once and for all.

foldFormula :: Functor f => (f a -> a) -> Formula f -> a
foldFormula algebra = algebra . fmap (foldFormula algebra) . out

We are already reaping some of the benefit of our "à la carte" technique: The boilerplate |Functor| instances in Figure fig:FOLboilerplate are not much larger than the code of |foldFormula| would have been, and they are defined modularly! Unlike a monolithic |foldFormula| implementation, this one function will work no matter which constructors are present. If the definition of |foldFormula| is unfamiliar, it is worth imagining a |Formula f| flowing through the three stages: First, |out| unrolls the formula one layer, then |fmap| recursively folds over all the subformulae. Finally, the results of the recursion are combined by |algebra|.
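
If the flow is still hazy, here is a throwaway algebra (mine, not part of the pipeline) over the |Tiny| type from the sketch above that just counts constructors; it relies on the |Functor| instances and smart constructors from the figure that follows:

sizeAlg :: (TT :+: Not) Int -> Int
sizeAlg (Inl TT)      = 1          -- a leaf counts as one constructor
sizeAlg (Inr (Not n)) = 1 + n      -- 'n' is the count already folded from the subformula

size :: Formula (TT :+: Not) -> Int
size = foldFormula sizeAlg

-- size (not (not tt)) == 3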

instance Functor TT      where fmap _ _                 = TT
instance Functor FF      where fmap _ _                 = FF
instance Functor Atom    where fmap _ (Atom p ts)       = Atom p ts
instance Functor Not     where fmap f (Not phi)         = Not (f phi)
instance Functor Or      where fmap f (Or phi1 phi2)    = Or (f phi1) (f phi2)
instance Functor And     where fmap f (And phi1 phi2)   = And (f phi1) (f phi2)
instance Functor Impl    where fmap f (Impl phi1 phi2)  = Impl (f phi1) (f phi2)
instance Functor Forall  where fmap f (Forall phi)      = Forall (f . phi)
instance Functor Exists  where fmap f (Exists phi)      = Exists (f . phi)

inject :: (g :<: f) => g (Formula f) -> Formula f
inject = In . inj

tt :: (TT :<: f) => Formula f
tt = inject TT

ff :: (FF :<: f) => Formula f
ff = inject FF

atom :: (Atom :<: f) => String -> [Term] -> Formula f
atom p ts = inject (Atom p ts)

not :: (Not :<: f) => Formula f -> Formula f
not = inject . Not

or :: (Or :<: f) => Formula f -> Formula f -> Formula f
or phi1 phi2 = inject (Or phi1 phi2)

and :: (And :<: f) => Formula f -> Formula f -> Formula f
and phi1 phi2 = inject (And phi1 phi2)

impl :: (Impl :<: f) => Formula f -> Formula f -> Formula f
impl phi1 phi2 = inject (Impl phi1 phi2)

forall :: (Forall :<: f) => (Term -> Formula f) -> Formula f
forall = inject . Forall

exists :: (Exists :<: f) => (Term -> Formula f) -> Formula f
exists = inject . Exists

{Boilerplate for First-Order Logic Constructors} (fig:FOLboilerplate)

Here is what our running example looks like with this encoding:

foodFact :: Formula Input
foodFact =  (exists $  \p  -> atom "Person" [p]
                 `and` (forall $ \f  -> atom "Food" [f]
                                     `impl` atom "Eats" [p,f]))
  `impl`
  (not (exists $ \f  -> atom "Food" [f]
                     `and` (not (exists $ \p  -> atom "Person" [p]
                                              `and` atom "Eats" [p,f]))))

A TeX pretty-printer is included as an appendix to this article. To make things readable, though, I'll doctor its output into a nice table, and remove extraneous parentheses. But I won't rewrite the variable names, since variables and binding are a key aspect of managing formulae. By convention, the printer uses c_i for existentially quantified variables and x_i for universally quantified variables.

*Main> texprint foodFact

Stage 1 – Eliminate Implications

The first transformation is an easy one, in which we "desugar" \phi_1 \Rightarrow \phi_2 into \neg\phi_1 \vee \phi_2. The high-level overview is given by the type and body of |elimImp|.

type Stage1 = TT :+: FF :+: Atom :+: Not :+: Or :+: And :+: Exists :+: Forall

elimImp :: Formula Input -> Formula Stage1
elimImp = foldFormula elimImpAlg

We take a formula containing all the constructors of first-order logic, and return a formula built without use of |Impl|. The way that |elimImp| does this is by folding the algebras |elimImpAlg| for each constructor over the recursive structure of a formula.

The function |elimImpAlg| we provide by making each constructor an instance of the |ElimImp| type class. This class specifies for a given constructor how to eliminate implications – for most constructors this is just the identity function, though we must rebuild an identical term to alter its type. Perhaps there is a way to use generic programming to eliminate the uninteresting cases.

class Functor f => ElimImp f where
  elimImpAlg :: f (Formula Stage1) -> Formula Stage1

instance ElimImp Impl where elimImpAlg (Impl phi1 phi2)  = (not phi1) `or` phi2

instance ElimImp TT      where elimImpAlg TT                = tt
instance ElimImp FF      where elimImpAlg FF                = ff
instance ElimImp Atom    where elimImpAlg (Atom p ts)       = atom p ts
instance ElimImp Not     where elimImpAlg (Not phi)         = not phi
instance ElimImp Or      where elimImpAlg (Or phi1 phi2)    = phi1 `or` phi2
instance ElimImp And     where elimImpAlg (And phi1 phi2)   = phi1 `and` phi2
instance ElimImp Exists  where elimImpAlg (Exists phi)      = exists phi
instance ElimImp Forall  where elimImpAlg (Forall phi)      = forall phi

We extend |ElimImp| in the natural way over coproducts, so that whenever all our constructors are members of the type class, then their coproduct is as well.

instance (ElimImp f, ElimImp g) => ElimImp (f :+: g) where
  elimImpAlg (Inr phi) = elimImpAlg phi
  elimImpAlg (Inl phi) = elimImpAlg phi
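
As a quick sanity check on a formula of my own making, eliminating the implication in P ⇒ Q leaves ¬P ∨ Q:

implExample :: Formula Stage1
implExample = elimImp (atom "P" [] `impl` atom "Q" [])
-- texprint implExample  prints roughly  (\neg(P)) \vee (Q)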

Our running example is now

*Main> texprint . elimImp $ foodFact

Design a solution where only the |Impl| case of |elimImpAlg| needs to be written.

Stage 2 – Move Negation Inwards

Now that implications are gone, we are left with entirely symmetrical constructions, and can always push negations in or out using duality:

\neg(\neg \phi) \Leftrightarrow \phi
\neg(\phi_1 \wedge \phi_2) \Leftrightarrow \neg\phi_1 \vee \neg\phi_2
\neg(\phi_1 \vee \phi_2) \Leftrightarrow \neg\phi_1 \wedge \neg\phi_2
\neg(\exists x.\, \phi) \Leftrightarrow \forall x.\, \neg\phi
\neg(\forall x.\, \phi) \Leftrightarrow \exists x.\, \neg\phi

Our eventual goal is to move negation all the way inward so it is only applied to atomic predicates. To express this structure in our types, we define a new constructor for negated atomic predicates as well as the type for the output of Stage 2:

data NAtom a = NAtom String [Term]

instance Functor NAtom where fmap f (NAtom p ts) = NAtom p ts

natom :: (NAtom :<: f) => String -> [Term] -> Formula f
natom p ts = inject (NAtom p ts)

type Stage2 =  TT :+: FF :+: Atom
               :+: NAtom
               :+: Or :+: And
               :+: Exists :+: Forall

One could imagine implementing duality with a multi-parameter type class that records exactly the dual of each constructor, as

class (Functor f, Functor g) => Dual f g where
  dual :: f a -> g a

Unfortunately, this leads to a situation where our subtyping must use the commutativity of coproducts, which it is incapable of doing as written. For this article, let us just define an algebra to dualize a whole formula at a time.

dualize :: Formula Stage2 -> Formula Stage2
dualize = foldFormula dualAlg

class Functor f => Dualize f where
  dualAlg :: f (Formula Stage2) -> Formula Stage2

instance Dualize TT      where dualAlg TT               = ff
instance Dualize FF      where dualAlg FF               = tt
instance Dualize Atom    where dualAlg (Atom p ts)      = natom p ts
instance Dualize NAtom   where dualAlg (NAtom p ts)     = atom p ts
instance Dualize Or      where dualAlg (Or phi1 phi2)   = phi1 `and`  phi2
instance Dualize And     where dualAlg (And phi1 phi2)  = phi1 `or`   phi2
instance Dualize Exists  where dualAlg (Exists phi)     = forall phi
instance Dualize Forall  where dualAlg (Forall phi)     = exists phi

instance (Dualize f, Dualize g) => Dualize (f :+: g) where
  dualAlg (Inl phi) = dualAlg phi
  dualAlg (Inr phi) = dualAlg phi

Now perhaps the pattern of these transformations is becoming clear. It is remarkably painless, involving just a little type class syntax as overhead, to write these functor algebras. The definition of |pushNotInwards| is another straightforward fold, with a helper type class |PushNot|. I've separated the instance for |Not| since it is the only one that does anything.

class Functor f => PushNot f where
  pushNotAlg :: f (Formula Stage2) -> Formula Stage2

instance PushNot Not    where pushNotAlg (Not phi)        = dualize phi

instance PushNot TT     where pushNotAlg TT               = tt
instance PushNot FF     where pushNotAlg FF               = ff
instance PushNot Atom   where pushNotAlg (Atom p ts)      = atom p ts
instance PushNot Or     where pushNotAlg (Or phi1 phi2)   = phi1 `or`   phi2
instance PushNot And    where pushNotAlg (And phi1 phi2)  = phi1 `and`  phi2
instance PushNot Exists where pushNotAlg (Exists phi)     = exists phi
instance PushNot Forall where pushNotAlg (Forall phi)     = forall phi

instance (PushNot f, PushNot g) => PushNot (f :+: g) where
  pushNotAlg (Inr phi) = pushNotAlg phi
  pushNotAlg (Inl phi) = pushNotAlg phi

All we have to do now is define a fold that calls |pushNotAlg|.

pushNotInwards :: Formula Stage1 -> Formula Stage2
pushNotInwards = foldFormula pushNotAlg
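
Another tiny check on a made-up formula: pushing the negation through a conjunction applies De Morgan's law, so ¬(P ∧ Q) becomes ¬P ∨ ¬Q:

deMorganExample :: Formula Stage2
deMorganExample = pushNotInwards (not (atom "P" [] `and` atom "Q" []))
-- texprint deMorganExample  prints roughly  (\neg P) \vee (\neg Q)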

Our running example now becomes:

*Main> texprint . pushNotInwards . elimImp $ foodFact

Instead of the |NAtom| constructor, define the composition of two functors |f `O` g| and re-write |Stage2 = TT :+: FF :+: Atom :+: (Not `O` Atom) :+: Or :+: And :+: Exists :+: Forall|

Encode a form of subtyping that can reason using commutativity of coproducts, and rewrite the |Dualize| algebra using a two-parameter |Dual| type class as described above.

Stage 3 – Standardize variable names

To "standardize" variable names means to choose nonconflicting names for all the variables in a formula. Since we are using higher-order abstract syntax, Haskell is handling name conflicts for now. We can immediately jump to stage 4!

Stage 4 – Skolemization

It is interesting to arrive at the definition of Skolemization via the Curry-Howard correspondence. You may be familiar with the idea that terms of type |a -> b| are proofs of the proposition "a implies b", assuming |a| and |b| are interpreted as propositions as well. This rests on a notion that a proof of |a -> b| must be some process that can take a proof of |a| and generate a proof of |b|, a very computational notion. Rephrasing the above, a function of type |a -> b| is a guarantee that for all elements of type |a|, there exists a corresponding element of type |b|. So a function type expresses an alternation of a universal quantifier with an existential. We will use this to replace all the existential quantifiers with freshly-generated functions. We can, of course, pass a unit type to a function, or a tuple of many arguments, to have as many universal quantifiers as we like.

Suppose we have \forall x.\, \forall y.\, \exists z.\, P(x,y,z); then in general there may be many choices for z, given a particular x and y. By the axiom of choice, we can create a function Skol that associates each pair (x,y) with a corresponding z arbitrarily, and then rewrite the above formula as \forall x.\, \forall y.\, P(x, y, Skol(x,y)). Technically, this formula is only equisatisfiable, but by convention I'm assuming constants to be existentially quantified.

So we need to traverse the syntax tree gathering free variables and replacing existentially quantified variables with functions of a fresh name. Since we are eliminating a binding construct, we now need to reason about fresh unique names.

Today's formulas are small, so let us use a naïve and wasteful splittable unique identifier supply. Our supply is an infinite binary tree, where moving left prepends a |0| to the bit representation of the current counter, while moving right prepends a |1|. Hence, the left and right subtrees are both infinite, nonoverlapping supplies of identifiers. The code for our unique identifier supplies is in Figure fig:unq.

Launchbury and Peyton-Jones {launchbury95state} have discussed how to use the |ST| monad to implement a much more sophisticated and space-efficient version of the same idea.

type Unique = Int
data UniqueSupply = UniqueSupply Unique UniqueSupply UniqueSupply

initialUniqueSupply :: UniqueSupply
initialUniqueSupply = genSupply 1
  where genSupply n = UniqueSupply n  (genSupply (2*n))
                                      (genSupply (2*n+1))

splitUniqueSupply :: UniqueSupply -> (UniqueSupply, UniqueSupply)
splitUniqueSupply (UniqueSupply _ l r) = (l,r)

getUnique :: UniqueSupply -> (Unique, UniqueSupply)
getUnique (UniqueSupply n l r) = (n,l)

type Supply a = State UniqueSupply a

fresh :: Supply Int
fresh = do  supply <- get
            let (uniq,rest) = getUnique supply
            put rest
            return uniq

freshes :: Supply UniqueSupply
freshes = do  supply <- get
              let (l,r) = splitUniqueSupply supply
              put r
              return l

{Unique supplies} (fig:unq)
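
For intuition, here is a small usage sketch of my own: draw two identifiers, then split off a disjoint supply for a sub-computation, which is the same trick the |Forall| case of Skolemization uses below.

demoFresh :: ([Int], Int)
demoFresh = evalState go initialUniqueSupply
  where go = do a   <- fresh               -- draws 1
                b   <- fresh               -- draws 2
                sub <- freshes             -- a disjoint, infinite sub-supply
                let c = evalState fresh sub
                return ([a, b], c)
-- demoFresh == ([1,2], 8)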

The helper algebra for Skolemization is more complex than before because a |Formula Stage2| is not directly transformed into |Formula Stage4| but into a function from its free variables to a new formula. On top of that, the final computation takes place in the |Supply| monad because of the need to generate fresh names for Skolem functions. Correspondingly, we choose the return type of the algebra to be |[Term] -> Supply (Formula Stage4)|. Thankfully, many instances are just boilerplate.

type Stage4 = TT :+: FF :+: Atom :+: NAtom :+: Or :+: And :+: Forall

class Functor f => Skolem f where
  skolemAlg ::  f ([Term] -> Supply (Formula Stage4))
                -> [Term] -> Supply (Formula Stage4)

instance Skolem TT where
  skolemAlg TT               xs = return tt
instance Skolem FF where
  skolemAlg FF               xs = return ff
instance Skolem Atom where
  skolemAlg (Atom p ts)      xs = return (atom p ts)
instance Skolem NAtom where
  skolemAlg (NAtom p ts)     xs = return (natom p ts)
instance Skolem Or where
  skolemAlg (Or phi1 phi2)   xs = liftM2 or  (phi1 xs) (phi2 xs)
instance Skolem And where
  skolemAlg (And phi1 phi2)  xs = liftM2 and (phi1 xs) (phi2 xs)

instance (Skolem f, Skolem g) => Skolem (f :+: g) where
  skolemAlg (Inr phi) = skolemAlg phi
  skolemAlg (Inl phi) = skolemAlg phi

In the case for a universal quantifier \forall x.\, \phi, any existentials contained within are parameterized by the variable x, so we add x to the list of free variables and Skolemize the body \phi. Implementing this in Haskell, the algebra instance must be a function from |Forall (Term -> [Term] -> Supply (Formula Stage4))| to |[Term] -> Supply (Forall (Term -> Formula Stage4))|, which involves some juggling of the unique supply.

instance Skolem Forall where
  skolemAlg (Forall phi) xs =
    do  supply <- freshes
        return (forall $ \x -> evalState (phi x (x:xs)) supply)

From the recursive result |phi|, we need to construct a new body for the |forall| constructor that has a pure body: It must not run in the |Supply| monad. Yet the body must contain only names that do not conflict with those used in the rest of this fold. This is why we need a moderately complex |UniqueSupply| data structure: To break off a disjoint-yet-infinite supply for use by |evalState| in the body of a |forall|, restoring purity to the body by running the |Supply| computation to completion.

Finally, the key instance for existentials is actually quite simple – just generate a fresh name and apply the Skolem function to all the arguments |xs|. The application |phi (Const name xs)| is how we express replacement of the existentially bound term with |Const name xs| with higher-order abstract syntax.

instance Skolem Exists where
  skolemAlg (Exists phi) xs =
    do  uniq <- fresh
        phi (Const ("Skol" ++ show uniq) xs) xs

After folding the Skolemization algebra over a formula, since we are assuming the formula is closed, we apply the result of folding |skolemAlg| to the empty list of free variables. Then the resulting |Supply (Formula Stage4)| computation is run to completion starting with the |initialUniqueSupply|.

skolemize :: Formula Stage2 -> Formula Stage4
skolemize formula = evalState (foldResult []) initialUniqueSupply
  where  foldResult :: [Term] -> Supply (Formula Stage4)
         foldResult = foldFormula skolemAlg formula
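
On a formula small enough to read (my own example, not the running one), the lone existential becomes a Skolem constant, since there are no enclosing universal quantifiers to parameterize it:

skolExample :: Formula Stage4
skolExample = skolemize (exists $ \x -> atom "P" [x])
-- texprint skolExample  prints  P(Skol1)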

And the output is starting to get interesting:

*Main> texprint . skolemize . pushNotInwards . elimImp $ foodFact

In the first line, Skol_2 maps a person to a food they don't eat. In the second line, Skol_6 maps a food to a person who doesn't eat it.

In the definition of |skolemAlg|, we use |liftM2| to thread the |Supply| monad through the boring cases, but the |(->) [Term]| monad is managed manually. Augment the |(->) [Term]| monad to handle the |Forall| and |Exists| cases, and then combine this monad with |Supply| using either |StateT| or the monad coproduct {monad-coproduct}.

Stage 5 – Prenex Normal Form

Now that all the existentials have been eliminated, we can also eliminate the universally quantified variables. A formula is in prenex normal form when all the quantifiers have been pushed to the outside of other connectives. We have already removed existential quantifiers, so we are dealing only with universal quantifiers. As long as variable names don't conflict, we can freely push them as far out as we like and commute all binding sites. By convention, free variables are universally quantified, so a formula is valid if and only if the body of its prenex form is valid. Though this may sound technical, all we have to do to eliminate universal quantification is choose fresh names for all the variables and forget about their binding sites.

type Stage5 = TT :+: FF :+: Atom :+: NAtom :+: Or :+: And

prenex :: Formula Stage4 -> Formula Stage5
prenex formula =  evalState  (foldFormula prenexAlg formula)
                             initialUniqueSupply

class Functor f => Prenex f where
  prenexAlg :: f (Supply (Formula Stage5)) -> Supply (Formula Stage5)

instance Prenex Forall where
  prenexAlg (Forall phi) = do  uniq <- fresh
                               phi (Var ("x" ++ show uniq))

instance Prenex TT where
  prenexAlg TT               = return tt
instance Prenex FF where
  prenexAlg FF               = return ff
instance Prenex Atom where
  prenexAlg (Atom p ts)      = return (atom p ts)
instance Prenex NAtom where
  prenexAlg (NAtom p ts)     = return (natom p ts)
instance Prenex Or where
  prenexAlg (Or phi1 phi2)   = liftM2 or phi1 phi2
instance Prenex And where
  prenexAlg (And phi1 phi2)  = liftM2 and phi1 phi2

instance (Prenex f, Prenex g) => Prenex (f :+: g) where
  prenexAlg (Inl phi) = prenexAlg phi
  prenexAlg (Inr phi) = prenexAlg phi
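
Again on a small made-up input: the universal binder disappears and its variable is printed under a fresh name.

prenexExample :: Formula Stage5
prenexExample = prenex (forall $ \x -> atom "P" [x])
-- texprint prenexExample  prints  P(x1)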

Since prenex is just forgetting the binders, our example is mostly unchanged.

*Main> texprint . prenex . skolemize . pushNotInwards
                . elimImp $ foodFact

Stage 6 – Conjunctive Normal Form

Now all we have left is possibly-negated atomic predicates connected by \wedge and \vee. This second-to-last stage distributes these over each other to reach a canonical form with all the conjunctions at the outer layer, and all the disjunctions in the inner layer.

At this point, we no longer have a recursive type for formulas, so there's not too much point to re-using the old constructors.

type Literal  = (Atom :+: NAtom) ()
type Clause   = [Literal]  -- implicit disjunction
type CNF      = [Clause]   -- implicit conjunction

(\/) :: Clause -> Clause -> Clause
(\/) = (++)

(/\) :: CNF -> CNF -> CNF
(/\) = (++)

cnf :: Formula Stage5 -> CNF
cnf = foldFormula cnfAlg

class Functor f => ToCNF f where
  cnfAlg :: f CNF -> CNF

instance ToCNF TT where
  cnfAlg TT               = []
instance ToCNF FF where
  cnfAlg FF               = [[]]
instance ToCNF Atom where
  cnfAlg (Atom p ts)      = [[inj (Atom p ts)]]
instance ToCNF NAtom where
  cnfAlg (NAtom p ts)     = [[inj (NAtom p ts)]]
instance ToCNF And where
  cnfAlg (And phi1 phi2)  = phi1 /\ phi2
instance ToCNF Or where
  cnfAlg (Or phi1 phi2)   = [ a \/ b |  a <- phi1, b <- phi2 ]

instance (ToCNF f, ToCNF g) => ToCNF (f :+: g) where
  cnfAlg (Inl phi) = cnfAlg phi
  cnfAlg (Inr phi) = cnfAlg phi
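
To see the distribution in the |Or| case on its own (the atoms A, B, and C are placeholders of my choosing), (A ∧ B) ∨ C becomes (A ∨ C) ∧ (B ∨ C):

distributeExample :: CNF
distributeExample = cnfAlg (Or [[litA], [litB]] [[litC]])
  where litA, litB, litC :: Literal
        litA = inj (Atom "A" [])
        litB = inj (Atom "B" [])
        litC = inj (Atom "C" [])
-- distributeExample = [[litA, litC], [litB, litC]]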

And we can now watch our formula get really large and ugly, as our running example illustrates:

*Main> texprint . cnf . prenex . skolemize
                . pushNotInwards . elimImp $ foodFact

Stage 7 – Implicative Normal Form

There is one more step we can take to remove all those aesthetically displeasing negations in the |CNF| result above, reaching the particularly elegant implicative normal form. We just gather all negated literals and push them to the left of an implicit implication arrow, i.e. utilize this equivalence:

\left( \neg t_1 \vee \cdots \vee \neg t_m \vee t_{m+1} \vee \cdots \vee t_n \right) \Leftrightarrow \left( [t_1 \wedge \cdots \wedge t_m] \Rightarrow [t_{m+1} \vee \cdots \vee t_n] \right)

data IClause =  IClause      -- implicit implication
                  [Atom ()]  -- implicit conjunction
                  [Atom ()]  -- implicit disjunction

type INF = [IClause]         -- implicit conjunction

inf :: CNF -> INF
inf formula = map toImpl formula
    where toImpl disj = IClause  [ Atom p ts  | Inr (NAtom p ts)    <- disj ]
                                 [ t          | Inl t@(Atom  _ _ )  <- disj ]
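
A one-clause check with placeholder atoms of my own: the CNF clause ¬P ∨ Q turns into the implication [P] ⇒ [Q].

infExample :: INF
infExample = inf [[inj (NAtom "P" []), inj (Atom "Q" [])]]
-- texprint infExample  prints  ([P] \Rightarrow [Q])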

This form is especially useful for a resolution procedure because one always resolves a term on the left of an |IClause| with a term on the right.

*Main> texprint . inf . cnf . prenex . skolemize
                . pushNotInwards . elimImp $ foodFact

Voilà

Our running example has already been pushed all the way through, so now we can relax and enjoy writing |normalize|.

normalize :: Formula Input -> INF
normalize =
  inf . cnf . prenex . skolemize . pushNotInwards . elimImp
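
And a minimal driver of my own to push the running example through the whole pipeline and print the result:

main :: IO ()
main = texprint (normalize foodFact)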

Remarks

Freely manipulating coproducts is a great way to make extensible data types as well as to express the structure of your data and computation. Though there is some syntactic overhead, it quickly becomes routine and readable. There does appear to be additional opportunity for scrapping boilerplate code. Ideally, we could eliminate both the cases for uninteresting constructors and all the "glue" instances for the coproduct of two functors. Perhaps given more first-class manipulation of type classes and instances {typeclasses} we could express that a coproduct has only one reasonable implementation for any type class that is an implementation of a functor algebra, and never write an algebra instance for |(:+:)| again.

Finally, Data Types à la Carte is not the only way to implement coproducts. In Objective Caml, polymorphic variants {ocaml-variants} serve a similar purpose, allowing free recombination of variant tags. The HList library {hlist} also provides an encoding of polymorphic variants in Haskell.

About the Author

Kenneth Knowles is a graduate student at the University of California, Santa Cruz, studying type systems, concurrency, and parallel programming. He maintains a blog of mathematical musings in Haskell at http://kennknowles.com/blog

{Kenn}

Appendix – Printing

We need to lift all the document operators into the freshness monad. I wrote all this starting with a pretty printer, so I just reuse the combinators and spit out TeX (which doesn't need to actually be pretty in source form).

sepBy str = hsep . punctuate (text str)
(<++>) = liftM2 (<+>)
(<-->) = liftM2 (<>)
textM = return . text
parensM = liftM parens

class Functor f => TeXAlg f where
  texAlg :: f (Supply Doc) -> Supply Doc

instance TeXAlg Atom where
  texAlg (Atom p ts)   = return . tex $ Const p ts

instance TeXAlg NAtom where
  texAlg (NAtom p ts)  = textM "\\neg" <++> (return . tex $ Const p ts)

instance TeXAlg TT where
  texAlg _             = textM "TT"

instance TeXAlg FF where
  texAlg _             = textM "FF"

instance TeXAlg Not where
  texAlg (Not a)       = textM "\\neg" <--> parensM a

instance TeXAlg Or where
  texAlg (Or a b)      =  parensM a
                          <++> textM "\\vee"
                          <++> parensM b

instance TeXAlg And where
  texAlg (And a b)     =  parensM a
                          <++> textM "\\wedge"
                          <++> parensM b

instance TeXAlg Impl where
  texAlg (Impl a b)    =  parensM a
                          <++> textM "\\Rightarrow"
                          <++> parensM b

instance TeXAlg Forall where
  texAlg (Forall t)    = do  uniq <- fresh
                             let name = "x_{" ++ show uniq ++ "}"
                             textM "\\forall"
                               <++> textM name
                               <--> textM ".\\,"
                               <++> parensM (t (Var name))

instance TeXAlg Exists where
  texAlg (Exists t)    = do  uniq <- fresh
                             let name = "c_{" ++ show uniq ++ "}"
                             textM "\\exists"
                                <++> textM name
                                <--> textM ".\\,"
                                <++> parensM (t (Var name))

instance (TeXAlg f, TeXAlg g) => TeXAlg (f :+: g) where
  texAlg (Inl x)  = texAlg x
  texAlg (Inr x)  = texAlg x

class TeX a where
    tex :: a -> Doc

instance TeXAlg f => TeX (Formula f) where
  tex formula = evalState
                  (foldFormula texAlg formula)
                  initialUniqueSupply

instance (Functor f, TeXAlg f) => TeX (f ()) where
  tex x = evalState
            (texAlg . fmap (const (textM "")) $ x)
            initialUniqueSupply

instance TeX CNF where
  tex formula = sepBy "\\wedge"
                $ fmap (parens . sepBy "\\vee" . fmap tex) formula


instance TeX IClause where
    tex (IClause p q) = (brackets $ sepBy "\\wedge" $ fmap tex $ p)
                        <+> text "\\Rightarrow"
                        <+> (brackets $ sepBy "\\vee" $ fmap tex $ q)

instance TeX INF where
    tex formula = sepBy "\\wedge" $ fmap (parens . tex) $ formula


instance TeX Term where
    tex (Var x)         = text x
    tex (Const c [])    = text c
    tex (Const c args)  = text c <> parens (sepBy "," (fmap tex args))

texprint :: TeX a => a -> IO ()
texprint = putStrLn . render . tex