<div>The obstacle to parallelizing monadic computations that could have been merely Applicative seems to be the >> operator.</div><div><br></div><div>But if Monad is defined as a subclass of Applicative:</div><div>
<br></div><a href="http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal">http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal</a><div><br></div><div>then (>>) can be defined as (>>) = (*>), and parallelization should be possible?</div>
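To make the intuition concrete, here is a small sketch (not from the thread; the Validation type is purely illustrative): an Applicative whose (*>) observes the effects of both arguments, so the two computations are independent and could in principle run in parallel, whereas a Monad's (>>) must run the left action to completion before the right one can start.

```haskell
-- Illustrative sketch: (*>) combines both effects, unlike monadic (>>).
-- Validation collects *all* errors and has no law-abiding Monad instance.
data Validation e a = Failure e | Success a
  deriving (Eq, Show)

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

instance Monoid e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2)  -- both effects observed
  Failure e1 <*> _          = Failure e1
  _          <*> Failure e2 = Failure e2
  Success f  <*> Success a  = Success (f a)

main :: IO ()
main = do
  -- (*>) keeps the effects of both sides; a monadic (>>) with
  -- short-circuiting would only ever report the first failure.
  print (Failure ["a"] *> (Failure ["b"] :: Validation [String] ()))
  print (Success 1 *> (Success 2 :: Validation [String] Int))
```

Since (*>) needs nothing from its left argument's result, an implementation is free to evaluate both sides independently.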
<div><br></div><div>Alberto<br><br><div class="gmail_quote">2011/9/5 Sebastian Fischer <span dir="ltr"><<a href="mailto:fischer@nii.ac.jp">fischer@nii.ac.jp</a>></span><br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
Hi again,<div><br></div><div>I think the following rules capture what Max's program does if applied after the usual desugaring of do-notation:</div><div><br></div><div><div>a >>= \p -> return b</div><div> --></div>
<div>(\p -> b) <$> a</div><div><br></div><div>a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint</div><div> --></div><div>((\p -> f) <$> a) <*> b</div><div><br>
</div><div>a >>= \p -> f <$> b -- 'free p' and 'free f' disjoint</div><div> --></div><div>f <$> (a >>= \p -> b)</div><div><br></div><div>a >>= \p -> b <*> c -- 'free p' and 'free c' disjoint</div>
<div> --></div><div>(a >>= \p -> b) <*> c</div><div><br></div><div>a >>= \p -> b >>= \q -> c -- 'free p' and 'free b' disjoint</div><div> --></div><div>liftA2 (,) a b >>= \(p,q) -> c</div>
<div><br></div><div>a >>= \p -> b >> c -- 'free p' and 'free b' disjoint</div><div> --></div><div>(a << b) >>= \p -> c</div><div><br></div><div>The second and third rules overlap and should be applied in this order. 'free' gives all free variables of a pattern 'p' or an expression 'a', 'b', 'c', or 'f'.</div>
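To illustrate, here is one do-block pushed through the rules by hand (my own worked example, not from the thread), checking that each rewrite step agrees with the original. Note how the final form needs only Applicative where the original needed Monad:

```haskell
import Control.Applicative (liftA2)

-- Original:  do { p <- a; q <- b; return (p + q) }  after the usual desugaring:
original :: Monad m => m Int -> m Int -> m Int
original a b = a >>= \p -> b >>= \q -> return (p + q)

-- Fifth rule ('free p' and 'free b' are disjoint):
--   a >>= \p -> b >>= \q -> c  -->  liftA2 (,) a b >>= \(p,q) -> c
step1 :: Monad m => m Int -> m Int -> m Int
step1 a b = liftA2 (,) a b >>= \(p, q) -> return (p + q)

-- First rule:  x >>= \pat -> return e  -->  (\pat -> e) <$> x
-- The Monad constraint has been weakened to Applicative.
step2 :: Applicative f => f Int -> f Int -> f Int
step2 a b = (\(p, q) -> p + q) <$> liftA2 (,) a b

main :: IO ()
main = do
  print (original (Just 1) (Just 2))  -- Just 3
  print (step1 (Just 1) (Just 2))     -- Just 3
  print (step2 (Just 1) (Just 2))     -- Just 3
```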
<div><br></div><div>If return, >>, and << are defined in Applicative, I think the rules also achieve the minimal necessary class constraint for Thomas's examples that do not involve aliasing of return.</div>
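A small sketch of the constraint claim (the type signatures are inferred by GHC, not prescribed by the rules): once return is Applicative's pure, rewriting it away with the first rule can weaken the requirement all the way down to Functor.

```haskell
-- Desugared do-notation: needs Monad because of (>>=) and return.
sugared :: Monad m => m Int -> m Int
sugared a = a >>= \p -> return (p + 1)

-- After the first rule the same program needs only Functor.
rewritten :: Functor f => f Int -> f Int
rewritten a = (\p -> p + 1) <$> a

main :: IO ()
main = do
  print (sugared [1, 2])    -- [2,3]
  print (rewritten [1, 2])  -- [2,3]
```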
<div><br></div><font color="#888888"><div>Sebastian</div></font><div><div></div><div class="h5"><br><div class="gmail_quote">On Mon, Sep 5, 2011 at 5:37 PM, Sebastian Fischer <span dir="ltr"><<a href="mailto:fischer@nii.ac.jp" target="_blank">fischer@nii.ac.jp</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi Max, <div><br></div><div>thanks for your proposal!<br><br><div class="gmail_quote"><div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div>Using the Applicative methods to optimise "do" desugaring is still</div>
possible, it's just not that easy to have that weaken the generated<br>
constraint from Monad to Applicative since only degenerate programs<br>
like this one won't use a Monad method:<br></blockquote><div><br></div></div><div>Is this still true once Monad is a subclass of Applicative that defines return?</div><div><br></div><div>I'd still somewhat prefer it if return were merged with the preceding statement, so that sometimes only a Functor constraint is generated, but I think I should adjust your desugaring then..</div>
<div><br></div><font color="#888888"><div>Sebastian</div></font></div><br></div>
</blockquote></div><br></div></div></div>
<br>_______________________________________________<br>
Haskell-Cafe mailing list<br>
<a href="mailto:Haskell-Cafe@haskell.org">Haskell-Cafe@haskell.org</a><br>
<a href="http://www.haskell.org/mailman/listinfo/haskell-cafe" target="_blank">http://www.haskell.org/mailman/listinfo/haskell-cafe</a><br>
<br></blockquote></div><br></div>