
Hi again,
I think the following rules capture what Max's program does if applied after
the usual desugaring of do-notation:
a >>= \p -> return b
  --> (\p -> b) <$> a

a >>= \p -> f <$> b            -- 'free p' and 'free b' disjoint
  --> ((\p -> f) <$> a) <*> b

a >>= \p -> f <$> b            -- 'free p' and 'free f' disjoint
  --> f <$> (a >>= \p -> b)

a >>= \p -> b <*> c            -- 'free p' and 'free c' disjoint
  --> (a >>= \p -> b) <*> c

a >>= \p -> b >>= \q -> c      -- 'free p' and 'free b' disjoint
  --> liftA2 (,) a b >>= \(p,q) -> c

a >>= \p -> b >> c             -- 'free p' and 'free b' disjoint
  --> (a << b) >>= \p -> c
The second and third rules overlap and should be applied in this order.
'free' gives all free variables of a pattern 'p' or of an expression
'a', 'b', 'c', or 'f'.
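
For concreteness, here is how I expect the rules to rewrite a small example
(the derivation and the identifiers are mine, just as an illustration):

  do x <- getLine
     y <- getLine
     return (x ++ y)

  -- usual desugaring
  getLine >>= \x -> (getLine >>= \y -> return (x ++ y))

  -- first rule, applied to the inner bind
  getLine >>= \x -> ((\y -> x ++ y) <$> getLine)

  -- second rule ('x' is not free in the second getLine)
  ((\x -> \y -> x ++ y) <$> getLine) <*> getLine

so in the end only Functor and Applicative methods are used.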
If return, >>, and << are defined in Applicative, I think the rules also
achieve the minimal necessary class constraint for Thomas's examples that do
not involve aliasing of return.
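
In terms of today's Control.Applicative I read these as pure, *>, and <*.
For instance, the last rule would then produce something like this (only my
reading of the proposal, with a made-up example):

  do line <- getLine
     putStrLn "done reading"
     print (length line)

  -- last rule, with '<<' read as Applicative's '<*'
  (getLine <* putStrLn "done reading") >>= \line -> print (length line)

The >>= is still there, but sequencing the first two statements no longer
needs any Monad method.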
Sebastian
On Mon, Sep 5, 2011 at 5:37 PM, Sebastian Fischer wrote:
Hi Max,
thanks for your proposal!
Using the Applicative methods to optimise "do" desugaring is still
possible; it's just not that easy to have that weaken the generated constraint from Monad to Applicative, since only degenerate programs like this one won't use a Monad method:
Is this still true, once Monad is a subclass of Applicative which defines return?
I'd still somewhat prefer it if return gets merged with the preceding statement, so that sometimes only a Functor constraint is generated, but I think I should adjust your desugaring then.
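(For example, do { x <- a; return (f x) } would then become f <$> a, which only needs a Functor constraint.)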
Sebastian