
I don't quite understand how this would work. For example, would it work
for these examples?
    do x <- blah
       let foo = return
       foo (f x)          -- Using an alias of return/pure

    do x <- Just blah
       Just (f x)         -- another form of aliasing

    do x <- blah
       return (g x x)     -- could perhaps be turned into:
                          --   (\x -> g x x) <$> blah

    do x <- blah
       y <- return x
       return (f y)       -- = f <$> blah ?

    do x1 <- foo1         -- effect order must not be reversed
       x2 <- foo2
       return (f x2 x1)   -- note reversed order

    -- multiple uses of applicative
    do x1 <- foo1
       y <- return (f x1)
       x2 <- foo2
       y2 <- return (g y x2)
       return y2
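As a concrete sanity check of the fourth example, here is a minimal hand-written sketch; the names blah and f, and the choice of Maybe, are placeholders rather than anything the proposal prescribes:

    import Control.Applicative ((<$>))

    blah :: Maybe Int
    blah = Just 21

    f :: Int -> Int
    f = (* 2)

    -- the do-block form, going through "y <- return x"
    original :: Maybe Int
    original = do x <- blah
                  y <- return x
                  return (f y)

    -- the proposed applicative form
    rewritten :: Maybe Int
    rewritten = f <$> blah

    main :: IO ()
    main = print (original == rewritten)   -- prints True

By the monad laws the two forms agree for any monad, which is exactly the equivalence the "= f <$> blah ?" comment above is asking about.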
So I guess it's possible to detect the pattern:

    do x1 <- foo1; ...; xN <- fooN; [res <-] return (f {x1..xN})

where {x1..xN} means "x1..xN in some order", and turn it into:

    do [res <-] (\x1..xN -> f {x1..xN}) <$> foo1 <*> ... <*> fooN
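Spelled out for N = 2, using the reversed-argument example from above (foo1, foo2, and f are again placeholders over Maybe so the snippet type-checks):

    import Control.Applicative

    foo1, foo2 :: Maybe Int       -- stand-ins for two arbitrary effects
    foo1 = Just 1
    foo2 = Just 2

    f :: Int -> Int -> (Int, Int)
    f x2 x1 = (x2, x1)

    -- monadic form; note the reversed argument order in f
    monadic :: Maybe (Int, Int)
    monadic = do x1 <- foo1
                 x2 <- foo2
                 return (f x2 x1)

    -- proposed translation: the effects stay in foo1, foo2 order,
    -- only the lambda reorders the arguments
    applicative :: Maybe (Int, Int)
    applicative = (\x1 x2 -> f x2 x1) <$> foo1 <*> foo2

    main :: IO ()
    main = print (monadic == applicative)   -- prints True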
One open issue would be the detection of the correct "return"-like thing. This is why monad comprehensions would help somewhat (the result expression is syntactically separate, so no return is needed), but not fully, because it's still possible to put "x <- return y" in the generators part. The current desugaring of do-notation is very simple because it doesn't even need to know about the monad laws. The laws are used implicitly by the optimiser (e.g., "foo >>= \x -> return x" is optimised to just "foo" after inlining), but the desugarer itself never needs to know about them.
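For reference, the existing translation is roughly the following set of purely local rules (pattern-match failure and fail omitted), plus a small example of the kind of monad-law rewrite that is left to the optimiser:

    -- do { e }              ~>  e
    -- do { e; stmts }       ~>  e >> do { stmts }
    -- do { x <- e; stmts }  ~>  e >>= \x -> do { stmts }

    -- The right-identity law (m >>= return == m) is only exploited later,
    -- by the simplifier after inlining:
    rightIdentity :: Monad m => m a -> m a
    rightIdentity foo = foo >>= \x -> return x   -- simplifies to just foo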
On 4 September 2011 03:34, Daniel Peebles wrote:
Hi all,
I was wondering what people thought of a smarter do notation. Currently, there's an almost trivial desugaring of do notation into (>>=), (>>), and fail (grr!), which seems to naturally imply Monad (although, oddly enough, return is never used in the desugaring). The simplicity of the desugaring is nice, but in many cases people write monadic code that could easily have been Applicative.
For example, if I write in a do block:
    x <- action1
    y <- action2
    z <- action3
    return (f x y z)
that doesn't require any of the context-sensitivity that Monads give you, and could be processed a lot more efficiently by a clever Applicative instance (a parser, for instance). Furthermore, if return values are ignored, we could use the (<$), (<*), or (*>) operators, which could make the whole thing even more efficient in some instances.
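Concretely, the block above can already be written by hand in Applicative style, and the ignored-result case maps onto the cheaper combinators; the names below are placeholders, and the type signatures are only meant to show which class each version needs:

    import Control.Applicative

    -- the Applicative form of the do block above
    applicativeVersion :: Applicative g
                       => g a -> g b -> g c -> (a -> b -> c -> d) -> g d
    applicativeVersion action1 action2 action3 f =
      f <$> action1 <*> action2 <*> action3

    -- if only action2's result is wanted, e.g.
    --   do { _ <- action1; x <- action2; _ <- action3; return x }
    -- the (*>) / (<*) operators do the same job:
    ignoringEnds :: Applicative g => g a -> g b -> g c -> g b
    ignoringEnds action1 action2 action3 =
      action1 *> action2 <* action3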
Of course, the fact that the return method is explicitly mentioned in my example suggests that unless we do some real voodoo, Applicative would have to be a superclass of Monad for this to make sense. But with the new default superclass instances people are talking about in GHC, that doesn't seem too unlikely in the near future.
On the implementation side, it seems fairly straightforward to determine whether Applicative is enough for a given do block. Does anyone have any opinions on whether this would be a worthwhile change? The downsides seem to be a more complex desugaring pass (although still something most people could perform in their heads), and some instability when making small changes to the code in a do block. If you make a small change to use a variable before the return, you instantly jump from Applicative to Monad and might break types in your program. I'm not convinced that's necessarily a bad thing, though.
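As an illustration of that jump (all names hypothetical): in the first block below no later action depends on an earlier result, so the proposed desugaring could pick Applicative, while in the second the argument of action2 depends on x, which genuinely needs (>>=), i.e. Monad:

    -- could be desugared with <$> / <*> alone
    independent :: Monad m => m a -> m b -> (a -> b -> c) -> m c
    independent action1 action2 f = do
      x <- action1
      y <- action2
      return (f x y)

    -- the "small change": action2 now takes x, so Monad is required
    dependent :: Monad m => m a -> (a -> m b) -> (a -> b -> c) -> m c
    dependent action1 action2 f = do
      x <- action1
      y <- action2 x
      return (f x y)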
Any thoughts?
Thanks, Dan
-- Push the envelope. Watch it bend.