Re: Make it possible to evaluate monadic actions when assigning record fields

Monads are a part of Haskell. The more tiresome monads are to use, the more tiresome Haskell is to use. I suggest we leave the decision of where and when to use them to each individual user of the language.

/Adde
In any case, I'm *strongly against* further syntactic sugar for monads, including #1518. The more tiresome monads are, the more incentive you have to avoid them.
Regards, apfelmus

Adde wrote:
apfelmus wrote:
In any case, I'm *strongly against* further syntactic sugar for monads, including #1518. The more tiresome monads are, the more incentive you have to avoid them.
Monads are a part of Haskell. The more tiresome monads are to use, the more tiresome Haskell is to use. I suggest we leave the decision of where and when to use them to each individual user of the language.
Well, only the monads will remain as "tiresome" as they are now. Also, the most intriguing fact about monads (or rather about Haskell) is that they are not a built-in part of the language; they are "just" a type class. Sure, there is do-notation, but >>= is not much clumsier than that.

In the end, I think that applicatively used monads are the wrong abstraction. For occasional use, liftM2 and `ap` often suffice. If the applicative style becomes prevalent, then Applicative Functors are likely to be the conceptually better choice. This is especially true for MonadReader. Arithmetic expressions are a case for liftM, too. And an instance (Monad m, Num a) => Num (m a) allows one to keep infix (+) and (*).

Put differently, I don't see a compelling use-case for the proposed syntax extension. But I've seen many misused monads.

Regards, apfelmus
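A minimal sketch of the Num (m a) instance mentioned above; it needs FlexibleInstances, and in practice it overlaps with too many other instances to be more than an illustration of the idea:

{-# LANGUAGE FlexibleInstances #-}
import Control.Monad (liftM, liftM2)

-- Sketch only: lifts arithmetic pointwise over any monad, so that
-- infix (+) and (*) can be used directly on monadic values.
instance (Monad m, Num a) => Num (m a) where
  (+)         = liftM2 (+)
  (-)         = liftM2 (-)
  (*)         = liftM2 (*)
  abs         = liftM abs
  signum      = liftM signum
  fromInteger = return . fromInteger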

| In the end, I think that applicatively used monads are the wrong
| abstraction. For occasional use, liftM2 and `ap` often suffice. If the
| applicative style becomes prevalent, then Applicative Functors are
| likely to be the conceptually better choice. This is especially true
| for MonadReader. Arithmetic expressions are a case for liftM, too. And
| an instance (Monad m, Num a) => Num (m a) allows one to keep infix (+)
| and (*).
|
| Put differently, I don't see a compelling use-case for the proposed
| syntax extension. But I've seen many misused monads.

Can you be more explicit? Monadic code is often over-linearised. I want to generate fresh names, say, and suddenly I have to name sub-expressions. Not all sub-expressions, just the effectful ones.

It's a pain to define liftM_yes_no_yes, which takes an effectful argument in first and third position, and a non-effectful one as the second arg:

liftM_yes_no_yes :: (a->b->c->m d) -> m a -> b -> m c -> m d

What a pain. So we have either

do { ...; va <- a; vc <- c; f va b vc; ... }

or

do { ...; liftM_yes_no_yes f a b c; ...}

or, with some syntactic sugar...

do { ...; f $(a) b $(c); ...}

The liftM solution is even more awkward if I want

f (g $(a)) b c

for example.

I'm thinking of this as a very superficial piece of syntactic sugar, aimed at avoiding the excessive linearization of monadic code. Nothing deep.

Of course adding more syntactic sugar has a cost; but this one looks like having a good power-to-weight ratio.

Simon
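For concreteness, a minimal sketch of the liftM_yes_no_yes helper named above, together with the left-to-right desugaring the spliced call is assumed to stand for (the $(...) syntax itself is only a proposal):

-- Effectful arguments in the first and third positions, a pure one in the second.
liftM_yes_no_yes :: Monad m => (a -> b -> c -> m d) -> m a -> b -> m c -> m d
liftM_yes_no_yes f ma b mc = do
  va <- ma
  vc <- mc
  f va b vc

-- Under a left-to-right rule, the spliced call
--   do { ...; f $(a) b $(c); ... }
-- would desugar to roughly
--   do { ...; va <- a; vc <- c; f va b vc; ... }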

apfelmus wrote:
In the end, I think that applicatively used monads are the wrong abstraction.
Simon Peyton-Jones wrote:
Can you be more explicit? Monadic code is often over-linearised. I want to generate fresh names, say, and suddenly I have to name sub-expressions. Not all sub-expressions, just the effectful ones.
Neil Mitchell wrote:
The monad in question simply supplies free variables, so could be applied in any order.
I see, the dreaded name-supply problem. Well, it just seems that monads are not quite the right abstraction for that one, right? (Despite the fact that monads make for a good implementation.) In other words, my opinion is that it's not the monadic code that is over-linearized but the code that is over-monadized.

The main property of a "monad" for name-supply is of course

f >> g = g >> f

modulo alpha-conversion. Although we have to specify an order, it's completely immaterial. There _has_ to be a better abstraction than "monad" to capture this!

SPJ:
It's a pain to define liftM_yes_no_yes, which takes an effectful argument in first and third position, and a non-effectful one as the second arg:
liftM_yes_no_yes :: (a->b->c->m d) -> m a -> b -> m c -> m d
What a pain. So we have either
do { ...; va <- a; vc <- c; f va b vc; ... }
or do { ...; liftM_yes_no_yes f a b c; ...}
or, with some syntactic sugar...
do { ...; f $(a) b $(c); ...}
The liftM solution is even more awkward if I want
f (g $(a)) b c
for example.
(The last one is already a typo; I guess you mean f $(g $(a)) b c.) Neil:
-- helpers, ' is yes, _ is no
coreLet__ x y = f $ CoreLet x y
coreLet_' x y = f . CoreLet x =<< y
coreLet x y = f $ CoreLet x y
f (CoreApp (CoreLet bind xs) ys) = coreLet bind $(coreApp xs ys)
Uhm, but you guys know that while (m a -> a) requires the proposed syntactic sugar, (a -> m a) is easy?

r = return

elevateM  f x1       = join $ liftM  f x1
elevateM3 f x1 x2 x3 = join $ liftM3 f x1 x2 x3

do { ...; elevateM3 f a (r$ b) c; ...}

elevateM3 f (elevateM g a) (r$ b) (r$ c)

coreLet x y = liftM2 CoreLet x y >>= f
g (CoreApp (CoreLet bind xs) ys) = coreLet (r$ bind) (coreApp xs ys)

In other words, you can avoid creating special yes_no_yes wrappers by creating a yes_yes_yes wrapper and turning a no into a yes here and there. No need for turning yes into no.

One could even use left-associative infix operators

($@)  :: (a -> b) -> a -> b
($@@) :: Monad m => (m a -> b) -> a -> b
($@)  = id
($@@) = (. return)

and currying:

elevateM3 f $@@ (elevateM g $@@ a) $@ b $@ c
g (CoreApp (CoreLet bind xs) ys) = coreLet $@ bind $@@ coreApp xs ys

The intention is that a (mixed!) sequence of operators should parse as

f $@ x1 $@@ x2 $@ x3 = ((f $@ x1) $@@ x2) $@ x3

Leaving such games aside, the fact that yes_yes_yes wrappers subsume the others is a hint that types like

NameSupply Expr -> NameSupply Expr -> NameSupply Expr

are fundamental. In other words, the right type for expressions is probably not Expr but NameSupply Expr, with the interpretation that the latter represents expressions with "holes" where the concrete names for variables are filled in. The crucial point is that holes may be _shared_, i.e. supplying free variable names will fill several holes with the same name.

Put differently, the question is: how to share names without giving concrete names too early? I think it's exactly the same question as: how to make sharing observable? This is a problem that haunts many people and probably every DSL-embedder (Lava for hardware, Pan for images, Henning Thielemann's work on sound synthesis, Frisby for parser combinators). In a sense, writing a Haskell compiler is similar to embedding a DSL.

I have no practical experience with the name-supply problem. So, the first question is: can the name-supply problem indeed be solved by some form of observable sharing? Having a concrete toy language showing common patterns of the name-supply problem would be ideal for that. The second task would be to solve the observable-sharing problem; _that_ would require some syntactic sugar.

Currently, one can use MonadFix to "solve" it. Let's take parser combinators as an example. The left-recursive grammar

digit   -> 0 | .. | 9
number  -> number' digit
number' -> ε | number

can be represented by something like

mdo
  digit   <- newRule $ foldr1 (|||) [0..9]
  number  <- newRule $ number' &&& digit
  number' <- newRule $ empty ||| number

This way, we can observe the sharing and break the left recursion. But of course, the monad is nothing more than syntactic sugar here; the order does not matter at all. What we really want to write is a custom let-expression

let' digit   = foldr1 (|||) [0..9]
     number  = number' &&& digit
     number' = empty ||| number

and still be able to observe sharing.

SPJ:
I'm thinking of this as a very superficial piece of syntactic sugar, aimed at avoiding the excessive linearization of monadic code. Nothing deep.
I don't agree: the excessive linearization is a feature, not a bug. Even if the sugar were nothing deep, that shouldn't stop us from thinking deeply about it :)

Regards, apfelmus
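A toy sketch of the newRule combinator used in the grammar example above, with invented names (RuleId, GrammarM) and a plain State-based supply of identifiers standing in for a real grammar representation; it is only meant to show how mdo (via MonadFix) makes the sharing of rules observable:

{-# LANGUAGE RecursiveDo #-}
import Control.Monad.State

type RuleId   = Int
newtype Rule  = Rule RuleId deriving (Eq, Show)
type GrammarM = State RuleId

-- Hand out a fresh identifier for a rule body; a fuller implementation
-- would also record the body in a table keyed by that identifier.
newRule :: body -> GrammarM Rule
newRule _body = do
  i <- get
  put (i + 1)
  return (Rule i)

-- The grammar from the post, with the combinators (|||, &&&) elided:
-- the mdo knot-tying lets number' mention number before it is defined,
-- and the explicit Rule identifiers are what make the sharing visible.
grammar :: GrammarM Rule
grammar = mdo
  digit   <- newRule ()
  number  <- newRule (number', digit)
  number' <- newRule number
  return number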

apfelmus wrote:
I see, the dreaded name-supply problem. Well, it just seems that monads are not quite the right abstraction for that one, right? (Despite that monads make up a good implementation). In other words, my opinion is that it's not the monadic code that is over-linearized but the code that is over-monadized.
The main property of a "monad" for name-supply is of course
f >> g = g >> f
modulo alpha-conversion. Although we have to specify an order, it's completely immaterial. There _has_ to be a better abstraction than "monad" to capture this!
I agree completely! It would be nice if the compiler could choose any order (or none at all, depending on the implementation) at its discretion.

But if serialization (where the gaps are filled with actual strings as names) produces different results depending on the order chosen - that is, if the name-supply "monad" fails f >> g = g >> f in a way that matters - then we have a purity violation whenever the order is not well-defined. Big problem. So we need to make sure the names are used in an abstract enough manner - perhaps only through an Eq instance, so that sharing/uniqueness/identity is detectable but nothing more.

In dependently-typed languages I think we could have data structures that were fast but provably didn't depend on the ordering for their operation, for example for lookup. Association lists only need Eq, but can be a little slow...

With this technique in Haskell, Frisby for example would examine the infinite tree starting at the returned root and choose an order for internal use based on the shape of the tree (which represents a *cyclic* graph) -- it would be unable to use the ordering provided by name-supply sequencing (the monad). Which is just fine for it (except for being O((number of rules)^2) to construct a parser, using association lists, I think). Further abstraction could be added with a primitive UniqueNameMap of sorts, similar to (Map UniqueName a)... not enjoyable, so it might have to be implemented in terms of some unsafe operations :-/. I hope my pessimism here is proved wrong :)

Isaac
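A small sketch of the "Eq only" discipline suggested above, with invented module and function names: the name type is abstract and exposes nothing but equality, so client code cannot depend on the order in which names were generated:

module UniqueName (Unique, supply) where

-- Only equality leaks out: sharing/identity is observable, the
-- underlying Ints (and hence the generation order) are not.
newtype Unique = Unique Int

instance Eq Unique where
  Unique a == Unique b = a == b

-- An infinite supply of distinct names.
supply :: [Unique]
supply = map Unique [0 ..]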

Hi
Put differently, I don't see a compelling use-case for the proposed syntax extension. But I've seen many misused monads.
A compelling use-case:

http://darcs.haskell.org/yhc/src/libraries/core/Yhc/Core/Simplify.hs

Look at coreSimplifyExprUniqueExt. And from that file:

-- helpers, ' is yes, _ is no
coreCase__ x y = f $ CoreCase x y ; coreCase_' x y = f . CoreCase x =<< y
coreLet__  x y = f $ CoreLet  x y ; coreLet_'  x y = f . CoreLet  x =<< y
coreLam__  x y = f $ CoreLam  x y ; coreLam_'  x y = f . CoreLam  x =<< y
coreApp__  x y = f $ CoreApp  x y ; coreApp'_  x y = f . flip CoreApp y =<< x

i.e. I've manually defined ' and _ variants to thread monadic effects through in quite horrible ways. The monad in question simply supplies free variables, so could be applied in any order.

I think with this extension I can define:

coreCase x y = f $ CoreCase x y
coreLet  x y = f $ CoreLet  x y
...

And taking just one rule, before:

f (CoreApp (CoreLet bind xs) ys) = coreLet_' bind (coreApp__ xs ys)

After:

f (CoreApp (CoreLet bind xs) ys) = coreLet bind $(coreApp xs ys)

Much nicer!

This extension seems like a great idea - my only concern would be about the order of computations. Clearly left-to-right makes sense, but this may break some natural intuition in Haskell:

flip f a b == f b a
flip f $(a) $(b) /= f $(b) $(a)

I don't think that is a show stopper though.

Thanks

Neil
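To make the ordering concern concrete, here is the same point spelled out with an ordinary counter-based supply in place of the (hypothetical) splice syntax, assuming a left-to-right rule; since both splices are the same `fresh` action, the second definition plays the role of f $(b) $(a):

import Control.Monad.State

fresh :: State Int Int
fresh = do { n <- get; put (n + 1); return n }

pair :: Int -> Int -> (Int, Int)
pair = (,)

-- "flip pair $(fresh) $(fresh)" with left-to-right splices:
demoFlip :: (Int, Int)
demoFlip = evalState (do { a <- fresh; b <- fresh; return (flip pair a b) }) 0
-- demoFlip == (1, 0)

-- "pair $(fresh) $(fresh)" with left-to-right splices:
demoPlain :: (Int, Int)
demoPlain = evalState (do { a <- fresh; b <- fresh; return (pair a b) }) 0
-- demoPlain == (0, 1)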

Hello Neil, Thursday, July 12, 2007, 3:10:10 PM, you wrote:
This extension seems like a great idea - my only concern would be about the order of computations. Clearly left-to-right makes sense, but this may break some natural intuition in Haskell:
I think that an undefined order would be the best one.

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hi
This extension seems like a great idea - my only concern would be about the order of computations. Clearly left-to-right makes sense, but this may break some natural intuition in Haskell:
I think that an undefined order would be the best one.
Using "undefined" does not make for great reading in a standard! You just know that Hugs will pick right to left, Yhc will pick left to right, and GHC will offer a flag to choose between them ;) We have to pick, and there is only one logical choice - left to right. Thanks Neil

Put differently, I don't see a compelling use-case for the proposed syntax extension. But I've seen many misused monads.
A compelling use-case:
http://darcs.haskell.org/yhc/src/libraries/core/Yhc/Core/Simplify.hs
Look at coreSimplifyExprUniqueExt

-- helpers, ' is yes, _ is no
coreCase__ x y = f $ CoreCase x y
coreCase_' x y = f . CoreCase x =<< y
hmm. i'd say that is a compelling misuse-case!-) although apfelmus probably meant misuses in the sense that a different structure than monads would often have been a better fit, i just mean readability. why not simply define

coreCaseM x y = f =<< liftM2 CoreCase x y

etc. then this
f (CoreApp (CoreLet bind xs) ys) = coreLet_' bind (coreApp__ xs ys)
would become somewhat lengthier, but much easier to read:

f (CoreApp (CoreLet bind xs) ys) = coreLetM (return bind) (coreAppM (return xs) (return ys))

in particular, there aren't 2^n variations of the functions, but simple return-wrappers around parameters that immediately tell me what is going on.

if anything, i'd often like a quieter/shorter way to write (return x) - since this pattern usually requires parentheses, some form of semantic brackets would do nicely, to express lifting of pure values. that would serve the same overall purpose, without semantic ambiguities, wouldn't it?

btw, this half-implicit recursion via an f embedded in constructors looks rather odd to me. why not separate rules and recursion?

claus
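A self-contained toy version of that style, with invented stand-ins (Expr, Supply, simplify) for the Yhc.Core types: one fully monadic smart constructor per constructor, and return-wrapped pure arguments at the call sites:

import Control.Monad (liftM2)
import Control.Monad.State

data Expr = Var String | App Expr Expr | Let String Expr Expr
  deriving Show

type Supply = State Int

-- The "f" of the original code: one simplification pass, here a no-op.
simplify :: Expr -> Supply Expr
simplify = return

appM :: Supply Expr -> Supply Expr -> Supply Expr
appM x y = simplify =<< liftM2 App x y

letM :: String -> Supply Expr -> Supply Expr -> Supply Expr
letM v x y = simplify =<< liftM2 (Let v) x y

-- Pure sub-terms are wrapped in return at the call site, so there is
-- one constructor wrapper instead of 2^n yes/no variants.
example :: Supply Expr
example = letM "x" (return (Var "y")) (appM (return (Var "x")) (return (Var "z")))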

Hope you don't mind my butting in.

If you're looking for a "compelling use case" to make programming with monads more natural in Haskell, I'd say STM makes for a good one. There is no question there as to whether a monad is the right way to do STM; it is required.

In working on some code recently that uses STM rather heavily, what I've found is that there are a couple things that make the experience somewhat painful despite the general promise of the technique. The most important is fixed by Simon's proposal for monad splices. I'd literally jump for joy if something like this were included in a future version of Haskell!

Frankly, I don't think anyone will be convinced to use a more functional style by making programming in the STM monad more painful to do in Haskell. Instead, they will be convinced to be more hesitant about using Haskell for concurrent programming.

-- Chris Smith
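A sketch of the kind of STM code being described (the transfer example and its names are invented): today every readTVar result needs its own binding, whereas with the proposed splices the last two lines could collapse into something like writeTVar to ($(readTVar to) + amount):

import Control.Concurrent.STM

-- Move an amount between two accounts, retrying until funds suffice.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  toBalance <- readTVar to
  writeTVar to (toBalance + amount)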

Hello,
I find the naming of values that is introduced by the "do" notation
useful and I am not at all convinced that the extra sugar that is
being proposed here makes the language simpler. It seems to me that
the only way to know that a piece of code is safe would be to:
i) do the translation in your head to convince yourself that the
effects will be executed in the right order,
ii) make sure that you are using a commutative monad, or the order of
the effects is not important.
I like the current status quo because:
i) for values I can apply the usual pure reasoning that I am used to in Haskell,
ii) this makes it easier to refactor code, at least for me (e.g., it
is easy to insert a 'seq' here and there to control evaluation if I
have to)
iii) I find that it is easier to understand code that is a bit more
explicit because I have to keep fewer translations in my head
iv) I can use "fmap" and "ap" (and friends, e.g., like what Connor
suggested) to achieve a style that is similar to the imperative one,
when I think that the explicit naming is clumsy.
-Iavor
PS: Someone suggested that this syntactic sugar might be useful in the
context of STM or concurrent programming but I am skeptical about that
example because in that setting the order of effects is very
important... I could be convinced with examples though :-)
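A small illustration of the fmap/ap style mentioned in point (iv), with invented names: effectful arguments are applied with `ap`, pure ones via return, so no new syntax is needed:

import Control.Monad (ap)

data Item = Item String Int Bool

-- lookupName and lookupCount stand for effectful computations; the Bool is pure.
buildItem :: Monad m => m String -> m Int -> Bool -> m Item
buildItem lookupName lookupCount flag =
  Item `fmap` lookupName `ap` lookupCount `ap` return flag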
On 7/15/07, Chris Smith
Hope you don't mind my butting in.
If you're looking for a "compelling use case" to make programming with monads more natural in Haskell, I'd say STM makes for a good one. There is no question there as to whether a monad is the right way to do STM; it is required.
In working on some code recently that uses STM rather heavily, what I've found is that there are a couple things that make the experience somewhat painful despite the general promise of the technique. The most important is fixed by Simon's proposal for monad splices. I'd literally jump for joy if something like this were included in a future version of Haskell!
Frankly, I don't think anyone will be convinced to use a more functional style by making programming in the STM monad more painful to do in Haskell. Instead, they will be convinced to be more hesitant about using Haskell for concurrent programming.
-- Chris Smith
participants (9)

- Adde
- apfelmus
- Bulat Ziganshin
- Chris Smith
- Claus Reinke
- Iavor Diatchki
- Isaac Dupree
- Neil Mitchell
- Simon Peyton-Jones