a regressive view of support for imperative programming in Haskell

All of the recent talk of support for imperative programming in Haskell makes me really nervous. To be honest, I've always been a bit uncomfortable even with monad syntax. Instead of:

  do x <- cmd1
     y <- cmd2
     ...
     return e

I was always perfectly happy with:

  cmd1 >>= \x->
  cmd2 >>= \y->
  ...
  return e

Functions are in my comfort zone; syntax that hides them takes me out of my comfort zone.

In my opinion one of the key principles in the design of Haskell has been the insistence on purity. It is arguably what led the Haskell designers to "discover" the monadic solution to IO, and is more generally what inspired many researchers to "discover" purely functional solutions to many seemingly imperative problems. With references and mutable data structures and IO and who-knows-what-else to support the Imperative Way, this discovery process becomes stunted.

Well, you could argue, monad syntax is what really made Haskell become more accepted by the masses, and you may be right (although perhaps Simon's extraordinary performance at OSCON is more of what we need). On the other hand, if we give imperative programmers the tools to do all the things they are used to doing in C++, then we will be depriving them of the joys of programming in the Functional Way. How many times have we seen responses to newbie posts along the lines of, "That's how you'd do it in C++, but in Haskell here's a better way...".

I hope I don't start a flame war with this post -- I'm just expressing my opinion, which admittedly is probably regressive rather than progressive :-).

-Paul

--
Professor Paul Hudak
Department of Computer Science    Office: (203) 432-1235
Yale University                   FAX:    (203) 432-0593
P.O. Box 208285                   email:  paul.hudak@yale.edu
New Haven, CT 06520-8285          WWW:    www.cs.yale.edu/~hudak
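(For concreteness, the same contrast with ordinary IO actions in place of the placeholder cmd1/cmd2; greetDo and greetBind are invented names for this sketch:)

    import Data.Char (toUpper)

    -- do-notation version
    greetDo :: IO ()
    greetDo = do
      first  <- getLine
      second <- getLine
      putStrLn (map toUpper (first ++ " " ++ second))

    -- the same action written directly with (>>=) and lambdas
    greetBind :: IO ()
    greetBind =
      getLine >>= \first ->
      getLine >>= \second ->
      putStrLn (map toUpper (first ++ " " ++ second))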

IMHO, being a newbie with 20 years of professional C/C++/C# experience but hardly any Haskell experience, I agree with this. I find the monad syntax very confusing, because it looks so much like imperative code, but it isn't. Personally I also liked the Concurrent Clean approach, although that also introduced extra syntax for the compiler, while 'cmd1 >>= \x ->' does not. You have to type more, but you see much more clearly what is going on.

Peter

PS: It would be very nice for beginners to have a special tool / text editor that allows you to see the desugared form of monads and other constructs.

On Wed, 8 Aug 2007, peterv wrote:
PS: It would be very nice for beginners to have a special tool / text editor that allows you to see the desugared form of monads and other constructs.
An editor that can be configured to display various inferred details, annotations and desugarings in the middle of the source would be useful for all kinds of purposes, and certainly not just to newbies! For example, it could be used to show what information the type checker had inferred before hitting an error...

--
flippa@flippac.org

"The reason for this is simple yet profound. Equations of the form x = x are completely useless. All interesting equations are of the form x = y." -- John C. Baez

On 8/8/07, Philippa Cowderoy
On Wed, 8 Aug 2007, peterv wrote: {... An editor that can be configured to display various inferred details, annotations and desugarings in the middle of the source would be useful for all kinds of purposes, and certainly not just to newbies!
...} I couldn't agree more.. that would be a godsend to have.. One would think that the code to do the translation has to be in the compiler already, though I doubt it produces a readable format.. oh well.

gene

On 8/9/07, peterv
IMHO and being a newbie having 20 years of professional C/C++/C# experience but hardly any Haskell experience, I agree with this… I find the monad syntax very confusing, because it looks so much like imperative code, but it isn't. Personally I also liked the Concurrent Clean approach, although this also introduced extra syntax for the compiler, while 'cmd1 >>= \x…' does not. You have to type more, but you see much clearer what is going on.
Yeah, I kind of agree too. The only way I figured out sort of how to use Monads was to write everything out in >>= syntax. It was longer and uglier, but it made more sense.

That said, I sort of see Haskell as a prototype language, whose good points will be added into other languages. Every program needs to have a prototype, and Haskell is that. So, whilst I'm tempted to add that an easy language needs to have only a single way of doing anything (so throwing away the "do" syntax makes the language easier by reducing the number of things to learn), actually for a prototype language the rule is probably "anything goes", and then the best ideas get added to the non-prototype language later on.

I can't agree with your point about Haskell being (just) a prototype language (assuming that's what you meant). If that's the case, it won't last very long. Languages need to be something you can write real, practical applications in. Fortunately, Haskell isn't just a prototype language. I'm running a Haskell program (xmonad) every minute I'm working on my computer, and it's better than the C program (ion) that it replaced (with a code base about 1/40th the size). I'm sure Haskell isn't suitable for all application domains yet, but there are plenty of domains in which it can make its mark, and the frontier is going to keep getting pushed back.

Mike

bf3:
IMHO and being a newbie having 20 years of professional C/C++/C# experience but hardly any Haskell experience, I agree with this... I find the monad syntax very confusing, because it looks so much like imperative code, but it isn't. Personally I also liked the Concurrent Clean approach, although this also introduced extra syntax for the compiler, while `cmd1 >>= \x...' does not. You have to type more, but you see much clearer what is going on.
Peter
PS: It would be very nice for beginners to have a special tool / text editor that allows you see the desugared form of monads and other constructs...
Drop by #haskell and use lambdabot:

  11:21   dons> @undo do x <- getChar ; putChar (toUpper x)
  11:21   lambdabot> getChar >>= \ x -> putChar (toUpper x)

-- Don

On Wed, 8 Aug 2007, Paul Hudak wrote: ...
Well, you could argue, monad syntax is what really made Haskell become more accepted by the masses, and you may be right (although perhaps Simon's extraordinary performance at OSCON is more of what we need). On the other hand, if we give imperative programmers the tools to do all the things they are used to doing in C++, then we will be depriving them of the joys of programming in the Functional Way. How many times have we seen responses to newbie posts along the lines of, "That's how you'd do it in C++, but in Haskell here's a better way...".
It seems to me that Brian Hulley threw the glove down hard. Does pure functional Haskell offer a better way to write a GUI?

I love the functional stuff myself, but if real applications depend on extensive imperative logic, we're best served by a language that cheerfully embraces the inevitable and handles it well. Monads, the do syntax, whatever it takes (I have a soft spot for O'Haskell, but alas I must be nearly alone on that.) Hopefully, it's still better, and not at all irreconcilable with the Functional Way.

Donn Cave, donn@drizzle.com

(That's a genuine question, by the way - my attempt to build a current Haskell GUI library on NetBSD foundered and I have no experience with Haskell GUI coding, but it's on the list of things I would like to look at. So if there's one that really illustrates the virtues of pure functional Haskell programming, that would be a welcome tip!)

Donn Cave wrote:
(I have a soft spot for O'Haskell, but alas I must be nearly alone on that.)
You are /not/ alone :-) I always found it very sad that O'Haskell and also its successor Timber (with all the good real-time stuff added) died the 'quick death' of most research languages.

Cheers
Ben

On 8/9/07, Benjamin Franksen
Donn Cave wrote:
(I have a soft spot for O'Haskell, but alas I must be nearly alone on that.)
You are /not/ alone :-) I always found it very sad that O'Haskell and also its successor Timber (with all the good real-time stuff added) died the 'quick death' of most research languages.
There is also RHaskell, which implements an O'Haskell-like system as a Haskell library. http://www.informatik.uni-freiburg.de/~wehr/haskell/

David Menendez wrote:
On 8/9/07, Benjamin Franksen
wrote: Donn Cave wrote:
(I have a soft spot for O'Haskell, but alas I must be nearly alone on that.)
You are /not/ alone :-) I always found it very sad that O'Haskell and also its successor Timber (with all the good real-time stuff added) died the 'quick death' of most research languages.
There is also RHaskell, which implements an O'Haskell-like system as a Haskell library.
Thanks for the pointer, I didn't know about this. Will take a look. Cheers Ben

On 8/9/07, Benjamin Franksen
David Menendez wrote:
There is also RHaskell, which implements an O'Haskell-like system as a Haskell library.
Thanks for the pointer, I didn't know about this. Will take a look.
Perhaps a wiki page is in order. Reactive objects are an appealing way to organize programs, but there isn't much information on-line about people's experience with them.

On Wed, Aug 08, 2007 at 02:20:39PM -0400, Paul Hudak wrote:
All of the recent talk of support for imperative programming in Haskell makes me really nervous. To be honest, I've always been a bit uncomfortable even with monad syntax. Instead of:
do x <- cmd1
   y <- cmd2
   ...
   return e
I was always perfectly happy with:
cmd1 >>= \x->
cmd2 >>= \y->
...
return e
I may be stating the obvious here, but I strongly prefer the do syntax. It's nice to know the other also, but the combination of do+indenting makes complicated code much clearer than the nested parentheses that would be required with purely >>= syntax.
Well, you could argue, monad syntax is what really made Haskell become more accepted by the masses, and you may be right (although perhaps Simon's extraordinary performance at OSCON is more of what we need). On the other hand, if we give imperative programmers the tools to do all the things they are used to doing in C++, then we will be depriving them of the joys of programming in the Functional Way. How many times have we seen responses to newbie posts along the lines of, "That's how you'd do it in C++, but in Haskell here's a better way...".
(I suppose I just did as you suggested I could...) I'd say that monadic programming is *not* the same as C++ programming. "Normal" programming in a Haskell monad (e.g. parsing, Maybe, IO) does not involve IORefs, and generally behaves quite nicely.

Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like:

  do if predicateOnFileContents (<- readFile "foo") then ...

instead of either

  do contents <- readFile "foo"
     if predicateOnFileContents contents then ...

or (as you'd prefer)

  readFile "foo" >>= \contents ->
  if predicateOnFileContents contents then ...

As long as the sugar has a pretty obvious desugaring (which I seem to recall it did), I don't see how it's likely to make things worse. And there are an awful lot of situations where code would be reduced by a line, and (much more importantly) by a temporary variable. Every temporary variable declared adds that much syntactic overhead to a function, since the reader is forced to scan through the rest of the function to see if it is ever used again before he can understand what is being done and how it's being used.

--
David Roundy
Department of Physics Oregon State University

On 9 Aug 2007, at 8:41 am, David Roundy wrote:
I may be stating the obvious here, but I strongly prefer the do syntax. It's nice to know the other also, but the combination of do+indenting makes complicated code much clearer than the nested parentheses that would be required with purely >>= syntax.
Er, what nested parentheses would those be?

  do e            =>    e

  do e            =>    e >>
     rest                rest'

  do p <- e       =>    e >>= \p ->
     rest                rest'

  do let d        =>    let d in
     rest                rest'

We get extra >>, >>=, \, ->, and "in" tokens, but no new parentheses. The example in the Report makes this clear:

  do putStr "x: "
     l <- getLine
     return (words l)

desugars to

  putStr "x: " >>
  getLine >>= \l ->
  return (words l)

with no extra parentheses. [Not that this actually works in GHC 6; the implied flush isn't done.]

On 8/9/07, ok
We get extra >>, >>=, \, ->, and "in" tokens, but no new parentheses.
Yes exactly. It's the >>= and >> that get rid of the parentheses, and reverse the order of the operations. I can't remember where I saw this, but somewhere there is a monad tutorial that starts really from the basics, which is that a "do" list is essentially something like:

  f (g (h initialvalue))

... which we can rewrite as something like:

  h initialvalue |> g |> f

... for a suitable definition of |>, something like (off the top of my head, almost certainly wrong):

  (|>) f g = g f
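(For the record, that guessed definition does work for plain, non-monadic pipelining; a tiny self-contained sketch with an invented example:)

    -- x |> f applies f to x, so pipelines read left to right
    infixl 1 |>
    (|>) :: a -> (a -> b) -> b
    x |> f = f x

    pipeline :: Int
    pipeline = [1 .. 10] |> map (* 2) |> sum    -- 110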

On Thu, Aug 09, 2007 at 04:02:05PM +1200, ok wrote:
On 9 Aug 2007, at 8:41 am, David Roundy wrote:
I may be stating the obvious here, but I strongly prefer the do syntax. It's nice to know the other also, but the combination of do+indenting makes complicated code much clearer than the nested parentheses that would be required with purely >>= syntax.
Er, what nested parentheses would those be?
  do x1 <- e1
     if x1 then do x2 <- e2
                   xx <- if x2 then e3
                               else do x4 <- e4
                                       x5 <- e5
                                       e6 x4 x5
                   e7 xx x1
           else do x8 <- e8
                   x9 <- e9
                   e10 x8 x9 x1
     x11

would become something like

  e1 >>= \x1 ->
  if x1 then e2 >>= \x2 ->
             if x2 then e3
                   else e4 >>= \x4 ->
                        e5 >>= \x5 ->
                        e6 x4 x5
             >>= (flip e7) x1
        else e8 >>= \x8 ->
             e9 >>= \x9 ->
             e10 x8 x9 x1
  >> x11

except that you'd have to figure out where to add parentheses. I'm sure I'd end up writing extra parentheses, but if you put in the minimal number of parentheses, then I doubt I'd be able to read the code. If you only consider the case of trivial code, then you're right, there are no extra parentheses required. This is the beauty of the do notation, it allows one to write actual real complicated monadic code in a form that is actually comprehensible.

--
David Roundy
Department of Physics Oregon State University

On 10 Aug 2007, at 6:42 am, David Roundy wrote:
do x1 <- e1
   if x1 then do x2 <- e2
                 xx <- if x2 then e3
                             else do x4 <- e4
                                     x5 <- e5
                                     e6 x4 x5
                 e7 xx x1
         else do x8 <- e8
                 x9 <- e9
                 e10 x8 x9 x1
   x11
Granted. If you desugar nested dos then you need extra parentheses. (Basically, the invisible curly braces turn visible as parentheses.) But then, I don't regard this example as readable, and in true "lots of little functions" style would name the steps. I especially dislike the irregular indentation one gets with do/if/do. Anyone remember when Haskell extended list comprehension syntax to monads? Just as I was about to get my head around it, it went away.
This is the beauty of the do notation, it allows one to write actual real complicated monadic code in a form that is actually comprehensible.
It seems we are now in complete agreement except for "comprehensible".

David Roundy wrote:
On Wed, Aug 08, 2007 at 02:20:39PM -0400, Paul Hudak wrote: As long as the sugar has a pretty obvious desugaring (which I seem to recall it did), I don't see how it's likely to make things worse. And
Some people are arguing that the desugaring isn't obvious. Although I like the proposal to start with, I am beginning to be convinced by those arguments. For example:
do foo x
can be simplified to
foo x
under the new proposals
do x <- bar y
   foo x
would shorten to
do foo (<- bar y)
and now you really really want to remove the do, to get simply
foo (<- bar y)
but that would be illegal. The new sugar is going to remove all kinds of substitution and simplification lemmas that we have got used to.

There is also the fact that if:

  foo x = bar x x

then you call foo monadically as in

  do foo (<- baz)

You can no longer "replace foo with its definition", because if we replace that with

  do bar (<- baz) (<- baz)

...that means something rather different :(

A third example is with nested dos:

  do x <- bar y
     baz something $ do foo x

is not the same as

  do baz something $ do foo (<- bar y)

Jules

On Thu, Aug 09, 2007 at 02:08:20PM +0100, Jules Bean wrote:
David Roundy wrote:
On Wed, Aug 08, 2007 at 02:20:39PM -0400, Paul Hudak wrote: As long as the sugar has a pretty obvious desugaring (which I seem to recall it did), I don't see how it's likely to make things worse. And
Some people are arguing that the desugaring isn't obvious.
That's a reasonable objection (although I disagree).
Although I like the proposal to start with, I am beginning to be convinced by those arguments.
For example:
do foo x
can be simplified to
foo x
under the new proposals
do x <- bar y
   foo x
would shorten to
do foo (<- bar y)
and now you really really want to remove the do, to get simply
foo (<- bar y)
but that would be illegal. The new sugar is going to remove all kinds of substitution and simplification lemmas that we have got used to.
I guess I'd just have to argue that like the <- notation, the (<- ) notation is *part* of the do notation. Just as you can't pull a <- out of a do block and expect it to behave identically, you can't do the same with a (<- ). To me, the similarity with the existing do-dependent syntax makes this seem natural (and it helps that, except for pattern guards, <- is *only* used within a do block).
There is also the fact that if :
foo x = bar x x
then you call foo monadically as in
do foo (<- baz)
You can no longer "replace foo with its definition", because if replace that with
do bar (<- baz) (<- baz)
...that means something rather different :(
Again, this seems obvious, and it doesn't seem like "replace foo with its definition" is something I think of.
A third example is with nested dos:
do x <- bar y
   baz something $ do foo x
is not the same as
do baz something $ do foo (<- bar y)
Again, it all comes down to whether the "find the nearest do" is obvious. It seems pretty obvious to me. And I like the idea of someone just implementing this, and then those of us to whom it appeals can try it. I've longed for something like this (mostly for monadic ifs and cases) for quite a while now...

--
David Roundy
Department of Physics Oregon State University

On Thu, Aug 09, 2007 at 11:52:17AM -0700, David Roundy wrote:
On Thu, Aug 09, 2007 at 02:08:20PM +0100, Jules Bean wrote:
*snip*
A third example is with nested dos:
do x <- bar y
   baz something $ do foo x
is not the same as
do baz something $ do foo (<- bar y)
Again, it all comes down to whether the "find the nearest do" is obvious. It seems pretty obvious to me. And I like the idea of someone just implementing this, and then those of us to whom it appeals can try it. I've longed for something like this (mostly for monadic ifs and cases) for quite a while now...
Funny, I've been longing for the monadic case (and if) for quite a while. A monadic case is simple, it's handy, and you don't have to worry about lots of interactions:

  caseM e of alts   ==>   e >>= \x -> case x of alts

I'm convinced this would be plenty useful on its own, and also that trying to design any more comprehensive syntax quickly gets really tricky. The basic problem seems to be that functions can expect either monadic or pure arguments, and return pure or monadic values, so there are at least three possible conversions you might want at each application (considering pure<->pure and monadic<->monadic the same). Defaulting to "make things work" requires type information, and doesn't seem nearly so simple if you consider that programmers might actually want to pass around actions of the monad they are running in as values (setting GUI callbacks, using [] for String processing, etc). Actually, deciding which transformation gets juxtaposition and how to recurse into subterms seems to give a design space that might have reasonable solutions. More on that in a later message.
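(As a rough sketch of how close one can get today without new sugar: caseM below is just (>>=) under a suggestive name, and the example is invented.)

    import Data.Char (isDigit)

    -- a stand-in for the proposed monadic case: scrutinise the result of an action
    caseM :: Monad m => m a -> (a -> m b) -> m b
    caseM = (>>=)

    classify :: IO ()
    classify =
      caseM getLine $ \line ->
        case line of
          ""                -> putStrLn "empty line"
          (c:_) | isDigit c -> putStrLn "starts with a digit"
          _                 -> putStrLn "starts with something else"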
There is also the fact that if : foo x = bar x x
then you call foo monadically as in
do foo (<- baz)
You can no longer "replace foo with its definition", because if replace that with
do bar (<- baz) (<- baz)
...that means something rather different :(
Again, this seems obvious, and it doesn't seem like "replace foo with its definition" is something I think of.
One of the great things about Haskell is how completely naive you can be when you "replace foo with its definition", and still do valid equational reasoning. It would be sad if substituting a parenthesized subterm of something that looked like an expression wasn't valid. (Expanding a definition can change sharing, but at least it's denotationally equivalent.) The only slightly tricky things now are remembering that x <- exp does not define x to be exp, and what to expand a class method to.

I think I'd be happier if there was some bracketing around the expression to be transformed, to warn you to again be cautious and fearful about transforming your code.

Brandon

David Roundy wrote:
Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like:
do if predicateOnFileContents (<- readFile "foo") then ...
instead of either
do contents <- readFile "foo"
   if predicateOnFileContents contents then ...
or (as you'd prefer)
readFile "foo" >>= \contents -> if predicateOnFileContents contents then ...
Isn't this problem, namely being forced to name intermediate results, also solved by some sort of idiom bracket sugar, maybe together with the lambda case proposal? I would prefer both very much to the proposed (<- action) syntax for the same reasons that e.g. Jules Bean nicely summarized. Cheers Ben

On Thu, Aug 09, 2007 at 08:45:14PM +0200, Benjamin Franksen wrote:
David Roundy wrote:
Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like:
do if predicateOnFileContents (<- readFile "foo") then ...
instead of either
do contents <- readFile "foo"
   if predicateOnFileContents contents then ...
or (as you'd prefer)
readFile "foo" >>= \contents -> if predicateOnFileContents contents then ...
Isn't this problem, namely being forced to name intermediate results, also solved by some sort of idiom bracket sugar, maybe together with the lambda case proposal? I would prefer both very much to the proposed (<- action) syntax for the same reasons that e.g. Jules Bean nicely summarized.
I'm not familiar with the lambda case proposal, and don't know what you mean by idiom bracket sugar, but I haven't had an idea (or heard of one) that was nearly so elegant as the (<- action) proposal, which neatly allows one to lift any existing pure function or syntactic construct (except lambda expressions?) into a monad. I.e. we don't need to define a separate 'if', 'case', etc, and we don't need liftM, liftM2, liftM3, liftM4andahalf, all of which are subsumed by a single pretty syntax.

The only cost is that this syntax relies on the do notation, and thus makes the desugaring of that do notation slightly more complicated when used.

--
David Roundy
Department of Physics Oregon State University
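(For comparison, this is the kind of lifting boilerplate being referred to, written with the existing Control.Monad combinators; readInt and total are invented names for the example:)

    import Control.Monad (liftM2)

    readInt :: IO Int
    readInt = fmap read getLine

    -- lifting (+) over two monadic arguments with the existing machinery
    total :: IO Int
    total = liftM2 (+) readInt readInt

    -- the same thing written out with do notation
    total' :: IO Int
    total' = do
      x <- readInt
      y <- readInt
      return (x + y)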

David Roundy wrote:
On Thu, Aug 09, 2007 at 08:45:14PM +0200, Benjamin Franksen wrote:
David Roundy wrote:
Several times since reading the beginning of this discussion I've wished I had the new syntax so I could write something like:
do if predicateOnFileContents (<- readFile "foo") then ...
instead of either
do contents <- readFile "foo"
   if predicateOnFileContents contents then ...
or (as you'd prefer)
readFile "foo" >>= \contents -> if predicateOnFileContents contents then ...
Isn't this problem, namely being forced to name intermediate results, also solved by some sort of idiom bracket sugar, maybe together with the lambda case proposal? I would prefer both very much to the proposed (<- action) syntax for the same reasons that e.g. Jules Bean nicely summarized.
I'm not familiar with the lambda case proposal,
http://hackage.haskell.org/trac/haskell-prime/wiki/LambdaCase or, quoting from a recent post by Stefan O'Rear in this thread:
I think the CaseLambda proposal on the Haskell' wiki solves this one nicely.
mexpr >>= case of
  p1 -> branch1
  p2 -> branch2
You still have to use >>=, but you don't have to name the scrutinee (and names are expensive cognitively).
i.e. your example would become

  fmap predicateOnFileContents (readFile "foo") >>= case of
    True  -> ...
    False -> ...

(use liftM instead of fmap, if you prefer)
and don't know what you mean by idiom bracket sugar,
As has been already mentioned in this thread, in http://www.soi.city.ac.uk/~ross/papers/Applicative.html Conor McBride and Ross Paterson invent/explain a new type class that is now part of the base package (Control.Applicative). They also use/propose syntactic sugar for it, i.e.

  pure f <*> u1 <*> ... <*> un

~~>

  (| f u1 ... un |)

(I just made up the symbols '(|' and '|)', the concrete syntax would have to be fixed by people more knowledgeable than me.)

As to the pros and cons of the (<- action) proposal, I think everything has been said. I'd vote for giving IdiomBrackets and/or LambdaCase a chance for being implemented, too, so we can try and evaluate different ways to simplify monadic code.

One reason why I like IdiomBrackets is that they are more generally applicable (no pun intended ;-) i.e. they would work not just for Monads but for anything in Applicative. (Of course, that is also their weakness.) Similarly, LambdaCase has more applications than just simplifying monadic code by avoiding the need to name an intermediate result.

Cheers
Ben
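(For comparison, this is what the same kind of lifting looks like with the Control.Applicative combinators and no extra sugar; Config and readPort are invented for the example:)

    import Control.Applicative ((<$>), (<*>))

    data Config = Config { host :: String, port :: Int }
      deriving Show

    readPort :: IO Int
    readPort = fmap read getLine

    -- a pure constructor lifted over two IO actions, applicative-style
    readConfig :: IO Config
    readConfig = Config <$> getLine <*> readPort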

There's been lots of interesting stuff on this thread. Does anyone feel up to summarizing it on a Wiki page, for others to polish? At least part of that page should comprise a compact specification of what the (<- ) proposal is; but there have been lots of other suggestions. Otherwise it'll all get submerged by next month's excitements.

Simon

Benjamin Franksen wrote:
As has been already mentioned in this thread, in http://www.soi.city.ac.uk/~ross/papers/Applicative.html Conor McBride and Ross Paterson invent/explain a new type class that is now part of the base package (Control.Applicative). They also use/propose syntactic sugar for it, i.e.
pure f <*> u1 <*> ... <*> un
~~> (| f u1 ... un |)
(I just made up the symbols '(|' and '|)', the concrete syntax would have to be fixed by people more knowledgeable than me.)
The problem with [| and |] lifted to monads is that this only works for fully applied arguments, i.e.

  handle :: IO Handle
  string :: IO String

  [| hPutStr handle string |] :: IO ()

works, but

  [| hPutStr handle |]
    = join (return hPutStr `ap` handle)
   ^= join ((f :: m (a -> b -> m c)) `ap` (x :: m a))
    = join ( y :: m (b -> m c) )

is a type error.

I think this is also what makes the (<- action) proposal so non-local and what is the source of this whole discussion. The core problem is: functions like a -> b -> m c can't be partially applied to monadic actions like m a without specifying the number of arguments in advance. In other words, such functions aren't curried correctly. Clearly, liftMn specifies the number of arguments. But _both_ the (<-) proposal and idiom brackets specify the number of arguments too! Namely by requiring that all arguments are fully applied. So, neither one is capable of partially applying the first argument without saturating the call; you can only write

  handle :: IO Handle

  -- define putStr in terms of the above hPutStr
  putStr :: String -> IO ()
  putStr = \x -> [| hPutStr handle (return x) |]
  putStr = \x -> do { hPutStr (<- handle) x }

One way to get currying for monads is to work with functions

  m a -> m b -> m c

However, this type is larger than a -> b -> m c, i.e. the function

  from :: Monad m => (a -> b -> m c) -> (m a -> m b -> m c)
  from f ma mb = ma >>= \a -> mb >>= \b -> f a b

is not surjective since we could perform the actions in a different order

  from2 f ma mb = mb >>= \b -> ma >>= \a -> f a b

In other words, if someone gives you a value of type m a -> m b -> m c, then you can't convert it to a -> b -> m c and back without risking that you end up with a different result.

But there is another type suitable for currying

  m (a -> m (b -> m c))

which I believe is in some way equivalent to a -> b -> m c

  from :: Monad m => (a -> b -> m c) -> m (a -> m (b -> m c))
  from f = return $ \a -> return $ \b -> f a b

  to :: Monad m => m (a -> m (b -> m c)) -> (a -> b -> m c)
  to f a b = f >>= \g -> g a >>= \h -> h b

but I'm not sure. My assumption is that we have an equivalence

  forall a,b . m (a -> m b) ~ (a -> m b)

because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied".

Anyway, here's how to curry with that type:

  (@) :: Monad m => m (a -> m b) -> (m a -> m b)
  (@) f x = join (f `ap` x)

  hPutStr :: IO (Handle -> IO (String -> IO ()))
  handle  :: IO Handle

  putStr :: IO (String -> IO ())
  putStr = hPutStr @ handle

With the infix type synonym

  type (~>) a b = a -> IO b

we can also write

  hPutStr :: IO (Handle ~> String ~> ())
  putStr  :: IO (String ~> ())

This is of course the Kleisli-Arrow which explains why currying works.

Regards,
apfelmus
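(That (@) can be tried out directly; since @ is reserved syntax in Haskell the sketch below spells it (@@), and hPutStrC/main are invented for the example:)

    import Control.Monad (ap, join)
    import System.IO (Handle, hPutStr, stdout)

    -- the monadic application operator from the post, under a legal name
    (@@) :: Monad m => m (a -> m b) -> m a -> m b
    f @@ x = join (f `ap` x)

    -- hPutStr re-curried so that each argument can itself be an action
    hPutStrC :: IO (Handle -> IO (String -> IO ()))
    hPutStrC = return $ \h -> return $ \s -> hPutStr h s

    main :: IO ()
    main = do
      put <- hPutStrC @@ return stdout   -- partially applied: IO (String -> IO ())
      put "hello, "
      put "world\n"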

On Mon, Aug 13, 2007 at 04:35:12PM +0200, apfelmus wrote:
My assumption is that we have an equivalence
forall a,b . m (a -> m b) ~ (a -> m b)
because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied"
  (\a x -> a >>= ($ x)) ((\f -> return f) X)
    ==> (β)
  (\a x -> a >>= ($ x)) (return X)
    ==> (β)
  (\x -> (return X) >>= ($ x))
    ==> (monad law)
  (\x -> ($ x) X)
    ==> (β on the sugar-hidden 'flip')
  (\x -> X x)
    ==> (η)
  X

Up to subtle strictness bugs arising from my use of η :), you're safe.

Stefan

Stefan O'Rear schrieb:
On Mon, Aug 13, 2007 at 04:35:12PM +0200, apfelmus wrote:
My assumption is that we have an equivalence
forall a,b . m (a -> m b) ~ (a -> m b)
because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied"
(\a x -> a >>= ($ x)) ((\f -> return f) X)
  ==> (β)
(\a x -> a >>= ($ x)) (return X)
  ==> (β)
(\x -> (return X) >>= ($ x))
  ==> (monad law)
(\x -> ($ x) X)
  ==> (β on the sugar-hidden 'flip')
(\x -> X x)
  ==> (η)
X
Up to subtle strictness bugs arising from my use of η :), you're safe.
Yes, but that's only one direction :) The other one is the problem:

  return . (\f x -> f >>= ($ x))  =?=  id

Here's a counterexample

  f :: IO (a -> IO a)
  f = writeAHaskellProgram >> return return

  f' :: IO (a -> IO a)
  f' = return $ (\f x -> f >>= ($ x)) $ f
    ==> (β)
       return $ \x -> (writeAHaskellProgram >> return return) >>= ($ x)
    ==> (BIND)
       return $ \x -> writeAHaskellProgram >> (return return >>= ($ x))
    ==> (LUNIT)
       return $ \x -> writeAHaskellProgram >> (($ x) return)
    ==> (β)
       return $ \x -> writeAHaskellProgram >> return x

Those two are different, because

  clever  = f  >> return () = writeAHaskellProgram
  clever' = f' >> return () = return ()

are clearly different ;)

Regards,
apfelmus
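(The counterexample can be run by standing in an observable effect for the hypothetical writeAHaskellProgram; a throwaway sketch:)

    -- stand-in for the hypothetical writeAHaskellProgram
    writeAHaskellProgram :: IO ()
    writeAHaskellProgram = putStrLn "main = putStrLn \"hello\""

    f, f' :: IO (a -> IO a)
    f  = writeAHaskellProgram >> return return
    f' = return $ \x -> f >>= ($ x)     -- f pushed through the two mappings

    clever, clever' :: IO ()
    clever  = f  >> return ()   -- prints the program once
    clever' = f' >> return ()   -- prints nothing

    main :: IO ()
    main = clever >> clever'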

On Mon, Aug 13, 2007 at 05:39:34PM +0200, apfelmus wrote:
Stefan O'Rear schrieb:
On Mon, Aug 13, 2007 at 04:35:12PM +0200, apfelmus wrote:
My assumption is that we have an equivalence
forall a,b . m (a -> m b) ~ (a -> m b)
because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied"

  (\a x -> a >>= ($ x)) ((\f -> return f) X)
    ==> (β)
  (\a x -> a >>= ($ x)) (return X)
    ==> (β)
  (\x -> (return X) >>= ($ x))
    ==> (monad law)
  (\x -> ($ x) X)
    ==> (β on the sugar-hidden 'flip')
  (\x -> X x)
    ==> (η)
  X

Up to subtle strictness bugs arising from my use of η :), you're safe.
Yes, but that's only one direction :) The other one is the problem:
return . (\f x -> f >>= ($ x)) =?= id
Here's a counterexample
f :: IO (a -> IO a)
f = writeAHaskellProgram >> return return
f' :: IO (a -> IO a)
f' = return $ (\f x -> f >>= ($ x)) $ f
  ==> (β)
     return $ \x -> (writeAHaskellProgram >> return return) >>= ($ x)
  ==> (BIND)
     return $ \x -> writeAHaskellProgram >> (return return >>= ($ x))
  ==> (LUNIT)
     return $ \x -> writeAHaskellProgram >> (($ x) return)
  ==> (β)
     return $ \x -> writeAHaskellProgram >> return x
Those two are different, because
clever  = f  >> return () = writeAHaskellProgram
clever' = f' >> return () = return ()
are clearly different ;)
I figured that wouldn't be a problem since our values don't escape, and the functions we define all respect the embedding... More formally:

Projections and injections:

  proj ma = \x -> ma >>= \ fn' -> fn' x
  inj fn  = return fn

Define an equivalence relation:

  ma ≡ mb <-> proj ma = proj mb

Projection respects equivalence:

  ma ≡ mb -> proj ma = proj mb                          (intro ->)
  ma ≡ mb => proj ma = proj mb                          (equiv def)
  proj ma = proj mb => proj ma = proj mb                (assumption)

Application:

  (@) ma1 = \x -> join (proj ma1 x)

Application respects equivalence:

  ma1 ≡ ma2 -> (@) ma1 = (@) ma2                        (intro ->)
  ma1 ≡ ma2 => (@) ma1 = (@) ma2                        (β)
  ma1 ≡ ma2 => (\x -> join (proj ma1 x)) = (\x -> join (proj ma2 x))
                                                        (extensionality)
  ma1 ≡ ma2 => join (proj ma1 x) = join (proj ma2 x)    (application respects = left)
  ma1 ≡ ma2 => proj ma1 x = proj ma2 x                  (application respects = right)
  ma1 = ma2 => proj ma1 = proj ma2                      (lemma)

Stefan

apfelmus wrote:
My assumption is that we have an equivalence
forall a,b . m (a -> m b) ~ (a -> m b)
because any side effect executed by the extra m on the outside can well be delayed until we are supplied a value a. Well, at least when all arguments are fully applied, for some notion of "fully applied"
I figured that wouldn't be a problem since our values don't escape, and the functions we define all respect the embedding... More formally:
Projections and injections:
Stefan O'Rear wrote: proj :: Monad m => m (a -> m b) -> (a -> m b)
proj ma = \x -> ma >>= \fn' -> fn' x
inj fn  = return fn
Define an equivalence relation:
ma ≡ mb <-> proj ma = proj mb
Projection respects equivalence:
ma ≡ mb -> proj ma = proj mb                (intro ->)
ma ≡ mb => proj ma = proj mb                (equiv def)
proj ma = proj mb => proj ma = proj mb      (assumption)
Application respects equivalence:
Yeah, that's the right approach, but it has a few typos. The correct version is

  (@) :: Monad m => m (a -> m b) -> m a -> m b
  (@) ma = \x -> x >>= proj ma

Formulating (@) in terms of proj ma is a very clever idea since it follows immediately that

  ma @ x = ma' @ x   iff   proj ma = proj ma'   iff   ma ≡ ma'

In other words, when viewed through @ and proj only, equivalent actions give equivalent results. The main point is that this does not hold for the other curry-friendly type m a -> m b

  proj :: Monad m => (m a -> m b) -> (a -> m b)
  proj f = f . return

  (@) :: Monad m => (m a -> m b) -> m a -> m b
  (@) = id

  ma ≡ ma'   iff   proj ma = proj ma'

since those functions may execute their argument multiple times. So, here's the counterexample

  once :: Monad m => m A -> m A
  once = id

  twice :: Monad m => m A -> m A
  twice x = x >> once x

Now, we have

  proj once = return = proj twice

but

  effect :: IO ()   -- a given effect
  effect = putStrLn "Kilroy was here!"

  once @ effect  = effect
     ≠
  twice @ effect = effect >> effect

The same can be done for several arguments, along the lines of

  proj2 :: m (a -> m (b -> m c)) -> (a -> b -> m c)
  proj2 f = proj . (proj f)

  app2 :: m (a -> m (b -> m c)) -> (m a -> m b -> m c)
  app2 f mx my = (f @ mx) @ my
               = my >>= proj (mx >>= proj f)
               = my >>= \y -> mx >>= \x -> proj2 f x y

and similar results.

Regards,
apfelmus
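(Run directly, with A fixed to () and @ replaced by plain application, since this (@) is just id; a quick sketch:)

    once, twice :: Monad m => m () -> m ()
    once    = id
    twice x = x >> once x

    effect :: IO ()
    effect = putStrLn "Kilroy was here!"

    main :: IO ()
    main = do
      once effect    -- prints the line once
      twice effect   -- prints it twice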

David Roundy wrote:
The only cost is that this syntax relies on the do notation, and thus makes the desugaring of that do notation slightly more complicated when used.
If I understand correctly,

  do blah
     f (do foo
           bar
           (<- action)
       )
     blah

has an ambiguity: which do-block is the action bound in? I can easily imagine myself being frustrated at having to refactor my code if the defined answer is not the one I want at the moment.

Isaac

On Mon, Aug 13, 2007 at 01:27:45PM -0300, Isaac Dupree wrote:
David Roundy wrote:
The only cost is that this syntax relies on the do notation, and thus makes the desugaring of that do notation slightly more complicated when used.
If I understand correctly,
do blah
   f (do foo
         bar
         (<- action)
     )
   blah
has an ambiguity: which do-block is the action bound in? I can easily imagine myself being frustrated at having to refactor my code if the defined answer is not the one I want at the moment.
It doesn't have an ambiguity, because it's defined to be bound in the innermost do block. This isn't a new concept; the <- syntax in the existing do notation has the same behavior.

--
David Roundy
Department of Physics Oregon State University

Paul Hudak wrote:
All of the recent talk of support for imperative programming in Haskell makes me really nervous....
... if we give imperative programmers the tools to do all the things they are used to doing in C++, then we will be depriving them of the joys of programming in the Functional Way. How many times have we seen responses to newbie posts along the lines of, "That's how you'd do it in C++, but in Haskell here's a better way...".

Perhaps I may have sounded unappreciative of all the hard work that's been done trying to find solutions to the problem of GUI programming in a pure functional style. I think the problem is that for the purposes of research, it is sufficient to show that a concept can be implemented, but the speed of the resulting program is not that important compared to the essential ideas.
Thus the concept of a GUI as a time varying continuous function of events and response pictures (represented as functions :: Vector3 Float -> RGB) is tremendously appealing, and will surely become the standard way when machines get fast enough, but for the moment this nice pure functional way just doesn't seem directly applicable (;-)).

I see the problem you're pointing to though - that the language could become caught in the middle, trying to serve two rather different purposes, namely a pure ground for research and a fast general purpose platform for creating programs now.

As for me, the issue is just that after spending almost 2 years with Haskell trying to find/discover a purely functional solution to this problem that will be suitable for a practical high speed graphics application on standard hardware, being unsuccessful, and not having any funding to pursue this research further, my only option is to either use the imperative monadic style of Haskell programming or to use OCaml or C++, because I need to get something written right now that I can put on my website to sell...

Personally I don't find the existing do notation all that burdensome - I've got used to it, and even though each action must be explicitly written, the ease with which higher order functions can be introduced to factor out commonality means that the do blocks are often shorter than the corresponding C++ code from my previous GUI. Furthermore, even though I'm using the imperative style, the use of Haskell has led me to surprisingly neat solutions to several problems that I didn't discover when I implemented the previous versions in C++ and C#, so I think there are still great benefits to be had from using Haskell or languages inspired by it.

Best regards, Brian.

Hello Paul, Wednesday, August 8, 2007, 10:20:39 PM, you wrote:
we need). On the other hand, if we give imperative programmers the tools to do all the things they are used to doing in C++, then we will be depriving them of the joys of programming in the Functional Way. How many times have we seen responses to newbie posts along the lines of, "That's how you'd do it in C++, but in Haskell here's a better way...".
Well, have you ever programmed a real world application? I need such a design for the cases where the functional approach applies - and of course, the functional approach will be much nicer in those cases. It is needed for the situations where I am forced to write an imperative program due to some environment limitations.

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com

For what it's worth from a Haskell newbie (and from someone who's been doing
FP since November, mainly in Scala.)
I really like Haskell's purity and having the clear separation between zero
side effects and monads is most excellent.
It was quite a brain change to program functionally. It took a lot of work
and a lot of discipline. In Scala, I set myself the goal of not having any
variables (except as instance variables of a very limited number of
classes... Scala doesn't support monads), but to only use single-assignment
values. At first, it was really hard to think in a new way. Now, I find
that, even when I write Java code, I write in a functional style.
The benefit for me is reducing the number of moving parts as well as forcing
me to do significantly more design work up front. Now, the OO ideal was to
model ones class hierarchy and the messages and the code will all flow
automagically. In my 17 years, dozens of commercial applications, and 1M+
LoC of OO programming, it's never worked that way for me.
On the other hand, programming in a state-minimized (or state-free) way
makes me work a lot more to define my types and how the types interact with
each other. I find that I'm spending a lot more time "up front" piecing the
types together. I am spending no time worrying about hidden state (gee, if
I call X before I call Y, the state will not be set up, so I have to
shoe-horn some sort of test to make sure that the state is set up
correctly.)
I also find that my code is shorter and less dense at the same time. The
"what" part of my code is easier to see because filter/map/zip constructs
are a lot less distracting than "new array/for/if/..." constructs.
The proof is in the output for me. My web framework (http://liftweb.net)
and the commercial product that my team is building with Scala (
http://www.circleshare.com) have been remarkably stable and low in bugs. And
the bugs have by and large been "logic" bugs rather than "changing X which
caused a bug in Z because the state was wrong" bugs. The code bases are
large enough, that I'd normally be expecting to see breakage from unexpected
side effects from code changes. That hasn't started happening.
Part of the challenge that Haskell and Scala and the other FP languages face
is the pain developers face as they change the way they approach and solve
problems. Based on my 28 years of professional coding, I think that FP is
the single best and single most important technology that I've invested my
time in. I think that Haskell's brand of purity is hyper-important and will
allow for assembly of significantly more complex systems than will any other
technology that I've seen.
Please, keep to the vision. The vision is powerful, inspiring, and I
believe correct.
Thanks,
David
-- lift, the fast, powerful, easy web framework http://liftweb.net
participants (21)

- apfelmus
- Benjamin Franksen
- Brandon Michael Moore
- Brian Hulley
- Bulat Ziganshin
- David Menendez
- David Pollak
- David Roundy
- Donn Cave
- dons@cse.unsw.edu.au
- Gene A
- Hugh Perkins
- Isaac Dupree
- Jules Bean
- Michael Vanier
- ok
- Paul Hudak
- peterv
- Philippa Cowderoy
- Simon Peyton-Jones
- Stefan O'Rear