
I've heard Simon (Peyton-Jones) twice now mention the desire to be able to embed a monadic subexpression into a monad. That would be http://article.gmane.org/gmane.comp.lang.haskell.prime/2267 and in the recent OSCON video. Is someone working on implementing this? If no, I'll take a crack at it. If yes, I'd be slower than someone else, since I'm new to GHC.

This seems like something a tad easier than type system extensions and the like since it's just desugaring... but a little harder than my "remove the GHCi banner" patch! In other words, a perfect step for me. Also, I got so frustrated that I ended up abandoning some code recently because STM is, in the end, so darn hard to use as a result of this issue. I'd love to see this solved, and I'm quite eager to do it.

Proposals for syntax I've seen include:

$( expr )   -- conflicts with template haskell
( <- expr ) -- makes sense, and I think it's unambiguous

Other ideas:

``expr``    -- back-ticks make sense for UNIX shell scripters
(| expr |)  -- I don't think anything uses this yet

Thoughts?
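
(To make the desugaring concrete, using the ( <- expr ) spelling and invented fresh names: something like

do f (<- getChar) (<- getLine)

would become

do tmp1 <- getChar
   tmp2 <- getLine
   f tmp1 tmp2

with the marked subexpressions floated out, left to right, to just before the enclosing statement.)

-- Chris Smith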

Hi Chris,

Simon mentioned this to me as a possible project when I started my internship here at MSR, so I'm pretty sure this is both on the wish-list and not already taken (but we should check with Simon to make sure). I've since wished for it a few times as I've been implementing view patterns, so I personally think it would be a great thing for you to implement! If you're interested in doing this, I'd be happy to give you an overview of what pieces of GHC you'll need to touch and to answer your questions (as best I can!) as you work on the implementation. I've gotten to know the front end of GHC a little over the past few weeks.

Let me know,
-Dan

Hi Chris,
I've heard Simon (Peyton-Jones) twice now mention the desire to be able to embed a monadic subexpression into a monad.
I think this is a fantastic idea, please do so!
$( expr )   -- conflicts with template haskell
( <- expr ) -- makes sense, and I think it's unambiguous
Other ideas:
``expr``    -- back-ticks make sense for UNIX shell scripters
(| expr |)  -- I don't think anything uses this yet
This final (| one |) looks way too much like template haskell; it has the feel of template haskell, even if it isn't yet in the syntax.

Your (<- proposal) feels a bit like an operator section - I'm not sure if that is a good thing or a bad thing, but for some reason it feels slightly clunky and high-syntax-overhead, perhaps because of the inevitable space between the <- and expr, and because ()<- are all fairly high semantic value currently in Haskell, while this extension should blend in, rather than stand out.

The `` syntax is clever, and I like it, but I worry that it's quite a long way from the current use of ` as infix, although I'm not sure if that is a particular issue given that - (negation/subtraction) and -- (comment) couldn't be more different.

Thanks

Neil

Neil Mitchell
I think this is a fantastic idea, please do so!
Okay, I'll do it then. If I have a good weekend, perhaps I'll volunteer a talk at AngloHaskell after all! :)

So what about syntax? I agree with your objections, so we've got

( <- expr ) -- makes sense, and I think it's unambiguous
``expr``    -- back-ticks make sense for UNIX shell scripters

The first is something Simon Peyton-Jones came up with (probably on-the-fly) at OSCON, and I rather like it a lot; but I'm concerned about ambiguity. The latter seems sensible as well. Any other ideas?

-- Chris Smith

On Thu, 2007-08-02 at 21:29 -0600, Chris Smith wrote:
Neil Mitchell
wrote: I think this is a fantastic idea, please do so!
Okay, I'll do it then. If I have a good weekend, perhaps I'll volunteer a talk at AngloHaskell after all! :)
So what about syntax? I agree with your objections, so we've got
( <- expr ) -- makes sense, and I think it's unambiguous
``expr``    -- back-ticks make sense for UNIX shell scripters
The first is something Simon Peyton-Jones came up with (probably on-the-fly) at OSCON, and I rather like it a lot; but I'm concerned about ambiguity. The latter seems sensible as well. Any other ideas?
The latter is not sensible to me at all. It doesn't nest well. Neither does the former for that matter, but it forces parenthesizing. You will find that being clear on nesting is very important.

Derek Elkins
( <- expr ) -- makes sense, and I think it's unambiguous
``expr``    -- back-ticks make sense for UNIX shell scripters
The latter is not sensible to me at all. It doesn't nest well.
Ah, excellent point! Okay, it's gone then. Everything will then need some kind of bracketing -- (), [], or {}. I dislike [] out of hand, simply because this has nothing to do with lists.
Neither does the former for that matter, but it forces parenthesizing.
I'm unclear on whether you still have an objection, given that yes it does force parenthesizing. -- Chris Smith

See also this thread
http://www.haskell.org/pipermail/haskell-prime/2007-July/002269.html
Magnus made a TH library that does something similar, see
http://www.haskell.org/pipermail/haskell-prime/2007-July/002275.html
Nesting is important. Consider
do { a <- f x
; b <- g a
; return (2*b) }
Then you'd like to linearise this to give
do { return (2 * $(g $(f x))) }
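
Going the other way, the splice form would have to desugar inside-out back to something like the original (a sketch, with fresh temporaries):

do { tmp1 <- f x
   ; tmp2 <- g tmp1
   ; return (2 * tmp2) }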
The hardest thing about this project is finding a suitable syntax! You can't use the same syntax as TH, but it does have a "splice-like" flavour, so something similar would make sense. $[ thing ] perhaps? Or %( thing )? Avoid anything that looks like a TH *quotation* because that suggests the wrong thing. (| thing |) is bad.
A good plan can be to start a Wiki page that describes the problem, then the proposed extension, gives lots of examples, etc.
Simon

Neil Mitchell wrote:
Hi Chris,
I've heard Simon (Peyton-Jones) twice now mention the desire to be able to embed a monadic subexpression into a monad.
I think this is a fantastic idea, please do so!
$( expr )   -- conflicts with template haskell
( <- expr ) -- makes sense, and I think it's unambiguous
Other ideas:
``expr``    -- back-ticks make sense for UNIX shell scripters
(| expr |)  -- I don't think anything uses this yet
This final (| one |) looks way too much like template haskell; it has the feel of template haskell, even if it isn't yet in the syntax. Your (<- proposal) feels a bit like an operator section - I'm not sure if that is a good thing or a bad thing, but for some reason it feels slightly clunky and high-syntax-overhead, perhaps because of the inevitable space between the <- and expr, and because ()<- are all fairly high semantic value currently in Haskell, while this extension should blend in, rather than stand out.
I'm not sure I agree with Neil's misgivings. Certainly <- already has a high semantic value, but this is a very closely related notion, so I see that as consistent. As for the (), well as far as I know they only have two meanings: grouping and tupling. This seems like a special case of grouping to me. E.g.:

do a <- m
   b <- n
   l a x b y

becomes

l (<- m) x (<- n) y

...with, I suppose, left-to-right evaluation order. This looks 'almost like substitution' which is the goal.

Jules

Jules Bean wrote:
do a <- m
   b <- n
   l a x b y
becomes
l (<- m) x (<- n) y
...with, I suppose, left-to-right evaluation order. This looks 'almost like substitution' which is the goal.
Having read the thread SPJ pointed to, I should point out that using a mixture of Applicative and Monad notation, this can currently be written as:

join (l <$> m <*> return x <*> n <*> return y)

...where the thing that feels weirdest is having to remember to use join for the final 'application' instead of just another <*>.

Jules

Jules Bean wrote:
do a <- m
   b <- n
   l a x b y
becomes
l (<- m) x (<- n) y
...with, I suppose, left-to-right evaluation order. This looks 'almost like substitution' which is the goal.
Almost? So then

(flip f) (<- m) (<- n)

does *not* equal

f (<- n) (<- m)

? There goes any hope of my understanding future Haskell code. (<- n) sure looks like an operator section to me, and more importantly a first class Haskell object. What human parsing this would not see a mere function application? And I guess this makes the following complete nonsense:

do let a = (<- m)
   let b = (<- n)
   l a x b y

What about

do let (b,a) = ((<- n),(<- m))
   -- many lines of code
   l a x b y

Who can say that b was evaluated before a? I hope the language syntax does not evolve beyond my merely mortal ability to desugar it?

Dan Weston

On 8/3/07, Jules Bean
do a <- m
   b <- n
   l a x b y
becomes
l (<- m) x (<- n) y
Couldn't this be best done with McBride and Paterson's Applicative idiom notation? So the above would become

[[l m (pure x) n (pure y)]]

(or something like that) It would have the advantage of being usable with any Applicative, not just Monads.

-- Dan

Dan Piponi wrote:
On 8/3/07, Jules Bean
wrote:

do a <- m
   b <- n
   l a x b y
becomes
l (<- m) x (<- n) y
Couldn't this be best done with McBride and Paterson's Applicative idiom notation?
So the above would become
[[l m (pure x) n (pure y)]] (or something like that)
It would have the advantage of being usable with any Applicative, not just Monads.
Well that's exactly the kind of discussion I was trying to generate. And I did give an applicative version when I replied to myself (although not admittedly full scale idiom brackets) Jules

| Couldn't this be best done with McBride and Paterson's Applicative
| idiom notation?
|
| So the above would become
|
| [[l m (pure x) n (pure y)]] (or something like that)
|
| It would have the advantage of being usable with any Applicative, not
| just Monads.

Does anyone have a pointer to a stand-alone description of "full-scale idiom notation"?

S

Simon Peyton-Jones wrote:
Does anyone have a pointer to a stand-alone description of "full-scale idiom notation"?
http://www.haskell.org/haskellwiki/Idiom_brackets

I think I've seen something more detailed but I don't know if it was in one of Conor's papers, or if it was personal conversation/seminar...

Jules

On 8/3/07, Simon Peyton-Jones
| Couldn't this be best done with McBride and Paterson's Applicative
| idiom notation?
Does anyone have a pointer to a stand-alone description of "full-scale idiom notation". S
The full paper is here: http://www.cs.nott.ac.uk/~ctm/Idiom.pdf

Is that what you want? It would be sweet to have the generality of Applicatives. I find the examples of vectorised arithmetic and expression evaluators in that paper quite compelling, besides the use of Applicatives as an alternative way to talk to monads.
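
(For reference, the brackets are just sugar for Applicative application; [[ f a b c ]] unfolds to

pure f <*> a <*> b <*> c

so they make sense for any Applicative, not just monads.)

-- Dan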

Chris Smith wrote:
I've heard Simon (Peyton-Jones) twice now mention the desire to be able to embed a monadic subexpression into a monad. That would be http://article.gmane.org/gmane.comp.lang.haskell.prime/2267 and in the recent OSCON video.
I still think that this syntax extension has profound impact and is a bad idea. Simon's and Neil's use case was the dreaded name-supply monad where the order of effects really doesn't matter up to alpha-conversion. The objection to that use case is that monads are not the right abstraction for that, they're too general. Also, a workaround is to lift functions

f :: a -> b -> m c
g :: d -> m b

to

f' :: m a -> m b -> m c
g' :: m d -> m b

and thus flip the need for argument sugar

f $(g x) y   VS   f' (g' (r$ x)) (r$ y)

With r = return, the latter is Haskell98. See also http://thread.gmane.org/gmane.comp.lang.haskell.prime/2263/focus=2267
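
(Such liftings are one-liners; a sketch, with f and g as above:

f' ma mb = do { a <- ma; b <- mb; f a b }
g' md    = md >>= g

)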
Also, I got so frustrated that I ended up abandoning some code recently because STM is, in the end, so darn hard to use as a result of this issue. I'd love to see this solved, and I'm quite eager to do it.
This sounds suspicious, since the order of effects is of course important in the STM monad. Can you post an example of code you intend to abandon due to ugliness? I'd be astonished if there's no better way to write it. Regards, apfelmus

| > I've heard Simon (Peyton-Jones) twice now mention the desire to be able
| > to embed a monadic subexpression into a monad. That would be
| > http://article.gmane.org/gmane.comp.lang.haskell.prime/2267 and in the
| > recent OSCON video.
|
| I still think that this syntax extension has profound impact and is a
| bad idea. Simon's and Neil's use case was the dreaded name-supply monad
| where the order of effects really doesn't matter up to alpha-conversion.
| The objection to that use case is that monads are not the right
| abstraction for that, they're too general

Just for the record, I am not arguing that this is the Right Thing; I am quite agnostic about it. But the status quo doesn't seem that great either, and I'm all for experimentation. Same goes for view patterns and record wildcards, for example.

Simon

rewrite *p++=*q++ in haskell?
I always reject such code when produced by my students. It is just unreadable. I even do not understand what you are trying to achieve. However, gcc seems to compile it to something like

*p = *(p+1) ; *q = *(q+1)

But what is the '=' good for? So rewriting it in Haskell (of any size) is a good idea to actually understand the code. Please, could you do it?

/BR -- -- Mirko Rahn -- Tel +49-721 608 7504 -- --- http://liinwww.ira.uka.de/~rahn/ ---

Hello Mirko, Friday, August 3, 2007, 3:32:57 PM, you wrote:
rewrite *p++=*q++ in haskell?
I always reject such codes when produced by my students. It is just unreadable.
it's one of C idioms. probably, you don't have enough C experience to understand it :)
So rewriting it in Haskell (of any size) is a good idea to actually understand the code. Please, could you do it.
result is that currently C code rewritten in Haskell becomes much larger and less readable. if you think that readIORef is more readable than *, and

x <- readIORef v; writeIORef v (x+1)

is more readable than ++ - it's up to you :)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

rewrite *p++=*q++ in haskell?
it's one of C idioms. probably, you don't have enough C experience to understand it :)
Maybe, but how can *you* understand it, when the standard is vague about it? It could be

A: *p=*q; p+=1; q+=1;
B: *p=*q; q+=1; p+=1;
C: tp=p; tq=q; p+=1; q+=1; *tp=*tq;

...and so on. Which is the "right" version?
result is that currently C code rewritten in Haskell becomes much larger and less readable.
Larger should not be that much of an issue, and readability depends on the reader, as your C example shows. Some Haskellers would very quickly recognize some common idioms, where others need some help...

/BR -- -- Mirko Rahn -- Tel +49-721 608 7504 -- --- http://liinwww.ira.uka.de/~rahn/ ---

Hi
Perhaps we need to cool this thread down a little bit, and refocus. I
personally choose never to use ++ as anything but a statement, since
my brain works that way. Other people find different things natural,
so can pick what they choose. The one thing you can guarantee is that
discussing it isn't going to result in anyone changing their opinion!
The thread started out on monad subexpressions, with request for
helpful thoughts as to what could be done with them, and how we can
treat them syntactically. Does anyone have any further thoughts on the
syntax? We started with 4 suggestions, and as far as I can tell, are
left with only one (<- ...). This is the time for people to have new
and clever thoughts, and possibly shape the future of (what I think)
will be a very commonly used Haskell syntax.
For the record, my comments on (<- ...) were not objections, but
merely "thoughts out loud", and I could certainly see myself using
that syntax on a day-to-day basis.
Thanks
Neil

Hi
Thinking on the semantic issue for the moment:
Can you use (<-) outside of a do block?
b >> f (<- a)
What are the semantics of
do b >> f (<- a)
where does the evaluation of a get lifted to?
Given:
if (<- a) then f (<- b) else g (<- c)
Do b and c both get monadic bindings regardless of a?
if (<- a) then do f (<- b) else g (<- c)
Does this change to make b bound inside the then, but c bound outside?
Does this then violate the rule that do x == x
Can you combine let and do?
do let x = (<- a)
f x
Our "best guess" is that all monadic bindings get floated to the
previous line of the innermost do block, in left-to-right order.
Monadic expressions in let statements are allowed. Outside a do block,
monadic subexpressions are banned.
Despite all these complications, it's still a great idea, and would be
lovely to have!
Thanks
Neil and Tom

Neil Mitchell
Thinking on the semantic issue for the moment:
Can you use (<-) outside of a do block?
Good question, but my answer is a strong no! Syntactic sugar for monads has always been tied to do blocks; promoting it outside of contexts where "do" announces that you'll be using syntactic sugar seems like a very bad idea.
do b >> f (<- a)
where does the evaluation of a get lifted to?
I think it's rather clear that a gets moved before b. The example is confusing because the code is bad; not because of any new problems with this proposal.
Given:
if (<- a) then f (<- b) else g (<- c)
Do b and c both get monadic bindings regardless of a?
This is tougher, but I'd say yes. In this case, you've chosen not to give "then" and "else" clauses their own do block, so this would evaluate both. Certainly if/then could be made a special case... but it would be exactly that. i.e., if I did this:

cond b th el = if b then th else el

do cond (<- a) (f (<- b)) (g (<- c))

Then you'd lose. And the fact that you'd still lose there makes me less than thrilled to mislead people by special-casing if/then/else. When something is dangerous, it should be labelled as such and loudly talked about; not covered up in the hopes that no one will dig deep enough to hurt themselves.
if (<- a) then do f (<- b) else g (<- c)
Does this change to make b bound inside the then, but c bound outside? Does this then violate the rule that do x == x
Then yes, it would.
Can you combine let and do?
do let x = (<- a)
   f x
Right. In effect, as a matter of fact, the notation

x <- a

would become equivalent to

let x = (<- a)
Our "best guess" is that all monadic bindings get floated to the previous line of the innermost do block, in left-to-right order. Monadic expressions in let statements are allowed. Outside a do block, monadic subexpressions are banned.
Sure. SPJ mentioned that you wouldn't promote (<- x) past a lambda. I'm not convinced (it seems to fall into the same category as the if statement), but it's worth considering. -- Chris Smith

Hi
Can you combine let and do?
do let x = (<- a)
   f x
Right. In effect, as a matter of fact, the notation
x <- a
would become equivalent to
let x = (<- a)
Hmm, interesting. Consider:

let x = 12
let x = (<- x)

Currently, in let x = ... the x is in scope on the right hand side. Now it isn't. Changing the order of evaluation with syntactic sugar seems fine; changing the lexical scoping seems nasty. Perhaps this is a reason to disallow monadic expressions in a let.
Our "best guess" is that all monadic bindings get floated to the previous line of the innermost do block, in left-to-right order. Monadic expressions in let statements are allowed. Outside a do block, monadic subexpressions are banned.
Sure. SPJ mentioned that you wouldn't promote (<- x) past a lambda. I'm not convinced (it seems to fall into the same category as the if statement), but it's worth considering.
I'm not convinced either, a nice concrete example would let people ponder this a bit more. What is nice to note is that all your answers to my questions matched perfectly with what I thought should happen. Thanks Neil

Neil Mitchell
Right. In effect, as a matter of fact, the notation
x <- a
would become equivalent to
let x = (<- a)
Hmm, interesting. Consider:
let x = 12
let x = (<- x)
Okay, so the desugaring process wouldn't terminate in that case! One could either: (a) try to retain the equivalence in theory, but make it illegal to use x in a monadic subexpression when defining x; (b) we could abandon my claim that they are equivalent.
I'm not convinced either, a nice concrete example would let people ponder this a bit more.
I tried to provide something in my response to Simon. Here it is again: One could sugar:

do tax <- getTax
   return $ map (\price -> price * (1 + tax)) bill

into:

do return $ map (\price -> price * (1 + (<- getTax))) bill
What is nice to note is that all your answers to my questions matched perfectly with what I thought should happen.
That is nice. I'm still very uncomfortable with the <- syntax (a complete flip for me since this morning!); and a little uneasy about the use of case, if, lambdas, etc. Time to keep thinking, I guess. I'd like to take Simon's suggestion and do a wiki page about this; but it should probably be on the Haskell prime wiki, no? I'm not entirely clear on how to get an account there. I could add it to HaskellWiki, but I think that would be the wrong place for it. -- Chris Smith

On 8/3/07, Chris Smith
Neil Mitchell
wrote: I'm not convinced either, a nice concrete example would let people ponder this a bit more.
I tried to provide something in my response to Simon. Here it is again:
One could sugar:
do tax <- getTax
   return $ map (\price -> price * (1 + tax)) bill
into:
do return $ map (\price -> price * (1 + (<- getTax))) bill
I think what Simon is worried about here is that the syntax in the latter expression suggests that the effects of getTax will be performed every time the lambda is applied. After all, getTax appears inside the lambda. But in fact the side effects will only be performed once. I agree with Simon that (<- getTax) shouldn't be promoted outside a lambda. Fwiw, I'm all in favor of some new piece of syntax for this problem. Cheers, Josef

Hi
let x = 12
let x = (<- x)
Okay, so the desugaring process wouldn't terminate in that case! One could either: (a) try to retain the equivalence in theory, but make it illegal to use x in a monadic subexpression when defining x; (b) we could abandon my claim that they are equivalent.
This example isn't intended to be about termination of the desugaring, or about types etc - the only point is to note the change in the lexical scoping rules that (<-) gives. I'll try and state my concern more clearly:

let x = a

In this expression, x is available for use within a, since let is recursive. This allows us to write:

let xs = "paws" : xs

With the end result that xs is bound to ["paws","paws","paws","paws",...

Now consider:

let x = (<- a)

With the proposed desugaring we obtain:

temp <- a
let x = temp

Now x is NOT in scope within the expression a! We have changed the static lexical scoping, and only within the brackets. This behaviour is (in my opinion) horrid. A quick poll of people in my office leads us all to believe that this issue means you should not be allowed (<-) within a do's let statement.

This leads us to a second problem: floating these monadic expressions outside any binding. Consider:

do case x of
     [] -> return 1
     (y:ys) -> f (<- g y)

Here, the proposed desugaring does not work, since y is not in scope where we move the element to. Perhaps this leads to the conclusion that monadic subexpressions should not be allowed inside any binding group, including let, case or lambda.
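
Spelling out the attempted desugaring makes the failure visible:

do temp <- g y   -- y is not in scope here!
   case x of
     [] -> return 1
     (y:ys) -> f temp

Thanks

Neil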

On 8/3/07, Neil Mitchell
temp <- a
let x = temp
if you write:

let x = (<-a) : x

is it possible that it desugars into:

temp <- a
let x = temp : x

Wouldn't that work? I realize I may be asking dumb questions, but being dumb never harmed anyone so :) Also:
do case x of
     [] -> return 1
     (y:ys) -> f (<- g y)
Is it not possible that it desugars to

do case x of
     [] -> return 1
     (y:ys) -> g y >>= \temp -> f temp

Hi
if you write :
let x = (<-a):x
is it possible that it desugars into:

temp <- a
let x = temp : x

Wouldn't that work?
That would work, since 'a' doesn't refer to 'x'. I can't think of a real example where it becomes an issue, but the scope within 'a' has changed.
Also :
do case x of
     [] -> return 1
     (y:ys) -> f (<- g y)
Is it not possible that it desugars to

do case x of
     [] -> return 1
     (y:ys) -> g y >>= \temp -> f temp
See the rule about always binding to the previous line of a do block. This case then violates that. Thanks Neil

On 8/3/07, Neil Mitchell
Is it not possible that it desugars to
do case x of
     [] -> return 1
     (y:ys) -> g y >>= \temp -> f temp
See the rule about always binding to the previous line of a do block. This case then violates that.
I assumed that the example was equivalent to:

do case x of
     [] -> return 1
     (y:ys) -> do f (<- g y)

Shouldn't the rule work then?

Hi
do case x of
     [] -> return 1
     (y:ys) -> g y >>= \temp -> f temp
See the rule about always binding to the previous line of a do block. This case then violates that.
I assumed that the example was equivalent to:

do case x of
     [] -> return 1
     (y:ys) -> do f (<- g y)

Shouldn't the rule work then?
If the do was inserted, then yes, this would work. Without it, it doesn't. Perhaps this makes a restriction to "not inside case/let/lambda" not that severe, since usually an additional do could be inserted. Thanks Neil

On 8/3/07, Neil Mitchell
Can you use (<-) outside of a do block?

b >> f (<- a)
b >> do { ta <- a; f ta }

or

b >> a >>= \ta -> f ta
What are the semantics of

do b >> f (<- a)
do b >> a >>= \ta -> f ta
Given:
if (<- a) then f (<- b) else g (<- c)
a >>= \ta -> if ta then (b >>= \tb -> f tb) else (c >>= \tc -> g tc)
do let x = (<- a)
   f x
No idea if that could be possible. Or maybe:

do a >>= \ta -> let x = ta in f x

Neil Mitchell
We started with 4 suggestions, and as far as I can tell, are left with only one (<- ...).
For the record, my comments on (<- ...) were not objections, but merely "thoughts out loud", and I could certainly see myself using that syntax on a day-to-day basis.
Right, I definitely didn't read your post as objecting to the syntax. I do have concerns about it. In particular, the section-like syntax suggests to me (quite misleadingly) that it is somewhat self-contained. I find myself half expecting to be able to rewrite (mapM f xs) as (map (<- f) xs), or something like that. In other words, the syntax lies to me. At the moment, though, I can't think of anything better. -- Chris Smith

Hello Mirko, Friday, August 3, 2007, 4:41:05 PM, you wrote:
result is that currently C code rewritten in Haskell becomes much larger and less readable.
Larger should not be that issue and readability depends on the reader as your C example shows. Some Haskellers would very quickly recognize some common idioms, where others need some help...
probably the Turing machine is your favorite PL - it has simple and concise semantics :)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Fri, Aug 03, 2007 at 02:41:05PM +0200, Mirko Rahn wrote:
rewrite *p++=*q++ in haskell?
it's one of C idioms. probably, you don't have enough C experience to understand it :)
Maybe, but how can *you* understand it, when the standard is vague about it?
It could be
A: *p=*q; p+=1; q+=1;
B: *p=*q; q+=1; p+=1;
C: tp=p; tq=q; p+=1; q+=1; *tp=*tq;
...and so on. Which is the "right" version?
Isn't that the point? It's buggy code if *q == p or *p == q, or a few other cases perhaps, but if those are not the case, then all of those are "right," and the compiler has the choice to implement whichever it deems most efficient. In the cases where this is actually used, all three of those are correct, the code is understandable, compact and unambiguous. -- David Roundy Department of Physics Oregon State University

On 4 Aug 2007, at 12:41 am, Mirko Rahn wrote:
rewrite *p++=*q++ in haskell?
it's one of C idioms. probably, you don't have enough C experience to understand it :)
Maybe, but how can *you* understand it, when the standard is vague about it?
It could be
A: *p=*q; p+=1; q+=1;
B: *p=*q; q+=1; p+=1;
C: tp=p; tq=q; p+=1; q+=1; *tp=*tq;
...and so on. Which is the "right" version?
The standard makes it perfectly clear that they ALL are, as they all produce exactly the same effect. The only case where the exact translation could matter is when the two variables are the same, which the standard forbids (and yes, there are static checkers for that, and smart C programmers use them). Some message I didn't see suggested that good C programmers use memcpy() rather than *p++ = *q++. I would point out that the assignment form is type checked and memcpy() is not, so memcpy() is not always to be preferred.
readability depends on the reader as your C example shows. Some Haskellers would very quickly recognize some common idioms, where others need some help...
*p++ = *q++ is indeed a C idiom and it is presented and described in the classic C book by Kernighan & Ritchie, which one expects competent C programmers to have read. C does indeed have grave weaknesses, but there are better targets for derision than this.

rewrite *p++=*q++ in haskell?
MR> I always reject such codes when produced by my students.

I don't think it's a good idea to reject working code.

MR> I even do not understand what you are trying to achieve.

Well, that just means that your students are a bit smarter than you. And I'd like to assure you, they know this and consider you a person who is afraid of smart people.

can you please rewrite *p++=*q++ in haskell?
assuming these operations

i :: V a -> IO (V a)   -- incr var addr, return old addr
r :: V a -> IO a       -- read var
w :: V a -> a -> IO () -- write var value

and this unfolded translation

do { qv <- r q; w p qv; i p; i q }

assuming further these liftings

ap1 :: (a -> m b) -> (m a -> m b)
ap2 :: (a -> b -> m c) -> (m a -> m b -> m c)

then we can define

(=:) :: IO (V a) -> IO a -> IO ()
mv =: ma = (ap2 w) mv ma

and get this inlined version

i p =: (r `ap1` i q)

but one might still prefer

do { w p =<< r q; i p; i q }

but whatever line-noise one prefers, this still seems a call for better combinators in the standard libs, rather than a call for more syntax.
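
(for completeness, those liftings are one-liners; a sketch:

ap1 :: Monad m => (a -> m b) -> (m a -> m b)
ap1 f ma = ma >>= f

ap2 :: Monad m => (a -> b -> m c) -> (m a -> m b -> m c)
ap2 f ma mb = do { a <- ma; b <- mb; f a b }

)

claus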

Hello Claus, Friday, August 3, 2007, 5:12:26 PM, you wrote:
can you please rewrite *p++=*q++ in haskell?
do { w p =<< r q; i p; i q }
how about *Object.File.Line.CurPtr++ = *AnotherObject.File.Line.CurPtr++ ? ;)
but whatever line-noise one prefers, this still seems a call for better combinators in the standard libs, rather than a call for more syntax.
the problem with Haskell is that we need to split C expressions into several statements and explicitly specify execution order even when we know that it doesn't matter. ideally, it should be possible to define

++x = modifyIORef x (+1) >> readIORef x
*x  = readIORef x

and know that ghc will automatically generate temporary variables for results of monadic operations, understand the code and optimize it.

the sole reason why it's required for me is writing imperative software. while some purists may believe that haskell doesn't need imperative code, it's part of my program/libs and i want to have a simple and concise representation for it
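
(today the nearest thing is named functions, since ++ and * can't be defined as prefix operators; a sketch using Data.IORef:

preInc :: Num a => IORef a -> IO a   -- ++x: bump, then return the new value
preInc v = modifyIORef v (+1) >> readIORef v

deref :: IORef a -> IO a             -- *x
deref = readIORef

)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com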

can you please rewrite *p++=*q++ in haskell?
do { w p =<< r q; i p; i q }
how about *Object.File.Line.CurPtr++ = *AnotherObject.File.Line.CurPtr++ ? ;)
what's the difference?-)

let p = Object.File.Line.CurPtr
let q = AnotherObject.File.Line.CurPtr
do { w p =<< r q; i p; i q }

the other line-noise version i gave doesn't even need the lets to avoid duplicating long names:

i Object.File.Line.CurPtr =: (r `ap1` i AnotherObject.File.Line.CurPtr)

but this is rapidly approaching the level at which my brain calls for a separation of concerns. there are oneliners that make code more obvious, and there are oneliners that make code harder to read. and definitions of hard/obvious differ..
the problem with Haskell is that we need to split C expression into several statements and explicitly specify execution order even when we know that it doesn't matter. ideally, it should be possible to define
++x = modifyIORef x (+1) >> readIORef x
*x = readIORef x
apart from the prefix symbols (i used one-letter prefix names), you can (as i'm sure you know). and the point of my little exercise was to show that instead of doing the splitting by hand at each usage site, we can write lifting combinators that do the splitting behind the scenes. what gives haskell aspirations to be a fine imperative language is that its abstraction mechanisms work as well for imperative code as for functional code. claus

Hello Claus, Friday, August 3, 2007, 7:29:32 PM, you wrote:
how about *Object.File.Line.CurPtr++ = *AnotherObject.File.Line.CurPtr++ ? ;)
what's the difference?-)
let p = Object.File.Line.CurPtr
let q = AnotherObject.File.Line.CurPtr
do { w p =<< r q; i p; i q }
back to the assembler future? :) so-called high-level languages started with the idea that you don't need to give explicit names to intermediate results
the problem with Haskell is that we need to split C expression into several statements and explicitly specify execution order even when we know that it doesn't matter. ideally, it should be possible to define
++x = modifyIORef x (+1) >> readIORef x
*x = readIORef x
apart from the prefix symbols (i used one-letter prefix names), you can (as i'm sure you know). and the point of my little exercise was to show that instead of doing the splitting by hand at each usage site, we can write lifting combinators that do the splitting behind the scenes. what gives haskell aspirations to be a fine imperative language is that its abstraction mechanisms work as well for imperative code as for functional code.
can you give the translation you mean? i don't have anything against combinators, they just need to be easy to use, not forcing me to think where i should put one, just as i don't with lazy code and C imperative code. and they shouldn't clutter the code, too. just try to write a complex expression using C and these combinators

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

can you give the translation you mean? i don't have anything against combinators, they just need to be easy to use, not forcing me to think where i should put one, just as i don't with lazy code and C imperative code. and they shouldn't clutter the code, too. just try to write a complex expression using C and these combinators
perhaps we're misunderstanding each other? if i define a monadic assignment operator lifted over monadic lhs/rhs, i can already have side-effecting lhs/rhs, including post-increments and content lookup. that's what that example demonstrated, translating everything you asked for. one can do the same with other operations, such as lifting numeric operations over monadic operands (though the current class hierarchy may require some ugly dummy class instances for that; also, non-overloaded Bool always requires some workaround). what is it that you think is missing? claus

Hello Claus, Saturday, August 4, 2007, 3:06:11 PM, you wrote:
can you give the translation you mean? i don't have anything against combinators, they just need to be easy to use, not forcing me to think where i should put one, just as i don't with lazy code and C imperative code. and they shouldn't clutter the code, too. just try to write a complex expression using C and these combinators
perhaps we're misunderstanding each other? if i define a monadic assignment operator lifted over monadic lhs/rhs, i can already have side-effecting lhs/rhs, including post-increments and content lookup. that's what that example demonstrated, translating everything you asked for. one can do the same with other operations, such as lifting numeric operations over monadic operands (though the current class hierarchy may require some ugly dummy class instances for that; also, non-overloaded Bool always requires some workaround). what is it that you think is missing?
i know that it may be translated to everything including pure assembler. what i'm missing in current Haskell is USEFUL SYNTAX for these expressions. adding tons of liftM and ap can't make me happy

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

i know that it may be translated to everything including pure assembler. what i'm missing in current Haskell is USEFUL SYNTAX for these expressions. adding tons of liftM and ap can't make me happy
but the point is that you have a standard set of operations when working at that level, including conditionals, assignments, pointer increments, read/write, etc. you only need to define lifted variants of each of those operations *once*, in a library. when you use those lifted variants, you can (actually: you have to) use them with monadic parameters, and no need for liftM/ap. liftM/ap are useful, but need to appear in application code only when you do not know in advance what set of operations you'll need, as you can then lift any operation on the fly. so, there could be a library defining

lhs ==: rhs = putMVar <$> lhs <*> rhs

and in your application code, you could write

newEmptyMVar ==: putStrLn "hi there"

(not that this would be useful;-)

claus

Hello Claus, Saturday, August 4, 2007, 6:57:13 PM, you wrote:
so, there could be a library defining
lhs ==: rhs = putMVar <$> lhs <*> rhs
and in your application code, you could write
newEmptyMVar ==: putStrLn "hi there"
(not that this would be useful;-)
it's great! what fools they were who invented fortran! anyone using macroassembler can define macros for any shape of expression and use them as they need. for example, instead of writing a=b*c+d it's much easier to define macro

abcd macro a,b,op1,c,op2,d
  mov r1, b
  op1 r1, c
  op2 r1, d
  mov a, r1
endm

and use it. want to assign a=b/(c+d)? nothing can be easier! just define one more macro!

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:
and use it. want to assign a=b/(c+d)? nothing can be easier! just define one more macro!
And? Everything above machine code is just "macros" at various levels of abstraction, including all our favorite higher-level abstractions. -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Hello Brandon, Saturday, August 4, 2007, 7:27:16 PM, you wrote:
On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:
and use it. want to assign a=b/(c+d)? nothing can be easier! just define one more macro!
And? Everything above machine code is just "macros" at various levels of abstraction, including all our favorite higher-level abstractions.
and what you prefer? assembler or high-level language? -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Aug 4, 2007, at 11:48 , Bulat Ziganshin wrote:
Hello Brandon,
Saturday, August 4, 2007, 7:27:16 PM, you wrote:
On Aug 4, 2007, at 11:19 , Bulat Ziganshin wrote:
and use it. want to assign a=b/(c+d)? nothing can be easier! just define one more macro!
And? Everything above machine code is just "macros" at various levels of abstraction, including all our favorite higher-level abstractions.
and what you prefer? assembler or high-level language?
That would be why I'm using a language which lets me compose things in complex ways. And just once, abstracting it away into a library, which you seem to be missing. -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

Hello Brandon, Saturday, August 4, 2007, 8:44:46 PM, you wrote:
That would be why I'm using a language which lets me compose things in complex ways. And just once, abstracting it away into a library, which you seem to be missing.
and you hate 'do' syntax sugar? -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Aug 4, 2007, at 14:51 , Bulat Ziganshin wrote:
Hello Brandon,
Saturday, August 4, 2007, 8:44:46 PM, you wrote:
That would be why I'm using a language which lets me compose things in complex ways. And just once, abstracting it away into a library, which you seem to be missing.
and you hate 'do' syntax sugar?
Not particularly; I use both do and >>= (even intermixing them), although I'm uncertain that *teaching* Haskell should start with the "do" notation. Your point being? -- brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

so, there could be a library defining
lhs ==: rhs = putMVar <$> lhs <*> rhs
ouch! since putMVar is already impure, there's a join missing:

lhs ==: rhs = join (putMVar <$> lhs <*> rhs)
and in your application code, you could write
newEmptyMVar ==: putStrLn "hi there"
.. rant deleted ..

and use it. want to assign a=b/(c+d)? nothing can be easier! just define one more macro!
Dear Bulat, in your enthusiasm, please do not forget to read what is written! the lifted operations combine as the unlifted ones do. so there's one definition each for =, /, +, not one definition for each of their combinations.

claus

Hello Claus, Saturday, August 4, 2007, 7:55:18 PM, you wrote:
so, there could be a library defining
lhs ==: rhs = putMVar <$> lhs <*> rhs
the lifted operations combine as the unlifted ones do. so there's one definition each for =, /, +, not one definition for each of their combinations.
it's called doublethinking :) when you count operations, you count only primitive ones. when you talk about ease of programming, you propose to define a special operation for each access pattern. it's obvious to you that using only standard operations it's hard to read and write code, and that using special operations you will need to define a special one for each usage pattern

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hi Claus,
but the point is that you have a standard set of operations when working at that level, including conditionals, assignments, pointer increments, read/write, etc. you only need to define lifted variants of each of those operations *once*, in a library.
I think that defining lifted versions of every function is dangerous, especially in a widely-used library. Monadic code will start to look pure, and before long someone will be using let expressions and where blocks to share monadic computations rather than using do blocks to share the *results* of monadic computations. Matt.

I think that defining lifted versions of every function is dangerous, especially in a widely-used library. Monadic code will start to look pure, and before long someone will be using let expressions and where blocks to share monadic computations rather than using do blocks to share the *results* of monadic computations.
yes. we actually had that fun with Conal Elliott's functional reactive programming libraries, where all expressions were lifted to a reader monad for time (expressions not mentioning time were constant, those mentioning time were dependent on the overall progress of time). the limitations of overloading (Bool, mostly) and the variations of sharing possible in expressions overloaded this way led to quite a bit of research as well as implementation and language extensions. it is a borderline case: it results in an embedded domain-specific language that looks a lot like haskell, but isn't haskell. as long as one keeps the difference in mind, it is useful, though.

having such overloaded operations in an edsl for low-level imperative programming in haskell might be worthwhile, and since some people have been asking for it, i wanted to point out that it is possible. for general use, i agree that explicit control (using idioms perhaps) is safer. although there are functional languages that are based on the everything is monadic-io slogan (scheme, lisp, mls,..).

the monad subexpressions under discussion are somewhere in between those extremes, with some syntactic differences, some control, and their own complications.

claus

apfelmus
I still think that this syntax extension has profound impact and is a bad idea. Simon's and Neil's use case was the dreaded name-supply monad where the order of effects really doesn't matter up to alpha-conversion. The objection to that use case is that monads are not the right abstraction for that, they're too general.
I'm primarily interested in the two cases where one simply has no choice about the use of monads: and those are IO and STM. No, this is not purely functional programming then; but Haskell's implementation of these has some very compelling advantages that I'm afraid are currently quite hidden behind the difficult syntax. Using something besides a monad is simply not an option.

A lot of what I'm thinking about Haskell now comes from my experience in trying to teach it to new programmers (which in turn comes from it being lonely to be the only person I talk to that knows Haskell). I'm quite convinced, right now, that one huge problem with adoption of Haskell has to do with this right here. If there's a way to get nice syntax without modifying the compiler, that is certainly an advantage; but I do see it as rather small compared to the goal of producing something that is rather simple to understand and use. I can explain desugaring rules for this idea in a short paragraph. The alternatives all seem to involve operators and functions that I've not used in about six months or more of moderate playing around with Haskell. Type class hacking is way over the top; other ideas seem reasonable to me, but I'm concerned they won't seem very reasonable to anyone with much less experience using Haskell than I've got.

The other objection, though, and I'm quoting from a post in a past thread on this, is something like, "The more tiresome monads are, the more incentive you have to avoid them." Unfortunately, I'm afraid this cheapens the work people are doing in making the necessary imperative parts of Haskell more useful and interesting. Making monads distasteful is not a reasonable goal.
Also, I got so frustrated that I ended up abandoning some code recently because STM is, in the end, so darn hard to use as a result of this issue. I'd love to see this solved, and I'm quite eager to do it.
This sounds suspicious, since the order of effects is of course important in the STM monad. Can you post an example of code you intend to abandon due to ugliness? I'd be astonished if there's no better way to write it.
I'll dig for it later if you like. The essence of the matter was a bunch of functions that looked something like this:

foo = do b' <- readTVar b
         c' <- readTVar c
         d' <- readTVar d
         return (b' + c' / d')

In other words, a string of readTVar statements, followed by one computation on the results. Each variable name has to be duplicated before it can be used, and the function is four lines long instead of one.

It's true that order of effects *can* be important in monads like IO and STM. It's also true, though, that probably 50% of the time with IO, and 95% with STM, the order does not actually matter. Taking a hard-line approach on this and forcing a linear code structure is equivalent to ignoring what experience has taught in dozens of other programming languages. Can you think of a single widely used programming language that forces programmers to write linear one-line-per-operation code like this? IMO, Haskell gets away with this because STM and IO stuff isn't very common; and STM and IO will remain uncommon (and will instead be replaced by unsafe code written in Python or Ruby) as long as this is the case. I find it hard to speculate that Haskell programmers will understand the alternatives, but won't understand something like "monadic subexpressions are evaluated in the order of their closing parentheses".
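
Under the proposal, foo would collapse to a single line; a sketch using the ( <- expr ) spelling:

foo = return ((<- readTVar b) + (<- readTVar c) / (<- readTVar d))

with the three reads floated out, left to right, to just before the return.

-- Chris Smith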

Hello Chris, Friday, August 3, 2007, 8:09:49 PM, you wrote:
foo = do b' <- readTVar b
         c' <- readTVar c
         d' <- readTVar d
         return (b' + c' / d')
It's true that order of effects *can* be important in monads like IO and STM. It's also true, though, that probably 50% of the time with IO, and
90%, in my programs at least
95% with STM, the order does not actually matter. Taking a hard-line approach on this and forcing a linear code structure is equivalent to ignoring what experience has taught in dozens of other programming languages. Can you think of a single widely used programming language that forces programmers to write linear one-line-per-operation code like this?
assembler :) it's what our opponents propose - let Haskell be like assembler with its simple and concise execution model :)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin
assembler :) it's what our opponents propose - let's Haskell be like assembler with its simple and concise execution model :)
I feel bad that portions of this thread have gotten a bit ugly. I don't have any opponents, so far as I know. I am just trying to discuss the best way to solve this problem. -- Chris Smith

I'll dig for it later if you like. The essence of the matter was a bunch of functions that looked something like this:
foo = do b' <- readTVar b
         c' <- readTVar c
         d' <- readTVar d
         return (b' + c' / d')
In other words, a string of readTVar statements, followed by one computation on the results. Each variable name has to be duplicated before it can be used, and the function is four lines long instead of one.
if that happens frequently, an instance of the numeric classes seems called for, automating both the lifting and the readTVar, but if there are only a couple of cases, you could lift the operations for the module, or even per definition:

foo1 b c d = readTVar b + readTVar c / readTVar d
  where (+) = liftM2 (Prelude.+)
        (/) = liftM2 (Prelude./)
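
(one wrinkle: operators bound in a where clause default to infixl 9, so the + and / above would group left to right; local fixity declarations restore the usual grouping. a sketch, with liftM2 from Control.Monad:

foo1 b c d = readTVar b + readTVar c / readTVar d
  where infixl 6 +
        infixl 7 /
        (+) = liftM2 (Prelude.+)
        (/) = liftM2 (Prelude./)

)

claus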

On 03/08/07, apfelmus
Chris Smith wrote:
Also, I got so frustrated that I ended up abandoning some code recently because STM is, in the end, so darn hard to use as a result of this issue. I'd love to see this solved, and I'm quite eager to do it.
This sounds suspicious, since the order of effects is of course important in the STM monad. Can you post an example of code you intend to abandon due to ugliness? I'd be astonished if there's no better way to write it.
Just because order *technically* matters doesn't mean it *actually* matters in a given circumstance:

mytransaction = do { x0 <- readTVar xvar0
                   ; x1 <- readTVar xvar1
                   ; ...
                   ; xn <- readTVar xvarn
                   ; return $ foo x0 x1 .. xn }

Versus

mytransaction = return $ foo $(readTVar xvar0) $(readTVar xvar1) .. $(readTVar xvarn)

Now I'm not too happy about the long names for reading variables either; short overloaded names like "get" and "put" would look much nicer in this example, and in other places too. And certainly, sometimes you do want to name things for convenience. But in *lots* of cases you just want to e.g. read N variables, in an arbitrary order, and then do something with them. Yes the order matters to the *compiler*, but it doesn't always matter to the *programmer*, so to have a more convenient way to express those cases would be very nice, IMO. And there may even be cases where the order does matter but you'd be happy with a left-to-right ordering. This has been a pet peeve of mine for ages. Imperative programming in Haskell is neat, but I really don't want to write what amounts to almost assembly programming levels of explicitness for simple tasks.

I'd also like to reiterate my request for a notation that doesn't require brackets around the *action* but will also work by applying it to a function which when fully applied to its argument returns an action (i.e.: $foo x y + $bar z w, rather than $(foo x y) + $(bar z w)). Function application is normally very low-noise in Haskell (good), and it would be nice if we can keep it low-noise in this notation too. Maybe $ isn't a good operator though.. How about #? Maybe using angle brackets would work.. I'd still like to have them work for functions returning actions though ( <foo> x y + <bar> z w ). Wonder what that would do to ordering comparisons, lexically speaking....

-- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862

Sebastian Sylvan
I'd also like to reiterate my request for a notation that doesn't require brackets around the *action* but will also work by applying it to a function which when fully applied to its argument returns an action (i.e.: $foo x y + $bar z w, rather than $(foo x y) + $(bar z w)). Function application is normally very low-noise in Haskell (good), and it would be nice if we can keep it low-noise in this notation too.
I'm trying to understand your suggestion. Can you tell me how you'd sugar the following?

getA :: Friggle MyA
getB :: MyA -> Friggle MyB
foo  :: Int -> MyB -> Friggle MyC

do a <- getA
   b <- getB a
   foo 42 b

-- Chris Smith

On 03/08/07, Chris Smith
Sebastian Sylvan
wrote: I'd also like to reiterate my request for a notation that doesn't require brackets around the *action* but will also work by applying it to a function which, when fully applied to its arguments, returns an action (i.e. $foo x y + $bar z w, rather than $(foo x y) + $(bar z w)). Function application is normally very low-noise in Haskell (good), and it would be nice if we can keep it low-noise in this notation too.
I'm trying to understand your suggestion. Can you tell me how you'd sugar the following?
getA :: Friggle MyA
getB :: MyA -> Friggle MyB
foo  :: Int -> MyB -> Friggle MyC
do a <- getA
   b <- getB a
   foo 42 b
Something like: foo 42 (#getB #getA)? Is there an ambiguity that I'm too dense to see here? :-)

-- Sebastian Sylvan

mytransaction = do { x0 <- readTVar xvar0
                     x1 <- readTVar xvar1
                     :
                     xn <- readTVar xvarn
                     return $ foo x0 x1 .. xn }
Versus
mytransaction = return $ foo $(readTVar xvar0) $(readTVar xvar1) .. $(readTVar xvarn)
ah, a concrete example. but isn't that the typical use case for ap?

mytransaction = foo `liftM` r xvar0 `ap` r xvar1 ..
  where r = readTVar

claus

On 03/08/07, Claus Reinke
mytransaction = do { x0 <- readTVar xvar0
                     x1 <- readTVar xvar1
                     :
                     xn <- readTVar xvarn
                     return $ foo x0 x1 .. xn }
Versus
mytransaction = return $ foo $(readTVar xvar0) $(readTVar xvar1) .. $(readTVar xvarn)
ah, a concrete example. but isn't that the typical use case for ap?
mytransaction = foo `liftM` r xvar0 `ap` r xvar1 .. where r = readTVar
I really find it difficult to articulate why this isn't acceptable, because it seems so obvious to me! It's short, yes, but I really don't think it's very clear... I have a hard time believing that anyone finds that natural. After lots and lots of mind-bending forays into various branches of mathematics, then yes, maybe you can get used to it, but it's hardly as natural as saying "add this one symbol to your values to extract monadic values left-to-right".

-- Sebastian Sylvan

On Friday 03 August 2007, Sebastian Sylvan wrote:
On 03/08/07, Claus Reinke
wrote: ah, a concrete example. but isn't that the typical use case for ap?
mytransaction = foo `liftM` r xvar0 `ap` r xvar1 .. where r = readTVar
I really find it difficult to articulate why this isn't acceptable, because it seems so obvious to me! It's short, yes, but I really don't think it's very clear... I have a hard time believing that anyone finds that natural. After lots and lots of mind-bending forays into various branches of mathematics, then yes, maybe you can get used to it, but it's hardly as natural as saying "add this one symbol to your values to extract monadic values left-to-right".

-- Sebastian Sylvan
Note that if this is the example we're using, idiom brackets solve things:

mytransaction = [[ foo (r xvar0) (r xvar1) ... ]]
  where r = readTVar

and are, possibly, less fraught with peril, considering all the discussions about where the desugaring should place the implicit binding, and what happens if there isn't an enclosing do and so on (as idiom brackets desugar to the "foo `liftM` r xvar0 `ap` r xvar1 .." mentioned above, and the entire expression is delimited, there are no such questions to be pondered, I think).

Also, note, if you use the operators in Control.Applicative, then:

return $ foo $(bar1) $(bar2) $(bar3) ...

can be:

return foo <*> bar1 <*> bar2 <*> bar3 ...

or:

foo <$> bar1 <*> bar2 <*> bar3

I don't (personally) see how that's any more cryptic than placing brackets around the monadic values themselves. In either case, there's some magic going on that the user may or may not understand. In the applicative case, it's using a different kind of (Monadic/Applicative) function application via an operator. In the monad brackets case, it's doing a macro expansion. I, personally, find the former clearer, but perhaps that's because I understand Applicative fairly well, but only have a vague idea of what, specifically, the macro will be doing so far.

To get outside the scope of idiom brackets/applicative, you'd need a use case like:

if $(mexpr) then branch1 else branch2

or (lest that be too easy):

case $(mexpr) of
  p1 -> branch1
  p2 -> branch2
  ...

In other words, something where you're not simply applying a pure function to a bunch of monadic arguments. I can't say I've run into such patterns much myself, but I've been told they're common in xmonad, and may be elsewhere. In general, I guess you'd need the monad brackets when you'd need to interact with other syntax (since it isn't first-class). Record update would probably be another example. But applications of pure functions to monadic values doesn't seem like a particularly compelling motivator, in my opinion.

-- Dan
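For the if case at least, an ordinary combinator gets part of the way without new syntax, at the cost of writing both branches as actions; a minimal sketch (ifM is a common utility name, not something defined in this thread):

-- Monadic if: the condition and both branches are actions.
ifM :: Monad m => m Bool -> m a -> m a -> m a
ifM mb t e = do
    b <- mb
    if b then t else e

-- usage:  ifM mexpr branch1 branch2
-- instead of:  if $(mexpr) then branch1 else branch2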

<much snipping>
Also, note, if you use the operators in Control.Applicative, then:
return $ foo $(bar1) $(bar2) $(bar3) ...
can be:
return foo <*> bar1 <*> bar2 <*> bar3 ...
or:
foo <$> bar1 <*> bar2 <*> bar3
I don't (personally) see how that's any more cryptic than placing brackets around the monadic values themselves. ...
Seconded. The main difference with brackets is that the application to pure values looks the same as normal application.
To get outside the scope of idiom brackets/applicative, you'd need a use case like:
if $(mexpr) then branch1 else branch2
or (lest that be too easy):
case $(mexpr) of
  p1 -> branch1
  p2 -> branch2
  ...
In other words, something where you're not simply applying a pure function to a bunch of monadic arguments. I can't say I've run into such patterns much myself, but I've been told they're common in xmonad, and may be elsewhere.
General purpose brackets are overkill here. I would really like a simple monadic case. What's so bad about

caseM mexpr of
  p1 -> branch1
  p2 -> branch2

    vvvv

(mexpr >>= \e -> case e of
  p1 -> branch1
  p2 -> branch2)

It's simple sugar for working with monadic code, much like do notation. (Indeed, it seems to plug a gap - we have do for sequencing, liftM and so on for application, but no sugar for case discrimination.) It's a much simpler sort of thing than this fancy sugar for intermixing code in various monads people have been talking about (so far it seems assumed that one is just Identity...)

Brandon

On Fri, Aug 03, 2007 at 05:48:18PM -0700, Brandon Michael Moore wrote:
General purpose brackets are overkill here. I would really like a simple monadic case. What's so bad about
caseM mexpr of
  p1 -> branch1
  p2 -> branch2
vvvv
(mexpr >>= \e -> case e of
  p1 -> branch1
  p2 -> branch2)
It's simple sugar for working with monadic code, much like do notation. (indeed, it seems to plug a gap - we have do for sequencing, liftM and so on for application, but no sugar for case discrimination)
It's a much simpler sort of thing than this fancy sugar for intermixing code in various monads people have been talking about (so far it seems assumed that one is just Identity...)
I think the CaseLambda proposal on the Haskell' wiki solves this one nicely.

mexpr >>= case of
  p1 -> branch1
  p2 -> branch2

You still have to use >>=, but you don't have to name the scrutinee (and names are expensive cognitively).

Stefan
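For reference, essentially this notation later landed in GHC as the LambdaCase extension; Stefan's example in that form (a sketch, with an invented example type):

{-# LANGUAGE LambdaCase #-}

-- The scrutinee stays unnamed; \case abstracts over it directly.
example :: Monad m => m (Maybe Int) -> m Int
example mexpr = mexpr >>= \case
    Just n  -> return n
    Nothing -> return 0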

mytransaction = foo `liftM` r xvar0 `ap` r xvar1 .. where r = readTVar
I really find it difficult to articulate why this isn't acceptable, because it seems so obvious to me! It's short, yes, but I really don't think it's very clear...
if it is any consolation, i don't use that style myself (yet?-). but it is a useful stepping stone on a path that seems to go somewhat like this:

- explicit do-notation with flattened parameters
- explicitly defined lifted operations
- liftMn, for on-the-spot lifting
- liftM/ap (avoiding need for infinitely many liftMn)
- idioms http://www.cs.nott.ac.uk/~ctm/Idiom.pdf
- idiom brackets
- .. ?-)
I have a hard time believing that anyone finds that natural. After lots and lots of mind-bending forays into various branches of mathematics, then yes maybe you can get used to it, but it's hardly as natural as saying "add this one symbol to your values to extract monadic values left-to-right".
what makes this unnatural to me is that it is built-in syntax, which not only interacts badly with haskell's general abstraction facilities, but is outside the programmers' control. once we've figured out what we want, programmatically, then putting a nice syntax on top of it is syntactic sugar; but binding fairly complex syntax transformations to an innocent-looking syntax is not.

perhaps a good start would be syntactic sugar for idiom brackets, to rescue them from the complexities of type-level programming? at least, that would be a local transformation with well-explored semantics, similar to do-notation on top of >>=/return. if that doesn't work out, one might take another look at (<-).

claus

Sebastian Sylvan wrote:
Claus Reinke wrote:
mytransaction = do { x0 <- readTVar xvar0
                     x1 <- readTVar xvar1
                     :
                     xn <- readTVar xvarn
                     return $ foo x0 x1 .. xn }
ah, a concrete example. but isn't that the typical use case for ap?
mytransaction = foo `liftM` r xvar0 `ap` r xvar1 .. where r = readTVar
I really find it difficult to articulate why this isn't acceptable, because it seems so obvious to me! It's short, yes, but I really don't think it's very clear... I have a hard time believing that anyone finds that natural.
I think it's entirely natural :) Applicative functors (Control.Applicative) are the pattern behind this. The notation may seem a little weird at first, but in the end, `ap` is a kind of explicit function application and similar to $. With the notation from Control.Applicative, the line

return foo `ap` r xvar0 `ap` r xvar1 `ap` ...

reads

pure foo <*> r xvar0 <*> r xvar1 <*> ...

or

foo <$> r xvar0 <*> r xvar1 <*> ...

In other words, instead of using juxtaposition to apply an argument to a function, we use <*>. The type of `ap` is

ap :: m (a -> b) -> m a -> m b

so that it can be thought of as a generalized function application where the function is "under" a monad. The difference to $ is that <*> is left associative and allows for currying. I.e. <*> is like $ used in the following way

((foo $ x0) $ x1) $ x2

Note that you can even incorporate the TVar by defining your own generalized function application:

apT :: STM (a -> b) -> TVar a -> STM b
apT f x = f `ap` readTVar x

Then, mytransaction reads

mytransaction = return foo `apT` xvar0 `apT` xvar1 `apT` ...

Regards, apfelmus
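Spelled out as a compilable module, apfelmus's apT looks like this (the concrete transaction and variable names here are invented for illustration):

import Control.Concurrent.STM (STM, TVar, readTVar)
import Control.Monad (ap)

-- Generalized application that reads its second argument from a TVar.
apT :: STM (a -> b) -> TVar a -> STM b
apT f x = f `ap` readTVar x

-- A pure three-argument function applied to three TVars,
-- with no intermediate names in sight.
mytransaction :: TVar Int -> TVar Int -> TVar Int -> STM Int
mytransaction xvar0 xvar1 xvar2 =
    return foo `apT` xvar0 `apT` xvar1 `apT` xvar2
  where
    foo a b c = a + b * c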

Bulat Ziganshin wrote:
Hello apfelmus,
Saturday, August 4, 2007, 12:18:33 PM, you wrote:
Then, mytransaction reads
mytransaction = return foo `apT` xvar0 `apT` xvar1 `apT` ...
how about a+b*(c+d)?
That follows the same pattern, though the nested subexpressions are STM actions rather than TVars, so plain `ap` is needed at those positions:

return (+) `apT` a `ap` (return (*) `apT` b `ap` (return (+) `apT` c `apT` d))

Bertram
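In Applicative notation (today's base provides an Applicative instance for STM) the same expression reads a little more lightly; a sketch, with the nesting made explicit:

import Control.Concurrent.STM (STM, TVar, readTVar)

-- a + b * (c + d), all four values read from TVars
sumExpr :: Num x => TVar x -> TVar x -> TVar x -> TVar x -> STM x
sumExpr a b c d =
    (+) <$> readTVar a
        <*> ((*) <$> readTVar b
                 <*> ((+) <$> readTVar c <*> readTVar d))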

I've heard Simon (Peyton-Jones) twice now mention the desire to be able to embed a monadic subexpression into a monad. That would be http://article.gmane.org/gmane.comp.lang.haskell.prime/2267 .. Thoughts?
what is the problem you're trying to solve, and is it worth the complication in syntax? in previous threads, the answer to the second question seemed to be 'no', because there are easy workarounds (liftMn/return, or combinator-based lifting) and the extension would have non-local effects.

what is particularly nasty about this extension is that it might be easy to add, but will interfere with just about everything else: it looks like an operator, and for tiny examples, it seems to have a local effect only, but it is really a piece of static syntax distributed widely over parts of a dynamic expression; the special quoting cannot be understood locally, as it is -namelessly- bound to the _next_ enclosing 'do', thereby complicating local program transformations, by tools or users. why is the syntax even bound to do (adding 'do's or switching from 'do' to '>>=' will change everything), and not to monadic operators (with lifting in place, there'd be more isolated monadic calls, without need for 'do')? wouldn't it be sufficient to lift the parameter out of the next enclosing call (and isn't that what the no-syntax alternatives already provide)? and what is the precise specification/what happens with more complex examples?

more helpful than an immediate implementation, imho, would be a wiki page formalising the proposed extension and discussing the alternatives with pros and cons. perhaps there are lifting operations that are missing (eg, liftMn lifts non-monadic functions, but how to lift monadic functions with non-monadic parameters?), or perhaps the combinators that enable lifting of complete calls (rather than functions) could be simplified; this issue trips up enough people that it is worth investigating what the real show-stoppers are, or why the workarounds are not more widely used/known. but in the end, i'd expect the no-syntax route to be just as convenient, and less problematic in this case.

claus

to illustrate why some of us are concerned about this extension, a few examples might help. consider:

f (g (<- mx))

does this stand for

(a) mx >>= \x-> f (g x)
(b) f (mx >>= \x-> (g x))
(c) none of the above, because there's no do
(d) something else entirely

if (a/b), does the decision depend on the type of g (if g is pure, then (a), if g is monadic itself, then (b))? if (d), what?

if (a/b), then this is no longer preprocessing, but depends on the types, which means that the type system must work in the presence of this extension, rather than on its pre-processed form. if you want to avoid that, you're left with (c), but is that any better?

if (c), then the following are no longer equivalent

1. return ...
2. do return ...

in particular, do return .. is no longer a unit of the monad (in (a/b), even return .. isn't). so if you write

f (do g (<- mx))

you mean (b), while if you write

do f (g (<- mx))

you mean (a), and if you write

f (g (<- mx))

you mean either an error, if there is no surrounding 'do', or something else, if there is a surrounding 'do'. and woe to those who think they can insert some 'do' notation wherever they like - they need to check the subexpressions for (<-) first!

now, consider nesting monadic subexpressions:

f (<- g (<- mx))

surely means the same as f =<< (g =<< mx), namely

mx >>= \x-> g x >>= \gx-> f gx

right? wrong! we forgot the 'do'. without a 'do'-context, this means nothing in (c). so if you have

do ..
   fx <- f (<- g (<- mx))
   ..
   fx <- f (<- g (<- mx))
   ..

and there are no free variables, then you can do the usual sharing to improve readability, right?

let fgmx = f (<- g (<- mx)) in
do ..
   fx <- fgmx
   ..
   fx <- fgmx
   ..

wrong again! this is syntax, not expression, so the latter variant changes the scope the (<-)s refer to (some outer 'do', if one exists). you could have written

do let fgmx = f (<- g (<- mx))
   ..
   fx <- fgmx
   ..
   fx <- fgmx
   ..

perhaps, and at this stage you might no longer be surprised that do and let no longer commute. or were you? if you weren't, here's a quick question: we've already seen the left- and right-identity laws in danger, so what about associativity?

do { do { a; b }; c }

is still the same as

do { a; do { b; c } }

yes? no? perhaps? sometimes? how long did it take you?

could someone please convince me that i'm painting far too gloomy a picture here?-)

claus

| f (g (<- mx))
|
| does this stand for
|
| (a) mx >>= \x-> f (g x)
| (b) f (mx >>= \x-> (g x))
| (c) none of the above, because there's no do
| (d) something else entirely

For me the answer is definitely (c). Furthermore there must be no lambda between the "monadic splice" and the "do". Given that, I think the meaning of a monadic splice is straightforward, and all your excellent questions have easy answers. The question remains of whether or not it's valuable.

Simon

Simon Peyton-Jones
Furthermore there must be no lambda between the "monadic splice" and the "do".
I'm curious about this. One could sugar:

do tax <- getTax
   return $ map (\price -> price * (1 + tax)) bill

into:

do return $ map (\price -> price * (1 + (<- getTax))) bill

Do you not think this is desirable? Is there a negative side-effect that I'm not noticing? I sort of see this in the same boat as Neil's example with if/then/else. The meaning may not be precisely what you'd expect... but mind-reading is hard, and it's more consistent to just say "find the innermost containing do block" than make up new rules for each piece of syntax. Granted, a special case of "it's an error" is far more appealing than the corresponding special case for if; but I don't yet see a reason for this exception to the rule either.

-- Chris Smith

Claus Reinke
to illustrate why some of us are concerned about this extension, a few examples might help.
Claus, I've been saving your message in order to respond to it when I have the time to look over it in detail. I don't think there will be forthcoming answers from me, though. Ultimately, it may just have to come down to implementing the extension, making it available as an extension to GHC and perhaps other Haskell compilers, and then learning from people's experience.

If there is a really good syntax that avoids the need for language changes, that would be great. If there's one that's clearly good enough and pops up before I finish this, then I may even abort the work. As it stands, though, I'm just not sure how to evaluate ideas without language changes against an alternative that doesn't exist. This is especially true when we're talking about non-quantifiable ideas like convenience, readability, and intuitiveness. As such, I'm happy to pursue the language change route, so that we'll have a real implementation and a fully developed idea, instead of a theory to discuss. I suspect it will then be more productive to talk about the options, such as whether the language change is really needed or beneficial.

Neil and I just discussed some of the semantic issues you raise here in another subthread. Some of them are not quite as intuitive as I'd like, but the meaning is at least well-defined. As for this thread, yes I agree with Simon that it's necessary to choose your "option c" and tie any new syntax rather tightly to the 'do' keyword; anything else involves becoming a mind-reader.
if (c), then the following are no longer equivalent
1. return ... 2. do return ...
Yes, that is true.
if you weren't, here's a quick question: we've already seen the left- and right-identity laws in danger, so what about associativity?
do { do { a; b}; c }
is still the same as
do { a; do { b; c } }
yes? no? perhaps? sometimes? how long did it take you?
I'm not entirely sure I understand the point here. The monad laws are defined in terms of >>= and return. They have never had anything to do with do, let, or <-. All of the monad laws still hold. -- Chris Smith

Hi
do { do { a; b}; c }
is still the same as
do { a; do { b; c } }
yes? no? perhaps? sometimes? how long did it take you?
I'm not entirely sure I understand the point here. The monad laws are defined in terms of >>= and return. They have never had anything to do with do, let, or <-. All of the monad laws still hold.
The Monad laws have never been defined in terms of do notation, but they have always held with do notation since it was simply basic sugar for >> and >>=. Now do notation is no longer as simple, and the laws do not hold on do, only on the desugared version. We have lost the ability to manipulate do quite as easily, and gained a more compact expression of monadic actions. I think the trade off is worth it, but others may not. Thanks Neil

Chris Smith wrote:
I'm primarily interested in the two cases where one simply has no choice about the use of monads: and those are IO and STM. No, this is not purely functional programming then; but it has some very compelling advantages to Haskell's implementation of these, that I'm afraid are currently quite hidden behind the difficult syntax. Using something besides a monad is simply not an option.
"The more tiresome monads are, the more incentive you have to avoid them." Unfortunately, I'm afraid this cheapens work people are doing in making the necessary imperative parts of Haskell more useful and interesting. Making monads distasteful is not a reasonable goal.
Can you think of a single widely used programming language that forces programmers to write linear one-line-per-operation code like this? IMO, Haskell gets away with this because STM and IO stuff isn't very common; and STM and IO will remain uncommon (and will instead be replaced by unsafe code written in Python or Ruby) as long as this is the case.
I mean it in the following way: the power of Haskell is that a large core of pure functions does the actual algorithmic work and is surrounded by a small layer of imperative code. It's not possible to avoid the small layer of imperative code, of course. But the more you treat imperative code as somewhat pure, the greater the danger that the purely functional logic will be buried inside a mess of imperative code. In other words, the goal is exactly to make IO and STM uncommon, otherwise you lose the power the purely functional approach offers.

What I want to say is: if you feel the urge to use the monad splicing syntax, then I think that there's a big chance that the code you write is in essence pure and can also be made completely pure. That's why I'd like to see the code that made you give up. It may require much more pondering to find a pure abstraction to the programming problem at hand. But once found, it bests any ad-hoc code.

For example, take the HGL (Haskell Graphics Library) which actually shows the boundary between pure and monad. The main abstraction is the type Graphic that represents a graphic to be drawn on the screen. It's implemented with a monad Draw a which in turn does IO to draw itself on the screen. But the abstraction is to treat this as a purely functional value with operations like

emptyGraphic :: Graphic
polygon      :: [Point] -> Graphic
overGraphic  :: Graphic -> Graphic -> Graphic

to create and compose graphics. To draw a graphic, you have to use IO. But this is no reason not to offer a pure abstraction even if the internals are littered with IO. HGL still exports the Draw monad

type Graphic = Draw ()

and I consider this a sin. It only appears as monad in the three functions

selectBrush :: Brush -> Draw Brush
selectPen   :: Pen -> Draw Pen
selectFont  :: Font -> Draw Font

which exist to enable the user to hand-optimize a bit since brush, font and pen creation is expensive on Win32. But arguably, these optimizations can be performed automatically under the hood.

An interesting example of how a purely functional data structure makes life much easier is described in

N. Ramsey and J. Dias. An Applicative Control-Flow Graph Based on Huet's Zipper. http://www.eecs.harvard.edu/~nr/pubs/zipcfg-abstract.html

<abstract>We are using ML to build a compiler that does low-level optimization. To support optimizations in classic imperative style, we built a control-flow graph using mutable pointers and other mutable state in the nodes. This decision proved unfortunate: the mutable flow graph was big and complex, and it led to many bugs. We have replaced it by a smaller, simpler, applicative flow graph based on Huet's (1997) zipper. The new flow graph is a success; this paper presents its design and shows how it leads to a gratifyingly simple implementation of the dataflow framework developed by Lerner, Grove, and Chambers (2002).</abstract>

That being said, it is of course desirable to be able to describe genuinely imperative behavior like resource (de-)allocation elegantly in Haskell. Not everything can be pure :) (or rather :( ). But I'm not sure whether the linearization imposed is really an issue then.
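To illustrate the split apfelmus describes, a rough sketch of HGL usage, written from memory against HGL's documented API (details such as openWindow's exact signature are best checked against the library):

import Graphics.HGL

-- purely functional composition of graphics
picture :: Graphic
picture = polygon [(10, 10), (100, 10), (55, 90)] `overGraphic` emptyGraphic

-- a thin IO layer that actually draws it
main :: IO ()
main = runGraphics $ do
    w <- openWindow "demo" (200, 200)
    drawInWindow w picture
    _ <- getKey w
    closeWindow w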
Ultimately, it may just have to come down to implementing the extension, making it available as an extension to GHC and perhaps other Haskell compilers.
As it stands, though, I'm just not sure how to evaluate ideas without language changes against an alternative that doesn't exist.
Hm, it seems slightly unfair to me to leave the burden of searching for an alternative to somebody else.
I can explain desugaring rules for this idea in a short paragraph. The alternatives all seem to involve operators and functions that I've not used in about six months or more of moderate playing around with Haskell.
In fact, applicative functors are a very useful and powerful abstraction and to some extent, they exactly solve the problem of programming with monads in an applicative style. I would be sad if you'd ignore them in case they solve your STM-code problem without a compiler extension. Regards, apfelmus

Hello apfelmus, Saturday, August 4, 2007, 12:22:53 AM, you wrote:
avoid the small layer of imperative code, of course. But the more you treat imperative code as somewhat pure, the greater the danger that the purely functional logic will be buried inside a mess of imperative code. In other words, the goal is exactly to make IO and STM uncommon, otherwise you lose the power the purely functional approach offers.
that's the point of view of a theoretical purist. i consider Haskell as a language for real world apps, and the need to write imperative code appears independently of our wishes. in particular, it's required to write very efficient code, to interact with existing imperative APIs, and to make programs which have explicit memory control (as opposed to lazy evaluation with GC)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Bulat Ziganshin wrote:
apfelmus wrote:
avoid the small layer of imperative code, of course. But the more you treat imperative code as somewhat pure, the greater the danger that the purely functional logic will be buried inside a mess of imperative code. In other words, the goal is exactly to make IO and STM uncommon, otherwise you lose the power the purely functional approach offers.
that's the point of view of a theoretical purist. i consider Haskell as a language for real world apps, and the need to write imperative code appears independently of our wishes. in particular, it's required to write very efficient code, to interact with existing imperative APIs, and to make programs which have explicit memory control (as opposed to lazy evaluation with GC)
No and yes. As I said, it is of course desirable to be able to describe genuinely imperative behavior elegantly in Haskell, like explicit memory control or concurrently accessing a bank account. However, most "genuinely imperative" things are often just a building block for a higher level functional model. The ByteString library is a good example: the interface is purely functional, the internals are explicit memory control. It's a bad idea to let the internal memory control leak out and pollute an otherwise purely functional program with IO-types.

Also, many "genuinely concurrent" things just aren't. An example is UNIX pipes, like say

cat Main.hs | grep "Warm, fuzzy thing"

The OS creates processes for "cat" and "grep" running concurrently, and "cat" passes a stream of characters to "grep". By blocking on the reader and the writer side, "grep" reads what "cat" writes in real-time. Well, but that's just good old lazy evaluation!

Regards, apfelmus
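The analogy can be made literal: the grep side of that pipe is an ordinary lazy list program. A sketch (isInfixOf stands in for real grep semantics):

import Data.List (isInfixOf)

-- Reads stdin lazily and emits matching lines as they arrive;
-- the blocking behaviour of the pipe is demand-driven evaluation.
main :: IO ()
main = interact (unlines . filter ("Warm, fuzzy thing" `isInfixOf`) . lines)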

Hello apfelmus, Wednesday, August 8, 2007, 11:33:41 AM, you wrote:
that's the point of view of a theoretical purist. i consider Haskell as a language for real world apps, and the need to write imperative code appears independently of our wishes. in particular, it's required to write very efficient code, to interact with existing imperative APIs, and to make programs which have explicit memory control (as opposed to lazy evaluation with GC)
No and yes. As I said, it is of course desirable to be able to describe genuinely imperative behavior elegantly in Haskell, like explicit memory control or concurrently accessing a bank account.
However, most "genuinely imperative" things are often just a building block for a higher level functional model.
you talk about some imaginary ideal world; i talk about my own experience. i write an archiver which includes a lot of imperative code. another of my projects is an I/O library which is imperative too. in both cases i want to make my work easier

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

apfelmus wrote:
However, most "genuinely imperative" things are often just a building block for a higher level functional model. The ByteString library is a good example: the interface is purely functional, the internals are explicit memory control. It's a bad idea to let the internal memory control leak out and pollute an otherwise purely functional program with IO-types.
Also, many "genuinely concurrent" things just aren't. An example is UNIX pipes, like say
cat Main.hs | grep "Warm, fuzzy thing"
The OS creates processes for "cat" and "grep" running concurrently, and "cat" passes a stream of characters to "grep". By blocking on the reader and the writer side, "grep" reads what "cat" writes in real-time. Well, but that's just good old lazy evaluation!

True, but the above program is just a trivial transformation from input to output, ie just using the computer as a calculator. For interactive programs you need to be able to implement a different kind of laziness, because the challenge is not just how to compute some output from some input, but how to maintain a mapping between input and output that respects some invariant in the presence of dynamic deltas to the input as the user enters information into the program, ensuring that the amount of computation done between each time the display is rendered is proportional to the delta rather than the entire input.
Hi Apfelmus,

This is a really interesting discussion that touches on issues I'm currently working with (I'm designing a strict version of Haskell to explore various ideas about modules, namespace management, and how to get really efficient machine code, but without losing the convenience of accurate garbage collection etc, and I'm having difficulties deciding between the monadic path and the "impure" path), so I've forked this new thread.

Regarding the quote above, if the API must hide explicit memory control from the user, the only way I can see of doing this would be to use (unsafePerformIO), which really is unsafe since Haskell relies on the fact that mutable operations can't escape from the IO monad in order to get away with not having to impose a value restriction as in ML. If you don't use (unsafePerformIO), then the slightest need for mutable data structures pollutes the entire interface. For example in the excellent paper you quoted

N. Ramsey and J. Dias. An Applicative Control-Flow Graph Based on Huet's Zipper. http://www.eecs.harvard.edu/~nr/pubs/zipcfg-abstract.html

the authors are pleased to have found an "Applicative" solution, and indeed their solution has many useful and instructive aspects. However on page 111, hidden away in the definition of their API function to create a label, is a call to (ref 0) !!!! ;-) The equivalent implementation in Haskell would completely destroy all hope of using this in a pure context and force all use of the API into the IO monad.

Haskell and ML seem to stand at opposite poles. Haskell is designed so that any attempt at abstracting mutable local state will infect the entire program (modulo use of a highly dangerous function whose semantics is entirely unclear, depending on the vagaries of evaluation strategy of the particular compiler), whereas people have been struggling in the ML community for at least 15 years to try and find a way to hide the fact that mutable storage is being used (cf attempts to find a proper solution to the unsoundness introduced by polymorphism and ref without having to use imperative/weak type variables etc).

Meanwhile, C++, C#, and Java programmers get a type system (thinking only about static methods using generics/templates) that seems to me no less powerful than that of the prenex polymorphism of ML, yet without any of the unsoundness problems, and therefore without the need of a value restriction (afaiu this is because their template/generic definitions stand in for an infinite family of monomorphic bindings, instead of ML which tries unsuccessfully to make one piece of memory represent each element of an infinite family of values simultaneously).

Not only this, but there seems to me to be many problems for which it is natural to think in terms of objects with identity and mutable state. I can readily discard the concepts of deep inheritance hierarchies and open recursion, but this still leaves the problem of identity and localised mutability. For example consider a simple GUI with 2 edit widgets, a splitter, and a single buffer that contains the text being edited. The idea is that you should be able to use either edit widget to interact with the same buffer, eg to edit at different locations in the buffer.
A simple imperative solution might be something like:

main = do
    buffer <- createBuffer
    edit1 <- createEdit buffer
    edit2 <- createEdit buffer
    splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
    runMessageLoopWith splitter

Here it's really clear what's happening, and different objects in the program correspond exactly to how I think about what I'm trying to do, ie I want to create individual objects with identity and then plug them together. I don't want to have to bother about passing state around, or having to define a new data type every time I want a different configuration of widgets. Thus the ability to abstract mutable state gives to my mind by far the best solution.

In contrast, all the pure functional GUIs that I've seen are just wrappers around someone else's imperative code, and moreover, they exchange the simplicity of the object oriented imperative API for a veritable mindstorm of unbelievably heavy, clunky, slow, inefficient, inextensible, hard to understand encodings that seem to me to have the effect of turning a high level language into some kind of circuit board (I'm thinking of arrows etc). Indeed everyone seems to be using either WxWidgets or Gtk2Hs, which kind of proves my point that in this domain at least imperative solutions are generally simpler than functional ones...

So in summary, for me the good things about Haskell are nothing to do with functional purity or laziness but are instead to do with the fact that it's basically the only statically typed modern language (apart from OCaml, MLton, and MLKit) that has a free compiler to unburdened native code (apart from the LGPL gnuMP hard wired into the runtime in ghc, which is one reason I'm writing my own compiler, so I can actually put my own programs on the web to sell...) and accurate garbage collection (otherwise I'd be happy to use C++). (The great type system, good syntax, well designed foreign function interface, ability to control use of APIs with great precision by using phantom type permissions in the monad(s), and of course millions of interesting research papers and discussions on this list are an extra bonus.)

Summary of summary: Haskell is a good language for imperative programming, and the pure functional subset has failed to yield a practical GUI even after more than a decade of research. I've wasted at least a whole year of sleepless nights trying to work out how to represent an efficient GUI without using mutable data, and the feeling that there should be a pure solution made me abandon a perfectly workable imperative GUI that I started 18 months ago, that if I'd written it in any other language without this pure/impure conundrum, would have made me very happy with it.

Best regards, Brian.

Brian Hulley wrote:
hidden away in the definition of their API function to create a label, is a call to (ref 0) !!!! ;-) The equivalent implementation in Haskell would completely destroy all hope of using this in a pure context and force all use of the API into the IO monad.
Really? I would have thought this is a job for the ST monad, in which case the API can be wrapped up in a runST or similar; no need for leaking IO monads. Or am I missing something?

Regards, Martin
My music: http://www.youtube.com/user/thetonegrove (please visit!)
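Martin's suggestion in code form: the standard idiom of a pure function whose implementation uses a mutable STRef internally (a textbook sketch, not from the thread):

import Control.Monad.ST (runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef)

-- Pure from the outside: the type says so, and the rank-2 type of
-- runST guarantees the STRef cannot escape.
sumST :: Num a => [a] -> a
sumST xs = runST $ do
    ref <- newSTRef 0
    mapM_ (\x -> modifySTRef ref (+ x)) xs
    readSTRef ref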

Martin Percossi wrote:
Brian Hulley wrote:
hidden away in the definition of their API function to create a label, is a call to (ref 0) !!!! ;-) The equivalent implementation in Haskell would completely destroy all hope of using this in a pure context and force all use of the API into the IO monad.
Really? I would have thought this is a job for the ST monad, in which case the API can be wrapped up in a runST or similar; no need for leaking IO monads.
Or am I missing something?
Well I agree you're right on this particular use of a Ref, since their program is only dealing with a mapping from input to output, so once they're finished using the data structure there is no longer any need for the ref and so the result can be returned to the rest of the program. However there are 2 problems with this approach in general afaics:

1) All code that uses this data structure, ie that participates in the implementation of the transformations by using the API functions, will have to be in a monad (ST or IO, it really doesn't matter in terms of all the perceived burdensomeness of do notation relative to normal applicative code).

2) The example doesn't transfer to an interactive situation, where we would need to keep the data structure around and use it forever - because we would be forever trapped inside the ST monad, otherwise we'd lose that particular STRef which we were hoping to use to efficiently update the output in the face of a delta in the input.

Corey - I found this page helpful to get an understanding of the value restriction in ML: http://www.smlnj.org/doc/Conversion/types.html

The restriction was proposed by Andrew Wright in 1995: "Simple Imperative Polymorphism" by Wright http://citeseer.ist.psu.edu/wright95simple.html

Some other related papers are:

"The Type and effect discipline" by Talpin and Jouvelot, 1992
"Standard ML-NJ weak polymorphism and imperative constructs" by Hoang, Mitchell, and Viswanathan
"Weak polymorphism can be sound" by Greiner, 1993

and more recently (2003), "Relaxing the value restriction" by Garrigue http://citeseer.ist.psu.edu/garrigue03relaxing.html

(Note that even now there is still no real solution to it.)

Best regards, Brian.

On 8/8/07, Brian Hulley
In contrast, all the pure functional GUIs that I've seen are just wrappers around someone else's imperative code, and moreover, they exchange the simplicity of the object oriented imperative API for a veritable mindstorm of unbelievably heavy, clunky, slow, inefficient, inextensible, hard to understand encodings that seem to me to have the effect of turning a high level language into some kind of circuit board (I'm thinking of arrows etc).
In defense of Haskell (wow!), note that imperative languages are not without problems in GUIs. In a multithreaded environment, typically only one thread is allowed to manage the GUI, and then you typically need to set up some sort of message-passing system to communicate between this thread and the others AFAIK? This is a total PITA, so if someone has a solution for this that would rock :-)

Question: to what extent do the Haskell wrappers around gtk and wxWidgets suffer from this problem? I mean, I haven't tried them, so it's a genuine question.

(Note: off the top of my head, in an imperative language, I guess one could use some sort of generator to take an interface and generate the message classes, and marshalling classes automatically)

(Disclaimer: I haven't really searched very hard for ways to handle threading in GUIs in imperative languages, since mostly I either use web pages as the visual interface, which avoids the problem, or use a single thread, which also avoids the problem)

Hugh Perkins wrote:
On 8/8/07, Brian Hulley wrote: In contrast, all the pure functional GUIs that I've seen...

In defense of Haskell (wow!), note that imperative languages are not without problems in GUIs. In a multithreaded environment,

If you're using multiple threads you'd need to be in the IO monad to create/manipulate the MVars or TVars or whatever. (In contrast to eg AliceML where you could easily use a synchronising logic variable without having to leave all the familiar applicative coding comforts behind to brave the fiercesome lonely toil of "Monadia" ;-))
typically only one thread is allowed to manage the GUI, and then you typically need to set up some sort of message-passing system to communicate between this thread and the others AFAIK? This is a total PITA, so if someone has a solution for this that would rock :-)
Question: to what extent do the Haskell wrappers around gtk and wxWidgets suffer from this problem? I mean, I haven't tried them, so it's a genuine question.

I don't know, though I seem to recall some info on this on the website of Gtk2Hs.
(Note: off the top of my head, in an imperative language, I guess one could use some sort of generator to take an interface and generate the message classes, and marshalling classes automatically)
(Disclaimer: I haven't really searched very hard for ways to handle threading in GUIs in imperative languages, since mostly I either use web pages as the visual interface, which avoids the problem, or use a single thread, which also avoids the problem)
So far I've always managed to get away with just using a single threaded GUI. I think you run into problems with XLib and OpenGL (on GNU/Linux at least) if you try to call into those APIs from multiple threads and also it seems better to separate concerns by having all rendering code, including cacheing of geometry etc, in the same thread since it's easy enough to spawn new threads to do calculations and set a flag in the relevant widget when the result is complete... Best regards, Brian.

brianh:
Hugh Perkins wrote:
On 8/8/07, Brian Hulley wrote: In contrast, all the pure functional GUIs that I've seen...

In defense of Haskell (wow!), note that imperative languages are not without problems in GUIs. In a multithreaded environment,

If you're using multiple threads you'd need to be in the IO monad to create/manipulate the MVars or TVars or whatever. (In contrast to eg AliceML where you could easily use a synchronising logic variable without having to leave all the familiar applicative coding comforts behind to brave the fiercesome lonely toil of "Monadia" ;-))
Or use purely functional channels (Chan). -- Don

On 8/8/07, Brian Hulley
Regarding the quote above, if the API must hide explicit memory control from the user the only way I can see of doing this would be to use (unsafePerformIO), which really is unsafe since Haskell relies on the fact that mutable operations can't escape from the IO monad in order to get away with not having to impose a value restriction as in ML.
My theory is weak. Can somebody point me the way to educate myself about the "value restriction" in ML? Thanks!

-- Corey O'Connor

Brian Hulley wrote:
Haskell is designed so that any attempt at abstracting mutable local state will infect the entire program (modulo use of a highly dangerous function whose semantics is entirely unclear, depending on the vagaries of evaluation strategy of the particular compiler)
(Your email message is long and very interesting, and it does a considerable injustice to take one sentence out of context, but...)

This echoes a misconception that I see here on haskell-cafe quite often. Mutable local state *really* doesn't need to infect the whole program, and haskell is certainly not designed so it must. We have all kinds of techniques for ensuring that the pure parts of your code can remain pure, and only the impure parts get 'infected' with an IO signature. Additionally, if it's just refs, you can use ST, which is totally pure.

If it's literally just state, you can use the techniques of the State monad and the Reader monad: but you don't necessarily have to use them explicitly with those names. Sometimes it is actually simpler just to use the types s -> (a,s) and s -> a directly; only in certain circumstances is the extra plumbing useful. Often different parts of your program have different needs; some parts actually need the ability to make fresh names (and so need STRefs), other parts merely read the state (and can use Reader techniques), and other parts alter it (and can use State techniques). You need some plumbing to connect the different parts together, but fortunately haskell has powerful abstraction and it's quite easy to slap together the higher-order functions (combinators!) to do this.

Jules
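A minimal sketch of the kind of plumbing Jules describes, using the raw types s -> (a, s) and s -> a directly (all names here are invented for illustration):

-- a 'reader' part that only inspects the state
report :: Int -> String
report n = "count = " ++ show n

-- a 'state' part that transforms it
tick :: Int -> ((), Int)
tick n = ((), n + 1)

-- hand-rolled plumbing connecting the two; the State and Reader
-- monads would package exactly this pattern
step :: Int -> (String, Int)
step s0 = let ((), s1) = tick s0
          in (report s1, s1)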

Brian Hulley schrieb:
apfelmus wrote:
However, most "genuinely imperative" things are often just a building block for a higher level functional model. The ByteString library is a good example: the interface is purely functional, the internals are explicit memory control. It's a bad idea to let the internal memory control leak out and pollute an otherwise purely functional program with IO-types.
Regarding the quote above, if the API must hide explicit memory control from the user the only way I can see of doing this would be to use (unsafePerformIO), which really is unsafe since Haskell relies on the fact that mutable operations can't escape from the IO monad in order to get away with not having to impose a value restriction as in ML.
Indeed, Data.ByteString makes heavy use of unsafePerformIO and inlinePerformIO. This is safe since it's just used for efficient memory access and (de-)allocation; the ByteStrings themselves are immutable.
If you don't use (unsafePerformIO), then the slightest need for mutable data structures pollutes the entire interface.
Well, any code that wants to mutate or read this data structure has to announce so in the type signature. However, it's debatable whether certain forms of "mutation" count as pollution. In fact, the simplest "mutation" is just a function s -> s. Haskell is thoroughly "polluted" by such "mutations":

(3+)         :: Int -> Int
([1,2]++)    :: [Int] -> [Int]
insert "x" 3 :: Map String Int -> Map String Int

Of course, from the purely functional point of view, this is hardly perceived as mutation since the original value is not changed at all and is still available. In other words, the need to "change" a value doesn't imply the need to discard (and thus mutate) the old one.

Mutable data structures in the sense of ephemeral (= not persistent = update in-place) data structures indeed do introduce the need to work in ST, since the old version is - by definition - not available anymore. This may be the right thing to do when the data structure is inherently used in a single-threaded fashion. However, most used-to-be ephemeral data structures have very good persistent counterparts in Haskell. In the end, the type just reflects the inherent difficulty of reasoning about ephemeral data structures. And that's what the quoted paper illustrates: persistent data structures are much easier to deal with.
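A tiny concrete illustration of that persistence, using the standard Data.Map (the values here are invented):

import qualified Data.Map as Map

m0, m1 :: Map.Map String Int
m0 = Map.fromList [("x", 1), ("y", 2)]
m1 = Map.insert "x" 3 m0   -- an "update" that leaves m0 untouched

-- Map.lookup "x" m0 == Just 1 and Map.lookup "x" m1 == Just 3:
-- both versions remain available, so nothing was mutated.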
For example in the excellent paper you quoted
N. Ramsey and J. Dias. An Applicative Control-Flow Graph Based on Huet's Zipper. http://www.eecs.harvard.edu/~nr/pubs/zipcfg-abstract.html
the authors are pleased to have found an "Applicative" solution, and indeed their solution has many useful and instructive aspects. However on page 111, hidden away in the definition of their API function to create a label, is a call to (ref 0) !!!! ;-) The equivalent implementation in Haskell would completely destroy all hope of using this in a pure context and force all use of the API into the IO monad.
I don't know enough ML or have the background to judge whether this ref is really necessary, but I doubt that it can't be designed away.
Haskell is designed so that any attempt at abstracting mutable local state will infect the entire program
Depends on "local". In general, I think is a good thing. The type reflects how difficult your program really is, nothing more, nothing less. That's how it is: persistent data and prue functions are sooo much easier to reason about. Implicit side effects just sweep the difficulty under the carpet. (I imagine a tool that makes implicit side effects explicitly visible in the types of say C or ML programs. I guess that people would scream whole nights when seeing the output of this tool on their programs and thus discovering how complicated the code really is ... Well, maybe not since they're used to it during debugging anyway.) But if the state is really local, no infection of the entire program takes place! The best example is probably indeed the Haskell Graphics library. The are pure functions for constructing graphics over :: Graphic -> Graphic -> Graphic polygon :: [Point] -> Graphic and some IO-infected functions to draw those onto the screen drawInWindow :: Window -> Graphic -> IO () Now, Graphic may be implemented as an abstract data type and drawInWindow does the workload of interpreting it. Or, and that's how HGL currently implementes it, it can be an IO-action that encodes how to draw it type Graphics = Draw () ~= (Brush,Font,Pen) -> IO () That is, every graphic is "infested" with IO but that doesn't spread to the API. (It does a bit with selectBrush but that can be corrected.)
(modulo use of a highly dangerous function whose semantics is entirely unclear, depending on the vagaries of evaluation strategy of the particular compiler)
(yes, unsafePerformIO clearly isn't for ephemeral data structures.)
For example consider a simple GUI
Ah, the dreaded functional GUI problem. Yes, I agree that a good purely functional way of declaring a GUI has not been discovered yet, the signals and streams somehow miss something important.
I've wasted at least a whole year of sleepless nights trying to work out how to represent an efficient GUI without using mutable data, and the feeling that there should be a pure solution made me abandon a perfectly workable imperative GUI that I started 18 months ago, that if I'd written it in any other language without this pure/impure conundrum, would have made me very happy with it.
It indeed seems that the "mathematics" behind GUIs are inherently difficult and the easiest framework to declare GUIs _for now_ is an imperative one. That doesn't mean that a simpler one doesn't exist. Note that word _declare_: you don't want to mutate state a priori, you want to say what is displayed when and somehow describe the data dependencies. Once a domain specific language to declare GUIs is found, I'm sure that it can naturally be embedded in Haskell.
For example consider a simple GUI with 2 edit widgets, a splitter, and a single buffer that contains the text being edited. The idea is that you should be able to use either edit widget to interact with the same buffer eg to edit at different locations in the buffer. A simple imperative solution might be something like:
main = do
    buffer <- createBuffer
    edit1 <- createEdit buffer
    edit2 <- createEdit buffer
    splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
    runMessageLoopWith splitter
Here it's really clear what's happening, and different objects in the program correspond exactly to how I think about what I'm trying to do ie I want to create individual objects with identity and then plug them together. I don't want to have to bother about passing state around, or having to define a new data type every time I want a different configuration of widgets. Thus the ability to abstract mutable state gives to my mind by far the best solution.
I'm not sure whether mutable state is the real goodie here. I think it's the ability to independently access parts of a compound state. In other words, the IORef created by buffer is a part of the total program state but you can access it independently. There is a functional idiom for that, see also

Sander Evers, Peter Achten, and Jan Kuper. "A Functional Programming Technique for Forms in Graphical User Interfaces". http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf
apfelmus wrote:
However, most "genuinely imperative" things are often just a building block for a higher level functional model.
Thanks to your response, I think I can better articulate what I mean: with "purely functional", I mean "declarative", i.e. the ability to write down equations of how different things interact with each other and thus to abstract away their implementation. For example:

- For Graphics, I want to build a graphic from smaller ones and then draw it. I don't want to know how drawing is implemented and what mutable state might be involved.
- For a GUI, I want to write down the data dependencies and a library converts this to a mesh of mutable state.

That's what I mean with "higher level functional model". Syntactic sugar for applying monadic actions doesn't help with that. In fact, it intends to make it easier to write examples and miss the pattern/model behind. Likewise, allowing impure functions doesn't help with formulating or finding a model at all. Rather, it makes describing the model more error-prone.

Of course, I want to implement the imperative machinery too. But most often, deriving it from the underlying model is straightforward.

Regards, apfelmus

apfelmus wrote:
Brian Hulley schrieb:
main = do
    buffer <- createBuffer
    edit1 <- createEdit buffer
    edit2 <- createEdit buffer
    splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
    runMessageLoopWith splitter
... Thus the ability to abstract mutable state gives to my mind by far the best solution.
I'm not sure whether mutable state is the real goodie here. I think it's the ability to independently access parts of a compound state. In other words, the IORef created by buffer is a part of the total program state but you can access it independently. There is a functional idiom for that, see also
Sander Evers, Peter Achten, and Jan Kuper. "A Functional Programming Technique for Forms in Graphical User Interfaces". http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf
Thanks for this reference. This is indeed a real key to the problem. (Though a possible downside with compositional references might be efficiency, as the modified sub-state needs to be injected back into a new composite state, but perhaps the solution here would be to have uniqueness typing as in Clean, so that these injections could hopefully be erased at compile time.)

I think one of the issues with Haskell is that there are so many features to choose from that it is difficult to know how to approach a problem, eg for streams you can have:

1) A lazy list
2) A typeclass with get and pushBack methods
3) An object using an existential to wrap (2)
4) A record containing get and pushBack methods
5) A monad with get and pushBack actions
6) A simple function wrapped in a newtype:

newtype Stream a = Stream (() -> Maybe (a, Stream a))

and I tend to only discover a simple solution like (6) (which works equally well for both strict and lazy languages) after spending an enormous amount of time on 1,2,3,4,5... ;-)
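To make option (6) concrete, here is a small sketch of how such a stream can be built and consumed (fromList and takeS are invented helper names):

newtype Stream a = Stream (() -> Maybe (a, Stream a))

-- build a stream from a list; the thunk is forced one step at a time
fromList :: [a] -> Stream a
fromList []     = Stream (\() -> Nothing)
fromList (x:xs) = Stream (\() -> Just (x, fromList xs))

-- pull at most n elements out of a stream (assumes n >= 0)
takeS :: Int -> Stream a -> [a]
takeS 0 _          = []
takeS n (Stream f) = case f () of
    Nothing      -> []
    Just (x, s') -> x : takeS (n - 1) s'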
- For Graphics, I want to build a graphic from smaller ones and then draw it. I don't want to know how drawing is implemented and what mutable state might be involved.
- For a GUI, I want to write down the data dependencies and a library converts this to a mesh of mutable state.
That's what I mean with "higher level functional model".

I agree this would be ideal. A challenge I don't yet know how to solve, when dealing with 3d graphics, is that it seems that for efficiency it is necessary to consider a mesh of triangles to be an object with identity in order to be able to display an updated mesh (eg as the user drags a vertex with the mouse) in real time. This is because the representation of a mesh is constrained by the low level details of the graphics system, eg vertices might need to be represented by a contiguous array of unboxed positions and normals, and triangles by a contiguous array of vertex indices, and it is too expensive to copy these arrays on each frame. Perhaps though this is another case where some form of uniqueness typing as in Clean could come to the rescue, so one could write:
createMesh :: [Vertex] -> [[VertIndex]] -> Mesh
moveVertex :: Vertex -> *Mesh -> *Mesh

instead of

createMesh :: [Vertex] -> [[VertIndex]] -> IO Mesh
moveVertex :: Vertex -> Mesh -> IO ()

Best regards, Brian.

Brian Hulley wrote:
apfelmus wrote:
Brian Hulley schrieb:
main = do
    buffer <- createBuffer
    edit1 <- createEdit buffer
    edit2 <- createEdit buffer
    splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
    runMessageLoopWith splitter
... Thus the ability to abstract mutable state gives to my mind by far the best solution.
I'm not sure whether mutable state is the real goodie here. I think it's the ability to independently access parts of a compound state.
http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf
This is indeed a real key to the problem. Of course this is only one aspect of the problem...
Thinking about this a bit more, and just so this thought is recorded for posterity (!) and for the benefit of anyone now or in a few hundred years time, trying to solve "Fermat's last GUI", the object oriented solution allows the buffer object to do anything it wants, so that it could negotiate a network connection and implement the interface based on a shared network buffer for example, without needing any changes to the client code above, so a functional gui would need to have the same flexibility to compete with the OO solution.

Another thing that would be interesting would be to have a formal treatment of what is supposed to happen in a gui. For example, when you move the mouse over a control which has become dirty (ie needs to be re-rendered because its state is now inconsistent), what should the control do? Should it respond as if the new state were already visible to the user, or should it interpret the mouse position according to the last state that was rendered, or should it just ignore all mouse events until the next time it gets rendered?

This is not a trivial question, because you could imagine an animated control where the user might naturally be following the movement, whereas when the user clicks on a cell in a spreadsheet when the cells to the left have now expanded due to a change in data, thus moving the cell along (but where this updated model has not yet been re-rendered), the user might be irritated at the wrong cell being selected... It's tricky little issues like this that I haven't found any documentation for anywhere, and which would make a proper mathematical treatment of interaction with a gui very useful, regardless of whether it is implemented in OOP or functional style.

Anyway just a thought, Brian.

Brian Hulley wrote:
Thinking about this a bit more, and just so this thought is recorded for posterity (!) and for the benefit of anyone now or in a few hundred years time, trying to solve "Fermat's last GUI", the object oriented solution allows the buffer object to do anything it wants, so that it could negotiate a network connection and implement the interface based on a shared network buffer for example, without needing any changes to the client code above, so a functional gui would need to have the same flexibility to compete with the OO solution.
Probably it would be parametric in the input mechanism, somehow. (A Haskell approach might use type classes, slightly obscuring the parametricity...)
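For instance, a minimal sketch of what such parametricity could look like (the class and instance names here are invented):

import Data.IORef

-- Client code polymorphic in b never learns how the buffer works,
-- so a network-backed instance could be slotted in later.
class Buffer b where
  readContents  :: b -> IO String
  writeContents :: b -> String -> IO ()

newtype LocalBuffer = LocalBuffer (IORef String)

instance Buffer LocalBuffer where
  readContents  (LocalBuffer r)   = readIORef r
  writeContents (LocalBuffer r) s = writeIORef r s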
Another thing that would be interesting would be to have a formal treatment of what is supposed to happen in a gui. For example, when you move the mouse over a control which has become dirty (ie needs to be re-rendered because its state is now inconsistent), what should the control do? Should it respond as if the new state were already visible to the user, or should it interpret the mouse position according to the last state that was rendered, or should it just ignore all mouse events until the next time it gets rendered?

This is not a trivial question, because you could imagine an animated control where the user might naturally be following the movement, whereas when the user clicks on a cell in a spreadsheet when the cells to the left have now expanded due to a change in data, thus moving the cell along (but where this updated model has not yet been re-rendered), the user might be irritated at the wrong cell being selected... It's tricky little issues like this that I haven't found any documentation for anywhere, and which would make a proper mathematical treatment of interaction with a gui very useful, regardless of whether it is implemented in OOP or functional style.
Jef Raskin (late interface designer, author of _The Humane Interface_) would probably say that anything with such importance to user decisions should be rendered within a tenth of a second. Computers fifteen years ago could sometimes do it! Fancy details can be filled in later if it takes that long. Of course that completely dodges the mathematical question... in which human response time should really be taken into account too! Humans really are not like machines, and are not all alike either! Oh no, do we need psychological formalisms?

Firefox suffers the above problems badly, alas - the "Stop" button is half useless because it doesn't even notice you pressed it for such a long time, etc...

Reading up on user interface design principles, as well as thinking functionally, is probably a useful approach - although not everything that you read will agree or be right. The whole concept of GUIs - they are very complicated - it is quite arguable that they are just a wrong interface - however, some of the world's people are fortunate enough to be accustomed to them already, which complicates matters considerably.

Isaac

Brian Hulley wrote:
Brian Hulley wrote:
apfelmus wrote:
Brian Hulley schrieb:
main = do
    buffer   <- createBuffer
    edit1    <- createEdit buffer
    edit2    <- createEdit buffer
    splitter <- createSplitter (wrapWidget edit1) (wrapWidget edit2)
    runMessageLoopWith splitter
... Thus the ability to abstract mutable state gives, to my mind, by far the best solution.
I'm not sure whether mutable state is the real goodie here. I think it's the ability to independently access parts of a compound state. http://www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf
This is indeed a real key to the problem. Of course this is only one aspect of the problem...
Thinking about this a bit more, and just so this thought is recorded for posterity (!) and for the benefit of anyone now or in a few hundred years time, trying to solve "Fermat's last GUI", the object oriented solution allows the buffer object to do anything it wants, so that it could negotiate a network connection and implement the interface based on a shared network buffer for example, without needing any changes to the client code above, so a functional gui would need to have the same flexibility to compete with the OO solution.
I'd be careful. Introducing a network connection into the equation makes the object (its methods) susceptible to a whole new bunch of failure modes; think indefinite delays, connection loss, network buffer overflow, etc etc. It may be a mistake to abstract all that away; in fact I am convinced that the old Unix habit of sweeping all these failure modes and potentially long delays under a big carpet named 'file abstraction' was a bad idea to begin with. The age-old and still unsolved problem of web browsers hanging indefinitely (w/o allowing any GUI interaction) while name resolution waits for completion is only the most prominent example. Cheers Ben

Benjamin Franksen wrote:
I'd be careful. Introducing a network connection into the equation makes the object (its methods) susceptible to a whole new bunch of failure modes; think indefinite delays, connection loss, network buffer overflow, etc etc. It may be a mistake to abstract all that away; in fact I am convinced that the old Unix habit of sweeping all these failure modes and potentially long delays under a big carpet named 'file abstraction' was a bad idea to begin with. The age-old and still unsolved problem of web browsers hanging indefinitely (w/o allowing any GUI interaction) while name resolution waits for completion is only the most prominent example.
IMO it's just a terribly stupid bug in the best web browsers. Maybe inefficient, poorly, or not-at-all-used multithreading?

"file abstraction" has its points. We just need a (type-level?) clear-to-program-with distinction between operations that may block indefinitely, and operations that have particular bounds on their difficulty. Although, modern OSes try to balance too many things, don't usually make any such hard real-time guarantees, in favor of everything turning out more-or-less correct eventually. Back to "file abstraction" - well, considering the benefits of mounting remote systems as a filesystem. The hierarchy abstraction of the filesystem didn't stay the same performance characteristics... And all kinds of potential problems result too, when the connection breaks down! How do you program with all those error conditions explicitly? It is difficult. You need libraries to do it well - and I'm not at all sure whether there exist such libraries yet! I mean, programs are much too complicated already without infesting them with a lot of special cases.

indefinite delays

I can create with `someCommand | haskellProgram` too

connection loss

Is there a correct way to detect this? I find it rather odd when I lose my IRC connection for a moment and then it comes back a moment later (Wesnoth games are worse, apparently, as they don't reconnect automatically). I often prefer considering them an indefinite delay.

network buffer overflow

that is: too much input, not processing it fast enough? (or similar). Memory size limitations are considerably unhandled in programs of all sorts, not just networked ones, though they (networked) may suffer the most. We wish we had true unlimited-memory Turing machines :) ...this is possibly the most difficult issue to deal with formally. Probably requires limiting input data rates artificially.

Isaac

Benjamin Franksen wrote:
I'd be careful. Introducing a network connection into the equation makes the object (its methods) susceptible to a whole new bunch of failure modes; think indefinite delays, connection loss, network buffer overflow, etc etc. It may be a mistake to abstract all that away; in fact I am convinced that the old Unix habit of sweeping all these failure modes and potentially long delays under a big carpet named 'file abstraction' was a bad idea to begin with. The age-old and still unsolved problem of web browsers hanging indefinitely (w/o allowing any GUI interaction) while name resolution waits for completion is only the most prominent example.

Isaac Dupree wrote:

IMO it's just a terribly stupid bug in the best web browsers. Maybe inefficient, poorly, or not-at-all-used multithreading?
Explicitly creating a (system) thread with all the overhead (in computing resources, as well as code complexity) only because the system interface is broken? Yes, of course, necessary, but not nice. An extra parameter for a continuation would be a lot more light-weight and would also make explicit that we must expect the call to be delayed. I think the main reason why systems don't regularly employ this scheme is that it is so tedious to work with closures in low-level languages like C.
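A sketch of the continuation-taking interface described above (all names are invented; slowLookup just simulates a blocking resolver, and the forkIO merely stands in for whatever the system would do internally):

import Control.Concurrent (forkIO)

type HostName  = String
type IPAddress = String

-- Stand-in for the real, possibly long-blocking resolver.
slowLookup :: HostName -> IO IPAddress
slowLookup _ = return "192.0.2.1"

-- The continuation-passing wrapper: returns immediately and invokes
-- the continuation whenever (if ever) the result arrives.
resolveName :: HostName -> (IPAddress -> IO ()) -> IO ()
resolveName host k = do
  _ <- forkIO (slowLookup host >>= k)
  return ()

The extra parameter makes the possible delay explicit in the type: nothing after the call to resolveName may assume the address is already known.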
"file abstraction" has its points. We just need a (type-level?) clear-to-program-with distinction between operations that may block indefinitely, and operations that have particular bounds on their difficulty. Although, modern OSes try to balance too many things, don't usually make any such hard real-time guarantees, in favor of everything turning out more-or-less correct eventually. Back to "file abstraction" - well, considering the benefits of mounting remote systems as a filesystem. The hierarchy abstraction of the filesystem didn't stay the same performance characteristics... And all kinds of potential problems result too, when the connection breaks down!
Indeed, as I have experienced multiple times: NFS clients completely hanging for minutes, enforcing a coffee break for the whole office! Not that I would mind a coffee break now and then, but it tends to happen in the middle of an attempt to fix a critical bug in the production system...
How do you program with all those error conditions explicitly? It is difficult. You need libraries to do it well - and I'm not at all sure whether there exist such libraries yet! I mean, programs are much too complicated already without infesting them with a lot of special cases.
What I would like to have is a clear distinction, apparent in the type, between actions that can be expected to terminate fast and with certainty (apart from broken hardware, that is) and others which are inherently insecure and may involve considerable or even indefinite delays. The latter should accept a continuation argument. However, there is obviously a judgement call involved here. Thus, the system should be flexible enough to allow treating the same resource either as one or the other, depending on the demands of the application. There may be situations where a name lookup can safely be treated as a synchronous operation (e.g. a script run as a cron job); in other situations one might need to regard even local bus access to some I/O card as asynchronous.
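One way to make that distinction apparent in the type, sketched with invented names: fast, certain actions stay in plain IO, while delay-prone ones only offer a continuation interface, with an explicit escape hatch for callers (like a cron job) that choose to block:

import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

-- Delay-prone actions: no way to get the result except by
-- supplying a continuation.
newtype Async a = Async { withResult :: (a -> IO ()) -> IO () }

-- The judgement call in code: collapse an Async back into
-- synchronous IO where the application decides blocking is fine.
await :: Async a -> IO a
await (Async f) = do
  v <- newEmptyMVar
  f (putMVar v)
  takeMVar v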
indefinite delays

I can create with `someCommand | haskellProgram` too
Yes, user input as well as reading from a pipe should be handled like a network connection: call my continuation whenever input is available.
connection loss

Is there a correct way to detect this?
There are many ways, typically involving some sort of beacons. Anyway, if all else fails the operation times out.
I find it rather odd when I lose my IRC connection for a moment and then it comes back a moment later (Wesnoth games are worse, apparently, as they don't reconnect automatically). I often prefer considering them an indefinite delay.
Right: The user (the application) is in the best position to decide how long to wait before an operation times out.
network buffer overflow

that is: too much input, not processing it fast enough? (or similar).
Yeah, though usually the other way around, i.e. too much output and the system can't send it fast enough (maybe because the other side is slow in accepting data, or because the connection is bad, or whatever).
Memory size limitations are considerably unhandled in programs of all sorts, not just networked ones, though they(networked) may suffer the most.
It is usually not a problem with modern desktop or server systems, rather with so called 'real-time' OSes, where everything tends to be statically allocated.
We wish we had true unlimited-memory Turing machines :) ...this is possibly the most difficult issue to deal with formally. Probably requires limiting input data rates artificially.
That's what one does (or tries to do, until some arbitrary network problem, ill-configured switch or whatever, slows down the network just a bit more and for just a bit longer than you imagined could happen and the system needs to be rebooted). Anyway, scrap the network buffer overflow, I mentioned it only because we've been bitten by it just today. Cheers Ben

On Tue, 14 Aug 2007, Benjamin Franksen wrote: ...
I'd be careful. Introducing a network connection into the equation makes the object (its methods) susceptible to a whole new bunch of failure modes; think indefinite delays, connection loss, network buffer overflow, etc etc. It may be a mistake to abstract all that away; in fact I am convinced that the old Unix habit of sweeping all these failure modes and potentially long delays under a big carpet named 'file abstraction' was a bad idea to begin with. The ages old and still not solved problems with web browsers hanging indefinitely (w/o allowing any GUI interaction) while name resolution waits for completion is only the most prominent example.
Ironically, the place where all this sweeping under the carpet has caused me personally the most irritation is one of the most appealing file abstractions - remote disk filesystems, NFS. In any case, I agree (I think) that a sophisticated user interface needs to deal with time. I think that's a key motivation for reactive object approaches. It has to be considered part of the equation, along with the rest of the I/O situation, if you're trying to reason about it that way. Donn Cave, donn@drizzle.com

apfelmus wrote:
(3+) :: Int -> Int
([1,2]++) :: [Int] -> [Int]
insert "x" 3 :: Map String Int -> Map String Int
Of course, from the purely functional point of view, this is hardly perceived as mutation since the original value is not changed at all and still available. In other words, the need to "change" a value doesn't imply the need to discard (and thus mutate) the old one.
Yes, and pure functions in Haskell often get funny imperative-sounding names like "insert" because of it - which is quite nice IMO. I like perceiving it like mutation because 99% of the time these are used in the places where mutation would normally be used in imperative languages. It is only occasionally that destructive mutation (for lack of a better name) is needed - for all I know, those situations may be a named "pattern" or something in imperative languages.

type Mutate a = a -> a   -- I've also caught myself calling it Mon, Endo, IdF, Change ...

insert :: (Ord k) => k -> v -> Mutate (Map k v)

It's annoying when the arguments are in the wrong order, such as Data.Bits.shift (perhaps for the flimsy excuse that they expected you to use it infix...).
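One pleasant consequence of naming the type is that such "mutations" compose with ordinary (.) - a small sketch, with an invented helper:

import Data.Map (Map)
import qualified Data.Map as Map

type Mutate a = a -> a

-- Apply the same adjustment at several keys, built up by composition.
adjustAll :: Ord k => (v -> v) -> [k] -> Mutate (Map k v)
adjustAll f = foldr (\k rest -> Map.adjust f k . rest) id

e.g. adjustAll (+1) ["a","b"] is itself just a Mutate (Map String Int).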
Mutable data structures in the sense of ephemeral (= not persistent = update in-place) data structure indeed do introduce the need to work in ST since the old version is - by definition - not available anymore.
Not in the quantum/information-theoretic sense, not necessarily. Consider

import Control.Monad.ST
import Data.STRef

main = print (runST (do
         r <- newSTRef 1
         notUnavailable <- readSTRef r
         writeSTRef r 5
         return notUnavailable))

Of course that's something you can do in imperative languages too, but it's still easier in Haskell, where you don't have to worry about what something implicitly refers to, and can pass around anything (any data, functions, IO actions) as first-class citizens :) (including storing them in parametrically-polymorphic state-refs like STRef, and, even for non-polymorphic refs, you can get the value out and keep it after the mutable state has changed).

See, the imperative paradigm has trouble scaling down to the quantum level, where information cannot be copied at will, too! This proves why computers generate heat (entropy) from the unprincipled destruction of information. Of course, computation near the quantum scale is a subject that has not nearly been thoroughly explored yet, but I suspect that (purely) functional languages are a little more likely to be easier to compile to such a type of machine, some decades from now...

Playfully, Isaac

Isaac Dupree schrieb:
apfelmus wrote:
Mutable data structures in the sense of ephemeral (= not persistent = update in-place) data structure indeed do introduce the need to work in ST since the old version is - by definition - not available anymore.
Not in the quantum/information-theoretic sense, not necessarily. Consider
import Control.Monad.ST
import Data.STRef

main = print (runST (do
         r <- newSTRef 1
         notUnavailable <- readSTRef r
         writeSTRef r 5
         return notUnavailable))
I'm not sure what this has to do with quantum mechanics ;) but you're right, I forgot that. This means that either STRefs cannot be updated in-place or that every read operation copies the contents or something like that. In any case, simple values like Ints or Bools are rather uninteresting, update in-place is only important for larger structures like arrays. Here, ST does updates in-place and retaining an array will copy it. Regards, apfelmus
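For the array case, a small sketch (an invented example) of what in-place updates in ST look like in practice; runSTUArray hands back an immutable UArray once the mutation is done:

import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
import Data.Array.Unboxed (UArray)

-- Build a histogram with destructive updates inside ST, then
-- freeze it into an ordinary immutable array.
histogram :: Int -> [Int] -> UArray Int Int
histogram size xs = runSTUArray $ do
  arr <- newArray (0, size - 1) 0
  mapM_ (\x -> readArray arr x >>= writeArray arr x . (+ 1)) xs
  return arr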

Hello Claus,

Friday, August 3, 2007, 8:12:13 PM, you wrote:
f (g (<- mx))
does this stand for
(a) mx >>= \x-> f (g x)
This variant, just like in any imperative language (have you used any?). The idea of the FORmula TRANslator is old and widely used enough to prevent such questions. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
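For reference, variant (a) written out with today's notation; f, g and mx are placeholders, typed as generally as possible:

-- the explicit bind ...
bindStyle :: Monad m => (b -> m c) -> (a -> b) -> m a -> m c
bindStyle f g mx = mx >>= \x -> f (g x)

-- ... and the same thing in do-notation, which the proposed
-- (<- mx) syntax would let one abbreviate to: f (g (<- mx))
doStyle :: Monad m => (b -> m c) -> (a -> b) -> m a -> m c
doStyle f g mx = do
  x <- mx
  f (g x)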
participants (33)
- apfelmus
- Benjamin Franksen
- Bertram Felgenhauer
- Brandon Michael Moore
- Brandon S. Allbery KF8NH
- Brian Hulley
- Bulat Ziganshin
- Chris Smith
- Claus Reinke
- Corey O'Connor
- Dan Doel
- Dan Licata
- Dan Piponi
- Dan Weston
- David Roundy
- david48
- Derek Elkins
- Donn Cave
- dons at cse.unsw.edu.au
- Hugh Perkins
- Isaac Dupree
- Josef Svenningsson
- Jules Bean
- Lutz Donnerhacke
- Martin Percossi
- Matthew Naylor
- Miguel Mitrofanov
- Mirko Rahn
- Neil Mitchell
- ok
- Sebastian Sylvan
- Simon Peyton-Jones
- Stefan O'Rear