
Haskellians,

Once you have a polymorphic let, why do you need 'let' in the base language, at all? Is it possible to formulate Haskell entirely with do-notation where there is a standard monad for let environments? Probably this was all discussed before in the design deliberations for the language standard. Pointers would be very much appreciated.

Best wishes,

--greg

--
L.G. Meredith
Managing Partner
Biosimilarity LLC
505 N 72nd St
Seattle, WA 98103
+1 206.650.3740
http://biosimilarity.blogspot.com

On 28 jun 2007, at 21.17, Greg Meredith wrote:
Once you have a polymorphic let, why do you need 'let' in the base language, at all? Is it possible to formulate Haskell entirely with do-notation where there is a standard monad for let environments? Probably this was all discussed before in the design deliberations for the language standard. Pointers would be very much appreciated.
let x = ... in ...

is only equal to

do x <- ...; ...

in the Identity monad. Also, why would "do" be more primitive than "let"? That way you would have to use monads everywhere. Also, let is treated specially by the type checker (IIRC) and there are many, many other reasons not to do that.

Why would you consider the syntactic sugar do { x <- e; .. }, which is just a different way of writing function binding (e >>= \x -> ...), more primitive than "let"?

/ Thomas
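(For concreteness, a small sketch of that Identity-monad reading; my own example, assuming Control.Monad.Identity from mtl:)

  import Control.Monad.Identity

  viaLet :: Int
  viaLet = let x = 5 in x + 1

  viaDo :: Int
  viaDo = runIdentity $ do
    x <- return 5              -- after desugaring, x is lambda-bound
    return (x + 1)

  main :: IO ()
  main = print (viaLet, viaDo) -- (6,6)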

Thomas,
Thanks for the reply. My thinking was that once you have a polymorphic form,
why single out any other? Fewer moving parts make for less maintenance, etc.
Best wishes,
--greg

On 28 jun 2007, at 22.02, Greg Meredith wrote:
Thomas,
Thanks for the reply. My thinking was that once you have a polymorphic form, why single out any other? Fewer moving parts make for less maintenance, etc.
Ok, sorry if my reply seemed harsh. You are of course right that having few primitives is better. In Haskell you have two primitives: function binding and let-binding. Let bindings are always recursive, thus

  let x = e in body  =/=  (\x -> body) e

because x also is bound to itself in "e". Since do-binding is defined in terms of normal lambda-binding, there are no more primitives.

/ Thomas

PS: "let" is treated specially by the type-checker, too. The technical term is "let-polymorphism", but I couldn't find any good results using a quick Google search. Hopefully, others will chime in.
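(A runnable illustration of the recursion point; my own example, with made-up names:)

  -- Because let is recursive, the bound name is in scope in its own
  -- right-hand side; the lambda encoding (\x -> body) e has no such scope.
  ones :: [Int]
  ones = let xs = 1 : xs in xs  -- fine: xs refers to itself

  -- By contrast, (\xs -> xs) (1 : xs) is rejected: the xs inside (1 : xs)
  -- is not in scope.

  main :: IO ()
  main = print (take 3 ones)    -- [1,1,1]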

Thomas,
let x = ... in ...
is only equal to
do x <- ...; ...
in the Identity monad. Also, why would "do" be more primitive than "let"? That way you would have to use monads everywhere. Also, let is treated specially by the type checker (IIRC) and there are many, many other reasons not to do that.
As you already hinted at in a later message, this has to do with let-bindings being potentially polymorphic and monadic bindings being necessarily monomorphic:

  import Control.Monad.Identity

  foo = let id = \x -> x
        in (id 'x', id 42)          -- well-typed

  bar = runIdentity $
        do id <- return (\x -> x)
           return (id 'x', id 42)   -- ill-typed

Cheers,

Stefan

Thomas, Stefan,
Thanks for a most edifying exchange! I will reflect on this.
Best wishes,
--greg

On Jun 28, 2007, at 9:49 PM, Stefan Holdermans wrote:
That way you would have to use monads everywhere.
As you already hinted at in a later message, this has to do with let-bindings being potentially polymorphic and monadic bindings being necessarily monomorphic:
Are there papers that prove this need be the case in any language, or are we simply now trapped into this for the moment by some design choices?

If monads weren't late to the party, with clumsy notation left over from the core language, would we really mind a language where one has "to use monads everywhere"? The trick would be to design them in so seamlessly that a Perl/Python/Ruby-scale mainstream _thought_ they were just using another imperative language, that happened to scale _very_ gracefully to eight-core machines and such because of the underlying semantics.

Compare a Haskell do statement to a Lisp expression. In the Lisp expression the first element is special, a convention of the eval loop. In the Haskell do expression, every line is equally special, and type information is used to combine the lines, inserting implied combinators. This is a fascinating variation on Lisp rules, alas restricted to a couple of situations, akin to the special syntax for tuples and lists. I see potential for a whole language that worked this way, opened up to let the programmers control this process without waiting for an implementation to take their suggestions (think history of arrows) piecemeal.

Here's a different analogy: People like Haskell parsing because it looks so much like the definition of the grammar itself. So step down a rung, to regular expressions, and imagine a toy language where a regular expression _IS_ by itself, with no further ado, a fully working program. One is immediately led back to the same idea as Haskell do expressions: Two pieces of program, juxtaposed next to each other, silently "multiply" to combine into a larger program, with type rules guiding the multiplication process. That sure reads to me like a description of how do statements work in Haskell, so unleash them?

Dave Bayer wrote:
[...] In the Haskell do expression, every line is equally special, and type information is used to combine the lines, inserting implied combinators.[...]
Desugaring do-notation is a syntactic transformation, requiring no type information. (In practice, the parts may be required to have a monadic type, but this is only to get an earlier (hence better) error message, I guess.)
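(For reference, the translation in question, paraphrasing the Haskell 98 Report; it is purely syntactic:)

  do { e }                 =  e
  do { e; stmts }          =  e >> do { stmts }
  do { p <- e; stmts }     =  let ok p = do { stmts }
                                  ok _ = fail "..."
                              in e >>= ok
  do { let decls; stmts }  =  let decls in do { stmts }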
I see potential for a whole language that worked this way, opened up to let the programmers control this process without waiting for an implementation to take their suggestions (think history of arrows) piecemeal.
How would you propose to specify such transformations?

Greetings,

Arie

On Fri, 29 Jun 2007, Dave Bayer wrote:
One is immediately led back to the same idea as Haskell do expressions: Two pieces of program, juxtaposed next to each other, silently "multiply" to combine into a larger program, with type rules guiding the multiplication process.
They don't; there's a ';' between them, which may or may not have been inserted by the layout rule.

--
flippa@flippac.org
The task of the academic is not to scale great intellectual mountains, but to flatten them.
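(A small illustration of the layout rule at work; my own example:)

  main = do x <- getLine
            putStrLn x

  -- After the layout rule inserts braces and semicolons, this is exactly:
  --   main = do { x <- getLine; putStrLn x }
  -- which then desugars to:
  --   main = getLine >>= \x -> putStrLn x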

On 29 jun 2007, at 16.26, Dave Bayer wrote:
That way you would have to use monads everywhere.
As you already hinted at in a later message, this has to do with let-bindings being potentially polymorphic and monadic bindings being necessarily monomorphic:
Are there papers that prove this need be the case in any language, or are we simply now trapped into this for the moment by some design choices?
The big design choice is to have non-strict evaluation semantics. Monads re-sequence instructions for cases where you need it. Recall that in Haskell

  let x = foo bar baz
      y = error "Gotcha."
  in (x, y)

isn't equivalent to

  (let ((x (foo bar baz))
        (y (error "Gotcha.")))
    (values x y))

because Lisp is strict. In Lisp this would result in an error, even if y is never used; in Haskell, only once y is actually used. To simulate this in Haskell you'd have to write:

  do x <- foo bar baz
     y <- error "Gotcha."
     return (x, y)

and choose the monad according to your semantics. I take it your claim now is that by choosing the Identity monad we'd get normal Haskell semantics (modulo polymorphic types?) and using any other monad we'd get any other semantics? Some problems I can see with this:

- Monads aren't transparent to the compiler. The compiler would still have to transform it into a pure intermediate form.

- Most importantly, monads sequence computation. But I guess you can get around it. After all, you can simulate Haskell in strict Lisp via:

  (let ((x (delay (foo bar baz)))
        (y (delay (error "Gotcha"))))
    (delay (values x y)))

- So, I assume the big question is how not to lose type inference, and get used to the less pretty syntax ;)

Maybe others can comment on these issues.

/ Thomas
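(A runnable version of the laziness claim; a minimal sketch with made-up values:)

  pair :: (Int, Int)
  pair = let x = 42
             y = error "Gotcha."
         in (x, y)

  main :: IO ()
  main = print (fst pair)   -- prints 42; y is never forced, so no error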

On Fri, Jun 29, 2007 at 07:26:21AM -0700, Dave Bayer wrote:
On Jun 28, 2007, at 9:49 PM, Stefan Holdermans wrote:
That way you would have to use monads everywhere.
As you already hinted at in a later message, this has to do with let-bindings being potentially polymorphic and monadic bindings being necessarily monomorphic:
Are there papers that prove this need be the case in any language, or are we simply now trapped into this for the moment by some design choices?
Indeed, it is a requirement of the HM type inference system which Haskell is based on (though it goes well beyond HM in a lot of ways).

Monads are in no way intrinsic to the language; they aren't part of the core to begin with, but simply sugar for applications and lambda bindings. So asking whether we can drop 'let's in favor of 'do's is sort of meaningless; dos are just translated away immediately, before any typechecking.

  do x <- f y; z

is equivalent to

  f y >>= \x -> z

Notice that the 'x' is bound by a lambda binding. The tradeoff made in the HM type system is that generalization (creating polymorphic types) only occurs on let bindings, not lambda-bound ones. (GHC Haskell has extensions that can lift this rule, at the expense of needing user-specified types in some circumstances.)

Monads are not core Haskell; Haskell just happens to be a really elegant language for expressing them, and they happen to be useful enough to Haskell programmers that some nice syntactic sugar (do) is provided for convenience, but that is as far as the relationship goes.

That said, there is a precedent (probably many, but this is the one I know) for a language whose core is based on monads rather than the lambda calculus, namely 'GRIN', the back-end language used by jhc. However, the cost (and benefit) of this is that GRIN is first order: you cannot have closures or higher-order functions. (You can think of it as C but with a true pure functional type system and all the goodness that implies.) I have a toy mini-language called 'undo' (unboxed do) which I use sometimes to write things in directly, and which might be kind of neat to expose to the Haskell programmer one day... In any case, this isn't really relevant to your question, but speaking generally: no, monads are not core Haskell; yes, monads can be used as the core of a language. jhc actually does this, but not till very far down the line, and by then the code has been transformed enough that I wouldn't consider it core Haskell. (In particular, the use of monads in GRIN has no correspondence to the use of monads in the original Haskell source.) (I am being sloppy with my use of 'core' here... we need some more words.)

John

--
John Meacham - ⑆repetae.net⑆john⑈
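(A sketch of the extensions John mentions; I am assuming he means GHC's rank-N types, written here with the modern LANGUAGE pragma:)

  {-# LANGUAGE RankNTypes #-}

  -- With an explicit higher-rank annotation, a lambda-bound variable can
  -- be used polymorphically, at the price of writing the type by hand.
  both :: (forall a. a -> a) -> (Char, Int)
  both f = (f 'x', f 42)

  main :: IO ()
  main = print (both id)   -- ('x',42)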

On Jun 28, 2007, at 12:17 PM, Greg Meredith wrote:
Haskellians,
Once you have a polymorphic let, why do you need 'let' in the base language, at all? Is it possible to formulate Haskell entirely with do-notation where there is a standard monad for let environments? Probably this was all discussed before in the design deliberations for the language standard. Pointers would be very much appreciated.
Best wishes,
--greg
I've been wondering the same thing. When I want a break from coding, I rewrite files in an imaginary language to see what I want, what comes naturally. Let is the first keyword to go; a binding on one line followed by an expression on the next ought to imply a let/in combination.

Looking at special "do" language support for monads and arrows reminds me of special language support for tuples and lists. While it would probably be painful to lose all syntactic sugar, I would prefer a uniform mechanism for supporting any future construct like monads and arrows, so "adding language support" isn't restricted to the implementors. Monads and arrows are particular instances of a general functional programming idiom, and seeing how preciously they are treated reminds me of the early history of mathematical group theory, when people treated each of the few groups they knew as a one-off special case. No programming language should treat monads and arrows this way.

I'm struck by the "readability" requirement that leads to explicit >>= syntax, or "adding language support". Readability should be a compiler option: You can't read someone else's code or your own weeks later? Have the compiler massively annotate the type information back into the code, supplying implied combinators, to a web page you can carefully study.

If one gets over a requirement that raw code be readable, then all sorts of combinators can be implied. Using type information, the compiler would be able to notice that two successive lines of code make no sense at all in sequence, but WOULD make perfect sense if a >>= was inserted. This is roughly what a do statement does, except a do statement does this in an ad hoc fashion for a very few combinators.

Rather than having a short ad-hoc list of operators inserted by do, analogous to the short ad-hoc list of syntactic sugar for tuples and lists, one could have a general class mechanism for inserting arbitrary combinators. This would get confusing to read, but a compiler that could annotate code with explanations of what it did would help. Now, we're instead forced to write these annotations manually, and stare at them all of the time.
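(A general mechanism in this spirit does exist in GHC today: with the RebindableSyntax extension, do-notation desugars to whatever (>>=), (>>) and return are in scope, so programmers, not just implementors, choose the combinators that do-blocks insert. A minimal sketch, with made-up names:)

  {-# LANGUAGE RebindableSyntax #-}

  import Prelude hiding ((>>=), (>>), return)

  -- A "logging" bind over plain pairs: every step of a do-block is
  -- recorded. (This is just the Writer monad, spelled out concretely.)
  (>>=) :: (a, [String]) -> (a -> (b, [String])) -> (b, [String])
  (x, log1) >>= f = let (y, log2) = f x in (y, log1 ++ log2)

  (>>) :: (a, [String]) -> (b, [String]) -> (b, [String])
  m >> k = m >>= \_ -> k

  return :: a -> (a, [String])
  return x = (x, [])

  step :: String -> Int -> (Int, [String])
  step msg n = (n, [msg])

  example :: (Int, [String])
  example = do
    x <- step "first" 1
    y <- step "second" 2
    return (x + y)

  main :: IO ()
  main = print example   -- (3,["first","second"])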
participants (7):
- Arie Peterson
- Dave Bayer
- Greg Meredith
- John Meacham
- Philippa Cowderoy
- Stefan Holdermans
- Thomas Schilling