
Hello. http://www.haskell.org/onlinereport/exps.html#sect3.14 has an obscure (to me) note which says: "As indicated by the translation of do, variables bound by let have fully polymorphic types while those defined by <- are lambda bound and are thus monomorphic." What does it actually mean? And, also, would it make any difference if

  do {p <- e; stmts} = let ok p = do {stmts}
                           ok _ = fail "..."
                       in e >>= ok

is redefined as "e >>= (\p -> do {stmts})"?

Thanks,
Alexander

On Mon, 13 Sep 2010, Alexander Kotelnikov wrote:
Hello.
http://www.haskell.org/onlinereport/exps.html#sect3.14 has an obscure (to me) note which says
"As indicated by the translation of do, variables bound by let have fully polymorphic types while those defined by <- are lambda bound and are thus monomorphic."
What does it actually mean?
It means that variables bound by let may be instantiated to different types later.
And, also, would it make any difference if
do {p <- e; stmts} = let ok p = do {stmts}
                         ok _ = fail "..."
                     in e >>= ok
is redefined as "e >>= (\p -> do {stmts})"?
It would not make a difference: the (>>=)-expression is what the do-expression is expanded to, and p is lambda-bound either way.
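A minimal sketch of that point (the names bad1, bad2, and fine are mine, not from the thread): under either translation, the variable bound by <- ends up as a function argument, hence monomorphic.

  -- Neither of these typechecks if uncommented; f gets a single
  -- monomorphic type either way:
  --
  -- bad1 = return id >>= (\f -> return (f 42, f True))
  -- bad2 = return id >>= ok  where  ok f = return (f 42, f True)

  -- A let binding, by contrast, generalizes:
  fine :: Monad m => m (Int, Bool)
  fine = let f = id in return (f 42, f True)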

On 09/13/2010 12:23 PM, Michael Lazarev wrote:
2010/9/13 Henning Thielemann:
It means that variables bound by let may be instantiated to different types later.
Can you give an example, please?
testOk = let f = id in (f 42, f True)

--testNotOk :: Monad m => m (Int, Bool)
--testNotOk = do f <- return id
--               return (f 42, f True)

Try uncommenting the 'testNotOk' definition.

On 13 Sep 2010, at 10:28, Gleb Alexeyev wrote:
On 09/13/2010 12:23 PM, Michael Lazarev wrote:
2010/9/13 Henning Thielemann:
It means that variables bound by let may be instantiated to different types later.
Can you give an example, please?
testOk = let f = id in (f 42, f True)
--testNotOk :: Monad m => m (Int, Bool)
--testNotOk = do f <- return id
--               return (f 42, f True)
Try uncommenting the 'testNotOk' definition.
There's no "later" here at all. Two separate definitions in a Haskell program act as if they have always been defined, are defined, and always will be defined; they are not dealt with in sequence (except for pattern matching, but that doesn't apply here).

Instead, what's going on here is scoping. The f in testOk is a different f to the one in testNotOk, distinguished by their scope.

Finally, this is not how you use a let in a do expression; here's how you should do it:

  testOk2 :: Monad m => m (Int, Bool)
  testOk2 = do
    let f = id
    return (f 42, f True)

Thanks
Tom Davie
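For reference, a quick GHCi check of testOk2, picking Maybe as the monad (assuming the definition above is loaded):

  *Main> testOk2 :: Maybe (Int, Bool)
  Just (42,True)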

On 09/13/2010 12:38 PM, Thomas Davie wrote:
On 13 Sep 2010, at 10:28, Gleb Alexeyev wrote:
On 09/13/2010 12:23 PM, Michael Lazarev wrote:
2010/9/13 Henning Thielemann:
It means that variables bound by let may be instantiated to different types later.
Can you give an example, please?
testOk = let f = id in (f 42, f True)
--testNotOk :: Monad m => m (Int, Bool)
--testNotOk = do f <- return id
--               return (f 42, f True)
Try uncommenting the 'testNotOk' definition.
There's no "later" here at all.
Two separate definitions in a Haskell program act as if they have always been defined, are defined, and always will be defined; they are not dealt with in sequence (except for pattern matching, but that doesn't apply here).
Instead, what's going on here is scoping. The f in testOk is a different f to the one in testNotOk, distinguished by their scope.
Finally, this is not how you use a let in a do expression; here's how you should do it:
testOk2 :: Monad m => m (Int, Bool)
testOk2 = do
  let f = id
  return (f 42, f True)
I don't understand, I'm afraid. Michael Lazarev asked for an example of the difference between let-bound and lambda-bound values. The testNotOk definition mirrors the structure of the testOk definition, but testNotOk is, pardon my pun, not ok, because its f is bound by <- (lambda-bound) and therefore monomorphic, while the f in the first definition is let-bound and polymorphic. I never implied that definitions are processed in some sort of sequence, nor did I state that the two f's are somehow related.

On Mon, 13 Sep 2010, Gleb Alexeyev wrote:
On 09/13/2010 12:38 PM, Thomas Davie wrote:
There's no "later" here at all.
Two separate definitions in a Haskell program act as if they have always been defined, are defined, and always will be defined; they are not dealt with in sequence (except for pattern matching, but that doesn't apply here).
I don't understand, I'm afraid. Michael Lazarev asked for an example of the difference between let-bound and lambda-bound values. The testNotOk definition mirrors the structure of the testOk definition, but testNotOk is, pardon my pun, not ok, because its f is bound by <- (lambda-bound) and therefore monomorphic, while the f in the first definition is let-bound and polymorphic.
I never implied that definitions are processed in some sort of sequence, nor I stated that the two f's are somehow related.
I think the "later" referred to my words. With "later" I meant somewhere below the binding in the do-block.

Thanks for the examples and pointers. Since I came from Lisp, it never occurred to me that let and lambda are different constructs in Haskell. I thought that

  let x = y in f

is really

  (\x -> f) y

It turns out that let is about declarations, which are not the same as the function application above. So, here is a little followup for this experiment.

Prelude> :t (\f -> (f 42, f True))

<interactive>:1:10:
    No instance for (Num Bool)
      arising from the literal `42' at <interactive>:1:10-11
    Possible fix: add an instance declaration for (Num Bool)
    In the first argument of `f', namely `42'
    In the expression: f 42
    In the expression: (f 42, f True)

If I understand correctly, the compiler first checks f 42 and deduces that f must be of type (Num a) => a -> b. Then it checks f True, which does not satisfy the previously deduced type for f, because the type of True is not in the Num class.

This works:

Prelude> :t (\f -> (f 42, f 41.9))
(\f -> (f 42, f 41.9)) :: (Fractional t1) => (t1 -> t) -> (t, t)

It just managed to deduce a type for f :: (Fractional t1) => (t1 -> t)

And this, of course, works:

Prelude> let f = id in (f 42, f True)
(42,True)

If I understand correctly again, it happens because f is a definition, which gets substituted into f 42 and into f True.
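A small sketch of where that substitution intuition holds and where it breaks (the example and names are mine):

  -- Textually substituting f's body does typecheck:
  substituted :: (Int, Bool)
  substituted = (id 42, id True)

  -- The let version also typechecks, because f is generalized:
  letVersion :: (Int, Bool)
  letVersion = let f = id in (f 42, f True)

  -- But the lambda version is rejected: its f gets one monomorphic
  -- type, so it cannot be applied to both 42 and True.
  -- lambdaVersion = (\f -> (f 42, f True)) id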

On Sep 13, 12:22 pm, Michael Lazarev wrote:
Thanks for the examples and pointers.
Since I came from Lisp, it never occurred to me that let and lambda are different constructs in Haskell.
You're not alone; I didn't believe my eyes when I first read about the difference (I learned Scheme, but the difference doesn't really matter here).
Prelude> :t (\f -> (f 42, f True))
<interactive>:1:10: No instance for (Num Bool) [...]

If I understand correctly, the compiler first checks f 42 and deduces that f must be of type (Num a) => a -> b. Then it checks f True, which does not satisfy the previously deduced type for f, because the type of True is not in the Num class.
That's a reasonable explanation - I don't know the details about the order, and

  :t (\f -> (f True, f 42))

gives the same error; I believe type-checking still happens in some order (left-to-right or right-to-left, we can't say), but whether (Num t) or Bool is deduced first does not matter, because unifying them is commutative.
This works:
Prelude> :t (\f -> (f 42, f 41.9))
(\f -> (f 42, f 41.9)) :: (Fractional t1) => (t1 -> t) -> (t, t)
It just managed to deduce a type for f :: (Fractional t1) => (t1 -> t)
Yes, but that's not polymorphic in the sense people are using - GHC searched for a common type, and realized that there's a whole class of types to which both 42 and 41.9 belong - Fractional.
And this, of course, works:
Prelude> let f = id in (f 42, f True)
(42,True)
If I understand correctly again, it happens because f is a definition, which gets substituted into f 42 and into f True.
Almost true. In evaluating let a = b, b must be evaluated before binding it to a, and a value has a type. Actually, one can define the "substitution semantics" you describe, but interesting things happen then. Section 22.7 of Types and Programming Languages by Pierce discusses exactly this, in the context of ML* - I guess you can also Google for let-polymorphism.

The real thing is that when you write "let f = id in", the type of f is deduced (internally "a -> a") and then generalized (internally "forall a. a -> a"). The difference is that in the first case GHC will try to discover what type "a" is by unifying it with the types of the arguments of f - unification would produce the type equations "a = Int, a = Bool" in the above example. The forall prevents that.

Note that I wrote "internally", even if you can actually write both types (the second with an extension, IIRC), because in many cases when you write a top-level type signature:

  f :: a -> a

that is also implicitly generalized.

* Note that in the end Pierce explains a problem with polymorphic references and its solution, but this does not apply to Haskell because of the lack of side effects (or you could say that unsafePerformIO does allow such things, and that's why it's not type-safe).
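A sketch of the generalization step described above, with the internal forall written out (needs the ExplicitForAll extension; the example is mine):

  {-# LANGUAGE ExplicitForAll #-}

  -- The inferred type a -> a is generalized to forall a. a -> a;
  -- each use site then instantiates the quantifier afresh.
  example :: (Int, Bool)
  example = let f :: forall a. a -> a
                f = id
            in (f 42, f True)   -- instantiated at Int, then at Bool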

On 9/13/10 6:22 AM, Michael Lazarev wrote:
Thanks for the examples and pointers.
Since I came from Lisp, it never occurred to me that let and lambda are different constructs in Haskell. I thought that

  let x = y in f

is really

  (\x -> f) y

It turns out that let is about declarations which are not the same as function applications above.
Right. This is a common mistake for people coming from Lisp, Scheme, and other untyped lambda calculi. In the untyped world it's fine to conflate let and lambda, because they only differ in how they're typed (and if you have no types...).

The difference is that, for let-bindings, once you've figured out a type of the variable being bound, then that type can be "generalized". The exact process of generalization has some subtle details to watch out for, but suffice it to say that certain type variables are allowed to become universally quantified. Which means that you're allowed to use x at different types within f, provided all those different types are consistent with the generalized type.

Whereas lambda-bindings don't get generalized, and so they'll always be monomorphic (assuming Hindley--Milner inference without extensions like -XRankNTypes). This is necessary in order to catch numerous type errors, though Haskell lets you override this behavior by giving an explicitly polymorphic type signature if you have -XRankNTypes enabled.

...

FWIW, a lot of the tricky details about generalization come from the way that Hindley--Milner inference is usually described. That is, since HM only allows prenex universal quantification, the quantifiers are usually left implicit. This in turn means it's not always clear when the unification variables used in type inference are actual type variables or not. If we assume System F types instead and make all the quantifiers explicit, then it becomes much easier to explain the generalization process, because we're being explicit about where variables are bound.

--
Live well,
~wren

wren ng thornton wrote:
On 9/13/10 6:22 AM, Michael Lazarev wrote:
Thanks for the examples and pointers.
Since I came from Lisp, it never occurred to me that let and lambda are different constructs in Haskell. I thought that

  let x = y in f

is really

  (\x -> f) y

It turns out that let is about declarations which are not the same as function applications above.
Right. This is a common mistake for people coming from Lisp, Scheme, and other untyped lambda calculi. In the untyped world it's fine to conflate let and lambda, because they only differ in how they're typed (and if you have no types...).
The difference is that, for let-bindings, once you've figured out a type of the variable being bound, then that type can be "generalized". The exact process of generalization has some subtle details to watch out for, but suffice it to say that certain type variables are allowed to become universally quantified. Which means that you're allowed to use x at different types within f, provided all those different types are consistent with the generalized type.
Whereas, lambda-bindings don't get generalized, and so they'll always be monomorphic (assuming Hindley--Milner inference without extensions like -XRankNTypes). This is necessary in order to catch numerous type errors,
Disclaimer: I am not a type system expert. Anyway, I thought the reason was that type inference for rank-n types (n>1) is undecidable. And therefore:
though Haskell lets you override this behavior by giving an explicitly polymorphic type signature if you have -XRankNTypes enabled.
...so that polymorphic types for arguments don't have to be inferred.

I think it was in Milner's original paper where he tries to give some intuition why let and lambda are treated differently: even though we always have

  (\x -> e) y == let x = y in e

which means that let can be translated to lambda, the converse is not true, since a lambda expression can appear in contexts other than (as the left hand side of) an application. Thus, let is syntactically more restrictive than lambda, which means we can be more liberal when typing it.

In principle, the type-checker *could* be extended to infer polymorphic types for those lambda-bound variables where the lambda expression immediately gets applied to some other expression. In practice this would be of little use as these are exactly the situations where a let can (and should!) be used instead of a lambda.

Cheers
Ben
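A sketch of that last point (the example is mine): with -XRankNTypes, an explicit signature lets the checker check, rather than infer, a polymorphic argument type.

  {-# LANGUAGE RankNTypes #-}

  -- The rank-2 type below is supplied, not inferred; given it, the
  -- lambda-bound f may be used at two different types.
  applyBoth :: (forall a. a -> a) -> (Int, Bool)
  applyBoth f = (f 42, f True)

  main :: IO ()
  main = print (applyBoth id)   -- prints (42,True)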

On 9/16/10 4:59 PM, Ben Franksen wrote:
wren ng thornton wrote:
The difference is that, for let-bindings, once you've figured out a type of the variable being bound, then that type can be "generalized". [...] Whereas, lambda-bindings don't get generalized, and so they'll always be monomorphic (assuming Hindley--Milner inference without extensions like -XRankNTypes). This is necessary in order to catch numerous type errors,
Disclaimer: I am not a type system expert. Anyway, I thought the reason was that type inference for rank-n types (n>1) is undecidable. And therefore:
Indeed.
though Haskell lets you override this behavior by giving an explicitly polymorphic type signature if you have -XRankNTypes enabled.
...so that polymorphic types for arguments don't have to be inferred.
Exactly. If all higher-rank types are known, then it is decidable to (a) check the higher-rank types, and (b) infer everything else.
I think it was in Milner's original paper where he tries to give some intuition why let and lambda are treated differently:
Yes, the original paper talks about generalization and why we want/need it. Though the paper is fairly old, terminology-wise. It predates a lot of the work on higher-rank types, and so its explanations are encumbered by not having explicit quantifiers and the terminology for discussing them. That's why I brought up the fact that generalization is made to sound more complicated because of people's attempts to oversimplify and "remove quantifiers", i.e. make them implicit; it's simpler to just leave them explicit. Other than that, it's an excellent paper and an enjoyable read.
even though we always have
(\x -> e) y == let x = y in e
which means that let can be translated to lambda, the converse is not true,
Not exactly. Note that when compilers do CPS conversion, everything is converted into let-bindings and continuations (i.e., longjump/goto with value passing). It's just dual to the everything-is-lambda world, nothing special.

--
Live well,
~wren
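A toy illustration of the CPS style alluded to here (the example is mine): after conversion, only value bindings and jumps to continuations remain.

  -- Direct style: square (add 1 2).  In CPS, every intermediate
  -- result is named and handed to a continuation k, so control
  -- flow becomes explicit value passing.
  addC :: Int -> Int -> (Int -> r) -> r
  addC x y k = k (x + y)

  squareC :: Int -> (Int -> r) -> r
  squareC x k = k (x * x)

  exampleC :: (Int -> r) -> r
  exampleC k = addC 1 2 (\s -> squareC s k)

  main :: IO ()
  main = exampleC print   -- prints 9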

wren ng thornton wrote:
On 9/16/10 4:59 PM, Ben Franksen wrote:
even though we always have
(\x -> e) y == let x = y in e
which means that let can be translated to lambda, the converse is not true,
Not exactly. Note that when compilers do CPS conversion, everything is converted into let-binding and continuations (i.e., longjump/goto with value passing). It's just dual to the everything-is-lambda world, nothing special.
I meant "not possible" as in "by a source-to-source transformation in a simple core-ML-like language" (such as is used in most introductions to HM-style type inference). If you translate to a language with mutable state and/or built-in continuations then things are different, of course.

Do you know a good online introduction to CPS conversion that explains the kind of duality you mentioned?

Cheers
Ben

On 9/17/10 4:04 PM, Ben Franksen wrote:
wren ng thornton wrote:
Note that when compilers do CPS conversion, everything is converted into let-binding and continuations (i.e., longjump/goto with value passing). It's just dual to the everything-is-lambda world, nothing special.
Do you know a good online introduction to CPS conversion that explains the kind of duality you mentioned?
Not off hand. These papers [1] give a pretty good explanation of CPS conversion and the other stages of compilation, from the perspective of proving compiler correctness. But they don't say much about the duality aspect of it. For the duality side of things, Andrzej Filinski's masters thesis [2] is an excellent read, though it doesn't say much about compilation IIRC.

Basically the duality comes when decomposing a whole program. When we separate a term from its context, which part is in control? Control is just a perspective, so you could be outside the function looking in, or inside the function looking out. One person's push is another's pull. This is the same sort of thing as build/foldr vs unfoldr/destroy forms of list fusion: once we separate the F-algebra and the F-coalgebra, which one should get the recursion? [3]

[1] Adam Chlipala, "A Certified Type-Preserving Compiler from Lambda Calculus to Assembly Language", PLDI 2007, http://adam.chlipala.net/papers/CtpcPLDI07/CtpcPLDI07.pdf
    Xavier Leroy, "A Formally-Verified Compiler Back-End", Journal of Automated Reasoning, 2009, http://pauillac.inria.fr/~xleroy/publi/compcert-backend.pdf
[2] http://www.diku.dk/~andrzej/papers/DCaCD.ps.gz
[3] http://www.haskell.org/pipermail/haskell-cafe/2010-August/082592.html

--
Live well,
~wren
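A small sketch of the algebra/coalgebra split mentioned above (the example is mine): foldr hands the recursion to the consumer, unfoldr to the producer.

  import Data.List (unfoldr)

  -- Consumer side: the algebra ((+), 0) is folded over an existing list.
  sumFold :: [Int] -> Int
  sumFold = foldr (+) 0

  -- Producer side: the coalgebra step is unfolded into a list.
  countdown :: Int -> [Int]
  countdown = unfoldr step
    where step 0 = Nothing
          step n = Just (n, n - 1)

  main :: IO ()
  main = print (sumFold (countdown 4))   -- prints 10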

wren ng thornton wrote:
On 9/17/10 4:04 PM, Ben Franksen wrote:
wren ng thornton wrote:
Note that when compilers do CPS conversion, everything is converted into let-binding and continuations (i.e., longjump/goto with value passing). It's just dual to the everything-is-lambda world, nothing special.
Do you know a good online introduction to CPS conversion that explains the kind of duality you mentioned?
Not off hand. These papers[1] give a pretty good explanation of CPS conversion and the other stages of compilation, from the perspective of proving compiler correctness. But they don't say much about the duality aspect of it. For the duality side of things, Andrzej Filinski's masters thesis[2] is an excellent read; though it doesn't say much about compilation IIRC.
Thanks, I'll take a look.
Basically the duality comes when decomposing a whole program. When we separate a term from its context, which part is in control? Control is just a perspective, so you could be outside the function looking in, or inside the function looking out. One person's push is another's pull. This is the same sort of thing as build/foldr vs unfoldr/destroy forms of list fusion: once we separate the F-algebra and F-coalgebra, which one should get the recursion?[3]
Ok, this gives me some vague intuition.

Cheers
Ben

On Mon, Sep 13, 2010 at 8:21 AM, Alexander Kotelnikov wrote:
And, also, would it make any difference if
do {p <- e; stmts} = let ok p = do {stmts}
                         ok _ = fail "..."
                     in e >>= ok
is redefined as "e >>= (\p -> do {stmts})"?
This is the magic that allows pattern-match failure in a do expression to return a normal result. Notice that "fail" and not "error" is called - each Monad has its own fail method, so that, for example:

  uncons :: [a] -> Maybe (a, [a])
  uncons xs = do { (x:xs) <- return xs; return (x, xs) }

evaluates to Nothing rather than causing an exception when xs is empty. That this implementation detail ends up in the Monad class is regarded by many as untidy, though.
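A sketch contrasting the two translations on the empty-list case (the names are mine):

  -- Report-style translation: the refutable pattern goes through fail,
  -- and fail is Nothing in the Maybe monad.
  unconsOk :: [a] -> Maybe (a, [a])
  unconsOk xs = return xs >>= ok
    where ok (y:ys) = return (y, ys)
          ok _      = fail "uncons: empty list"

  -- Simple lambda translation: the same pattern failure becomes a
  -- runtime pattern-match error instead of Nothing.
  unconsLam :: [a] -> Maybe (a, [a])
  unconsLam xs = return xs >>= \(y:ys) -> return (y, ys)

  -- unconsOk  ([] :: [Int])  evaluates to Nothing
  -- unconsLam ([] :: [Int])  raises a pattern-match exception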
participants (9)

- Alexander Kotelnikov
- Ben Franksen
- Ben Millwood
- Gleb Alexeyev
- Henning Thielemann
- Michael Lazarev
- Paolo G. Giarrusso
- Thomas Davie
- wren ng thornton