strict, lazy, non-strict, eager

Most individuals of the Haskell community have long been maintaining a cognitive dissonance; some cases turn into plain hypocrisy. You might excuse it for its ancient and prominent origin: Richard Bird and/or Philip Wadler themselves wrote things like "it is too lazy" and "make it more strict" 13 years ago, and surely more besides. But perpetuating it is not helping.

I have not written this complaint until now because I have been waiting for unmistakable evidence, a smoking gun, a red hand so caught that you cannot explain away, for example, you cannot explain it away by saying "one sentence is from one person, the other sentence is from a different person".

So, on IRC in #haskell, from the same person, speaking on the same topic in the same context, in the same interval of 3 minutes (the first two sentences in the same minute):

1. a function f is strict if f ⊥ = ⊥
2. ⊥ represents any computation which does not terminate, i.e. an exception or an infinite loop
3. "strict" describes the denotational semantics

People, could you please make up your mind already? It has been more than 13 years.

Denotational semantics:
A. There are no computational steps. There is no termination, and there is no non-termination, since there are no steps to finish, and no steps to keep going.
B. ⊥ represents "no information", not "non-termination". There is no "non-termination" to represent.
C. fix id = ⊥ because ⊥ is the least fixed point of id, not because fix id non-terminates. There is nothing to terminate or non-terminate.
D. You say strict, more strict, less strict; non-strict, more non-strict, less non-strict. You don't say eager, and you don't say lazy.

Operational semantics:
A. There is no ⊥; it does not appear in any sequence of computational steps, finitely long or infinitely long.
B. You say eager, more eager, less eager; lazy, more lazy, less lazy; or speculative, more speculative, less speculative; or any other adjectives for evaluation strategies. You don't say strict, and you don't say non-strict.

"Semantics", "semantically speaking":
A. Which semantics, which semantically? There are two.
B. Actually there are more, but apparently two is already enough to cause all kinds of incoherent statements. If I draw your attention to algebraic semantics, will you start saying "it is too lazy, need to make it more idempotent"?
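A minimal GHC sketch of the vocabulary above, using undefined as a stand-in for ⊥; constFive and forceThenFive are illustrative names, not from the post:

import Data.Function (fix)

-- Not strict: constFive ⊥ = 5, so the argument's (lack of) information never matters.
constFive :: Int -> Int
constFive _ = 5

-- Strict: forceThenFive ⊥ = ⊥, by the f ⊥ = ⊥ definition, because seq demands the argument.
forceThenFive :: Int -> Int
forceThenFive x = x `seq` 5

main :: IO ()
main = do
  print (constFive undefined)      -- prints 5
  print (forceThenFive undefined)  -- throws: the ⊥ shows up here
  print (fix id :: Int)            -- denotationally ⊥; at run time GHC typically reports <<loop>>
  -- (each line stands on its own; the program stops at the first ⊥ it actually forces)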

See that's typically the speech that scares people away from Haskell...
--
The ⊥ is a lie.
2011/12/24 Albert Y. C. Lai
[ snip. ]

On 24/12/11 17:54, Yves Parès wrote:
See that's typically the speech that scares people away from Haskell...
Wait what? I find it intriguing, helpful, provocative and potentially helpful toward the common goal of helping others. I am interested in further commentary. I'm not scared and you shouldn't be either. -- Tony Morris http://tmorris.net/

On Dec 24, 2011, at 6:22 PM, Tony Morris wrote:
Wait what?
I find it intriguing, helpful, provocative and potentially helpful toward the common goal of helping others. I am interested in further commentary. I'm not scared and you shouldn't be either.
Asking honest questions is eminently reasonable; accusing others of being incompetent hypocrites --- especially when you go to great lengths to make it clear that this is *exactly* what you are doing --- is not. The former is helpful, but the latter is poisonous. Yves's point is that doing the latter risks scaring people away. Cheers, Greg

On 24/12/11 18:41, Gregory Crosswhite wrote:
[ snip. ]
If I am an incompetent hypocrite, I want to know more. Toughen up. -- Tony Morris http://tmorris.net/

On Sat, Dec 24, 2011 at 08:54:43AM +0100, Yves Parès wrote:
See that's typically the speech that scares people away from Haskell... -- The ⊥ is a lie.
2011/12/24 Albert Y. C. Lai <[1]trebla@vex.net>
[ snip. ]
I find this sort of discussion is precisely what draws me to, and keeps me in pursuit of, Haskell. There are many approaches to producing code that are designed (to use a charitable term) to be minimally frightening at the expense of rigour. The dogged and uncompromising attitude of many in the Haskell community is inspiring to me, knowing as I do that, while I don't *need* to follow the more rarefied flights of mathematical fancy to get things done, the ecosystem in which I am choosing to work is being built by those who can.

It's not as though the problem of writing good code has been solved. It appears to be quite hard! Isn't Haskell the place where the difficult and nit-picky questions ought to be raised and researched?

It's too late to avoid success at all costs but please don't banish our precious pedantry!

Scare on!

Murray Campbell

I applaud the pedantry, but I must admit that the tone of the original email is unusually harsh for the Haskell community, even if not so harsh as to actually scare me (for example).
On Sat, Dec 24, 2011 at 12:47 PM, Murray Campbell
[ snip. ]
-- Eugene Kirpichov Principal Engineer, Mirantis Inc. http://www.mirantis.com/ Editor, http://fprog.ru/

On Dec 24, 2011, at 6:47 PM, Murray Campbell wrote:
It's too late to avoid success at all costs but please don't banish our precious pedantry!
Scare on!
Please don't misunderstand, I have absolutely no problems at all with people arguing vociferously and pedantically over ideas, as long as they are doing so respectfully, impersonally, politely and open-mindedly. In fact, like you I welcome such discourse, and the fact that this list provides such a nice environment for it is part of the reason why I like it so much. :-) (See, for example, the argument that I started a few days ago over whether some/many belong in Alternative.)

Having said that, there is no small difference between

"Hey everyone! I have been thinking about something for a while, and I have an idea with which many of you will most likely strongly disagree. Here it is, and I welcome your feedback on it!"

and

"You all have long been maintaining a cognitive dissonance that in some cases turns into plain hypocrisy, and the only reason I hadn't complained about it until now was because I have been waiting for unmistakable evidence, a smoking gun, a red hand so caught that you cannot explain away. Here it is. Now people, could you please make up your mind already? It has been more than 13 years."

The former invites open discussion in a friendly and positive environment, whereas the latter at best turns people off from wanting to respond positively, and at worst provokes angry responses that further poison the well.

Cheers, Greg

I'm gonna clarify my point a little ^^.

In fact, pedantry was involved: the way Albert started his original post was pedantic. The scaring effect was mostly caused by such discourse showing two things:
- Haskellers make use of obscure terms and distinctions (e.g. "denotational semantics")...
- ...whose meaning is not even a matter of consensus amongst Haskellers *themselves*!

Seeing that from an outsider's point of view kind of makes you think: "OMG, what are those hairy guys doing, inventing weird concepts they're not even capable of agreeing upon!" ^^ And, proving my second point -- as I expected -- someone (MigMit) has come along to question Albert's statements.

Plus, no one is really stating clearly at which level the terms "eager", "strict" and "lazy" apply (for instance, in Haskell's specification or in GHC's implementation).
2011/12/24 Gregory Crosswhite
[ snip. ]

On 24 Dec 2011, at 11:31, Albert Y. C. Lai wrote:
So, on IRC in #haskell, from the same person, speaking on the same topic in the same context, in the same interval of 3 minutes (the first two sentences in the same minute):
1. a function f is strict if f ⊥ = ⊥ 2. ⊥ represents any computation which does not terminate, i.e. an exception or an infinite loop 3. "strict" describes the denotational semantics
What's wrong with that?
Denotational semantics: A. There are no computational steps. There is no termination, and there is no non-termination, since there are no steps to finish, and no steps to keep going.
Incorrect. There ARE computational steps, of course, we just agree not to mention them when we talk about denotational semantics. Not mentioning something is not the same as stating that it doesn't exist.
B. ⊥ represents "no information", not "non-termination". There is no "non-termination" to represent.
There is, as there are computational steps. And the statement (2) above is not about denotational semantics only, so it's OK to talk about non-termination. Don't you know that you are allowed to use both semantics at the same time?
C. fix id = ⊥ because ⊥ is the least fixed point of id, not because fix id non-terminates.
There is nothing more obscure in science than the notion of "because". fix id = (_|_) for the reason that suits your purposes.
D. You say strict, more strict, less strict; non-strict, more non-strict, less non-strict. You don't say eager, and you don't say lazy.
Oh, right, there is nothing more important than using the words properly.
Operational semantics: A. There is no ⊥; it does not appear in any sequence of computational steps, finitely long or infinitely long. B. You say eager, more eager, less eager; lazy, more lazy, less lazy; or speculative, more speculative, less speculative; or any other adjectives for evaluation strategies. You don't say strict, and you don't say non-strict.
See above.

1. a function f is strict if f ⊥ = ⊥ 2. ⊥ represents any computation which does not terminate, i.e. an exception or an infinite loop 3. "strict" describes the denotational semantics
People, could you please make up your mind already? It has been more than 13 years.
I have to admit, I'm a bit confused about what the complaint is. Edward

I wonder how the arrival of an anonymous anecdote on IRC was the
smoking gun needed to justify calling out the Haskell community on its
cognitive dissonance. Surely you would need some statistical evidence,
a public display from a very prominent member, or some officially
endorsed stance to convince anyone that "Most" of a community behaves
a certain way.
I understand why someone might want to call us out on a lack of
rigour. However, I have no idea why someone would hold their tongue on
the matter due to a lack of evidence, then commence hostilities once a
disjointed quote off IRC appears...
Albert: Was it the straw that broke the camel's back?
On Sat, Dec 24, 2011 at 9:41 PM, Edward Z. Yang
[ snip. ]

On Sat, Dec 24, 2011 at 2:31 AM, Albert Y. C. Lai
1. a function f is strict if f ⊥ = ⊥ 2. ⊥ represents any computation which does not terminate, i.e. an exception or an infinite loop 3. "strict" describes the denotational semantics
All three of these statements are true. The only potentially controversial one is 2, but any term that the operational semantics would identify as simple non-termination (which is invariably what they're talking about when they say 2; not some partially defined term) will be given denotation ⊥.
B. Actually there are more, but apparently two is already enough to cause all kinds of incoherent statements. If I draw your attention to algebraic semantics, will you start saying "it is too lazy, need to make it more idempotent"?
Yes, there are more than two. And they don't exist in completely separate vacuums from one another. Denotational and operational properties are sometimes (often?) correlated. And algebraic semantics is often the sweet spot for reasoning about the structure of the operational or denotational semantics of your code, without bringing in all the irrelevant details from the other two. I can make a denotational semantics for System F where each term is denoted by its normal form (an operational concept).

I think it's good to be clear on all these specifics, and people could do with a better recognition of the difference between (non-)strict and lazy/eager (hint: you can have an eager, non-strict language).

But it isn't necessarily a problem that people speak in terms of more than one at once. The different kinds of semantics aren't in conflict with one another. The main problem would be that such casual mixing prevents newcomers from learning the distinctions by osmosis.

-- Dan

"I have not written this complaint until now because I have been waiting
for unmistakable evidence, a smoking gun, a red hand so caught that you
cannot explain away,"
It's not a murder trial! The number-one nice thing about the Haskell
community is that they _thoroughly_ listen to people. I think you'd get a
similarly informative discussion, sending just the second half of this
email.
That said, 2 points:
On Sat, Dec 24, 2011 at 1:49 PM, Dan Doel
[...] I think it's good to be clear on all these specifics, and people could do with a better recognition of the difference between (non-)strict and lazy/eager (hint: you can have an eager, non-strict language). [...]
+1. I'm pretty sure _none_ of the Haskell texts have a definitive comparison of strict, non-strict, lazy, and eager. The Haskell Wiki has this (and only this) to say about Eager Evaluation: "Eager evaluation is an implementation of the strict semantics as can be found in OCaml and usually in imperative languages. A kind of opposite is lazy evaluation." [0] We can do better.

On the other hand: I'd _strongly_ argue against "making up our minds" about definitions within the Haskell community. Most of these concepts aren't Haskell-specific. An example of something to avoid is our definitions of "concurrency" and "parallelism." We as a community have specific, good definitions of each term. [2] So does the Erlang community. [1] Yet the definitions don't have anything to do with each other, which makes talking across communities more difficult.

amindfv / Tom

[0] http://www.haskell.org/haskellwiki/Eager_evaluation
[1] http://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency#dont-pani..., paragraph 4
[2] http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.h..., "Defining concurrency and parallelism"

2011/12/25 Tom Murphy
On the other hand: I'd _strongly_ argue against "making up our minds" about definitions within the Haskell community. Most of these concepts aren't Haskell-specific. An example of something to avoid is our definitions of "concurrency" and "parallelism." We as a community have specific, good definitions of each term. [2] So does the Erlang community. [1] Yet the definitions don't have anything to do with each other, which makes talking across communities more difficult.
amindfv / Tom
[0] http://www.haskell.org/haskellwiki/Eager_evaluation
[1] http://learnyousomeerlang.com/the-hitchhikers-guide-to-concurrency#dont-pani..., paragraph 4
[2] http://book.realworldhaskell.org/read/concurrent-and-multicore-programming.h..., "Defining concurrency and parallelism"
I kindly beg to differ. To me, concurrency and parallelism have global and cross-language definitions. The links you gave don't define "concurrency" and "parallelism" in the absolute: they focus their definitions around Erlang's and Haskell's *models* of concurrency/parallelism. Still, the broad idea remains.
I'd _strongly_ argue against "making up our minds" about definitions within the Haskell community. Most of these concepts aren't Haskell-specific.
My frame of reference was Haskell-centric. And we can go in steps: first come to a consensus among Haskellers, and then give broad definitions that concern every language.

On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel
[ snip. ]
I think it's good to be clear on all these specifics, and people could do with a better recognition of the difference between (non-)strict and lazy/eager (hint: you can have an eager, non-strict language).
Can you elaborate? That's apparently my blind spot.
[ snip. ]
-- Eugene Kirpichov Principal Engineer, Mirantis Inc. http://www.mirantis.com/ Editor, http://fprog.ru/

On Sun, Dec 25, 2011 at 12:14 AM, Eugene Kirpichov
On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel wrote:
I think it's good to be clear on all these specifics, and people could do with a better recognition of the difference between (non-)strict and lazy/eager (hint: you can have an eager, non-strict language).
Can you elaborate? That's apparently my blind spot.
A while back, there was a paper on something called (I believe) optimistic evaluation. The strategy goes like this: when you evaluate 'f x', first you start evaluating 'x'. If that takes too long, or you encounter an exception, you (re)thunk it, and continue evaluating the body of f lazy style, in case you don't really need x.

This is arguably eager, since you reduce arguments to functions immediately if possible. And it has some advantages over lazy evaluation in common cases. For instance, it avoids foldl building up a huge nested thunk that would cause a stack overflow. But it isn't strict, because const 5 undefined is still 5.

You can also imagine sparking on every function application, so that arguments automatically get reduced in parallel, and as soon as possible. I think that's been determined to not be very effective, though.

-- Dan
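A small sketch of the foldl point above, assuming GHC without optimisations; foldl' is the strict-accumulator fold from Data.List:

import Data.List (foldl')

main :: IO ()
main = do
  print (const 5 (undefined :: Int))          -- 5: non-strict, whatever the evaluation strategy
  print (foldl' (+) 0 [1 .. 1000000 :: Int])  -- strict accumulator: runs in constant space
  print (foldl  (+) 0 [1 .. 1000000 :: Int])  -- builds a nested chain of (+) thunks first;
                                              -- without optimisations this is the overflow risk Dan mentions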

Thanks, this makes sense.
On Sun, Dec 25, 2011 at 10:03 AM, Dan Doel
On Sun, Dec 25, 2011 at 12:14 AM, Eugene Kirpichov
wrote: On Sat, Dec 24, 2011 at 10:49 PM, Dan Doel
wrote: I think it's good to be clear on all these specifics, and people could do with a better recognition of the difference between (non-)strict and (lazy)eager (hint: you can have an eager, non-strict language).
Can you elaborate? That's apparently my blind spot.
A while back, there was a paper on something called (I believe) optimistic evaluation. The strategy goes like this: when you evaluate 'f x', first you start evaluating 'x'. If that takes too long, or you encounter an exception, you (re)thunk it, and continue evaluating the body of f lazy style, in case you don't really need x.
This is arguably eager, since you reduce arguments to functions immediately if possible. And it has some advantages over lazy evaluation in common cases. For instance, it avoids foldl building up a huge nested thunk that would cause a stack overflow. But it isn't strict, because
const 5 undefined
is still 5.
You can also imagine sparking on every function application, so that arguments automatically get reduced in parallel, and as soon as possible. I think that's been determined to not be very effective, though.
-- Dan
-- Eugene Kirpichov Principal Engineer, Mirantis Inc. http://www.mirantis.com/ Editor, http://fprog.ru/

There are two flavours of MonadState, Control.Monad.State.Lazy and Control.Monad.State.Strict. There are two flavours of ByteString, Data.ByteString.Lazy and Data.ByteString (whose doc says "strict"). There are two flavours of I/O libraries, lazy and strict. There is advice of the form: "the program uses too much memory because it is too lazy; try making this part more strict". Eventually, someone will ask what "lazy" and "strict" are. Perhaps you answer this (but there are other answers, we'll see):

"lazy refers to such-and-such evaluation order. strict refers to f ⊥ = ⊥, but it doesn't specify evaluation order."

That doesn't answer the question. That begs the question: why do libraries seem to make them a dichotomy, when they don't even talk about the same level? And the make-it-more-strict advice now becomes: "the program uses too much memory because of the default, known evaluation order; try making this part use an unknown evaluation order", and this unknown is supposed to use less memory because...?

I answer memory questions like this: the program uses too much memory because it is too lazy---or never mind "lazy", here is the current evaluation order of the specific compiler, and this is why it uses so much memory; now change this part to the other order, and it uses less memory. I wouldn't bring in the denotational level; there is no need.

(Sure, I use seq to change evaluation order, which may be overridden by optimizations in rare cases. So change my answer to: now add seq here, which normally uses the other order, but optimizations may override it in rare cases, so don't forget to test. Or use pseq.)

I said "people, make up your mind". I do not mean that a whole group of people should, for the rest of their lives, make up the same mind and choose the same one semantics. I mean this: each individual, in each context, for each problem, just how many levels of semantics do you need to solve it? (Sure sure, I know contexts that need 4. What about daily programming problems: time, memory, I/O order?)

MigMit questioned me on the importance of using the words properly. Actually, I am fine with using the words improperly, too: "the program uses too much memory because it is too lazy, lazy refers to such-and-such evaluation order; try making this part more strict, strict refers to so-and-so evaluation order".
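A sketch of the kind of make-it-more-strict change this advice describes, assuming GHC without optimisations; sumLazy and sumSeq are illustrative names, not from the post:

sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc []     = acc
    go acc (x:xs) = go (acc + x) xs           -- the accumulator becomes a growing chain of thunks

sumSeq :: [Int] -> Int
sumSeq = go 0
  where
    go acc []     = acc
    go acc (x:xs) = let acc' = acc + x
                    in acc' `seq` go acc' xs  -- seq forces the accumulator at each step

main :: IO ()
main = print (sumSeq [1 .. 1000000])

With -O, GHC's strictness analysis may already rewrite sumLazy into the strict form; pseq, from the parallel package's Control.Parallel, pins the evaluation order more firmly than seq does.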

When I explain to people what strict/lazy/eager mean, I often say something like:

- Adjectives eager and lazy apply *only* to a global evaluation method: *eager* is C's evaluation style and *lazy* is that of Haskell.
- Adjective strict can be applied *both* to a global evaluation method and a specific function: if applied to an eval method then it's a synonym of "strict", and if applied to a function f it means 'f ⊥ = ⊥' (which I detail a little more), which is true for the strict State monad for instance (>>= would not allow its left argument to return ⊥).

Thus explaining why datatypes such as State or ByteString exist in *strict* and *lazy* flavours.
2011/12/28 Albert Y. C. Lai
[ snip. ]

- Adjective strict can be applied *both* to a global evaluation method and a specific function: if applied to an eval method then it's a synonym of "strict"
I of course meant a synonym of *"eager"*. Sorry.
I admit this definition might be a little liberal, but it helps understanding.
2011/12/28 Yves Parès
[ snip. ]

I got a glimpse of understanding of what you are talking about after reading the wiki [1]. It is still difficult to reason about the difference between lazy and non-strict without taking a look at the text. I hope somebody will make an effort to explain the differences better and put it in the wiki or in a feed (maybe Planet Haskell).
[1] http://www.haskell.org/haskellwiki/Lazy_vs._non-strict
2011/12/28 Yves Parès
[ snip. ]

I did read other wiki pages, and I guess I finally got it. Anyone who still feels lost, take a look at them [1,2,3,4].

If the HaskellWiki is right, then the Wikipedia article on evaluation strategies [5] is a bit misleading, as it classifies optimistic evaluation under "nondeterministic strategies" when it should be under the non-strict section. Also, it mixes the word "evaluation" with strict and non-strict, e.g.: "In non-strict evaluation, arguments to a function are not evaluated unless they are actually used in the evaluation of the function body." This is lazy evaluation. A non-strict implementation may evaluate the arguments of a function before calling it. The main difference is:

Strict semantics: the value of the function is 100% dependent on the values of all its arguments; any argument yielding bottom/error/non-termination/exception will result in the same _error-value_ for the function. A correct implementation of strict semantics is eager evaluation, where all the arguments are evaluated before evaluating the function.

Non-strict semantics: the value of the function may not need its arguments' values to be produced. If the function can produce its value without needing an argument's value, the function evaluates correctly even if the unused argument yields bottom/error/non-termination/exception. Lazy evaluation is one implementation of non-strict semantics, where the arguments are evaluated only when they are needed.

Although Haskell (GHC) doesn't do this, (0 * _|_) may yield 0 on a non-strict implementation.

Did I get it right?
[1] http://www.haskell.org/haskellwiki/Non-strict_semantics
[2] http://www.haskell.org/haskellwiki/Strict_semantics
[3] http://www.haskell.org/haskellwiki/Lazy_evaluation
[4] http://www.haskell.org/haskellwiki/Eager_evaluation
[5] http://en.wikipedia.org/wiki/Evaluation_strategy
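A sketch of the (0 * _|_) point as it plays out in GHC, where (*) on Int happens to be strict in both arguments; mulShort is a hypothetical shortcutting multiplication, not a library function:

import Control.Exception (SomeException, evaluate, try)

-- Hypothetical non-strict multiplication: never looks at its second argument
-- when the first is 0.
mulShort :: Int -> Int -> Int
mulShort 0 _ = 0
mulShort x y = x * y

main :: IO ()
main = do
  print (mulShort 0 undefined)   -- 0: the ⊥ argument is never demanded
  r <- try (evaluate (0 * (undefined :: Int))) :: IO (Either SomeException Int)
  print r                        -- Left ...: GHC's (*) on Int demands both arguments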
2011/12/28 Thiago Negri
[ snip. ]

Thiago Negri
Lazy evaluation is one implementation of non-strict semantics, where the arguments are evaluated only when they are needed.
I would say this:
* non-strict semantics require that no argument is evaluated unless needed.
* lazy evaluation is an implementation of non-strict semantics in which no argument is evaluated more than once.
As an example of something other than lazy, normal order reduction is non-strict, but arguments may be evaluated multiple times. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

Thiago Negri
2011/12/28 Jon Fairbairn:
* non-strict semantics require that no argument is evaluated unless needed.
That's not the case on optimistic evaluation.
Oops, yes. I should have said something like “non-strict semantics require that evaluation should terminate if there is a terminating reduction order” but went for something snappier (and missed). -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html (updated 2010-09-14)

On 12/28/11 10:23 AM, Jon Fairbairn wrote:
Thiago Negri writes:
Lazy evaluation is one implementation of non-strict semantics, where the arguments are evaluated only when they are needed.
I would say this:
* non-strict semantics require that no argument is evaluated unless needed.
I'm not sure that's quite right. As mentioned upthread, you can have eager non-strict languages.

I think the more correct way to view it is that strict semantics require that non-termination of evaluating arguments entails non-termination of evaluating the application ---i.e., if [[ x ]] == _|_ then [[ f x ]] == _|_---, whereas non-strict semantics do not have this requirement. Thus, we're allowed to evaluate unused arguments, provided that doing so does not inhibit us from giving the proper result for evaluating the application.

Some possibilities would be: to fork off a thread for evaluation of each subterm, or to eagerly evaluate arguments optimistically but then fall back to a non-strict evaluation model if the argument takes too long to finish, or to restrict ourselves to a total language and then use any strict semantics we like (since strict and non-strict coincide if there is no bottom), ...

-- Live well, ~wren

wren ng thornton
On 12/28/11 10:23 AM, Jon Fairbairn wrote:
Thiago Negri writes:
Lazy evaluation is one implementation of non-strict semantics, where the arguments are evaluated only when they are needed.
I would say this:
* non-strict semantics require that no argument is evaluated unless needed.
I'm not sure that's quite right.
I’m sure it’s not right (as was pointed out a while ago). I was in too much of a hurry to get to the next bit, namely giving a description of the difference between non-strict and lazy. Perhaps what I should have said to be almost as succinct but this time accurate is “non-strict semantics requires that the evaluation strategy terminate if there is any evaluation strategy that terminates”? -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

Full beta-reduction is certainly not strict, but it also doesn't guarantee termination even where termination is possible (i.e. it might indefinitely unfold a value without making progress). I don't think there is much you can say about non-strictness and termination.
Regards,
Dave
On Mon, Jan 9, 2012 at 3:01 AM, Jon Fairbairn
Perhaps what I should have said to be almost as succinct but this time accurate is “non-strict semantics requires that the evaluation strategy terminate if there is any evaluation strategy that terminates”?
-- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

(I’m somewhat dismayed that the error in my preliminary remark
has overshadowed the point of my original message — which was
about the distinction between lazy and non-strict. However…)
David Barbour
Full beta-reduction is certainly not strict
What, precisely, do you mean by “full beta-reduction”?
but it also doesn't guarantee termination even where termination is possible (i.e. it might indefinitely unfold a value without making progress).
That sounds very much to me like being overly strict.
I don't think there is much you can say about non-strictness and termination.
I would hope there would be, as that (or at least productivity) is the point of saying that Haskell has non-strict semantics.
On Mon, Jan 9, 2012 at 3:01 AM, Jon Fairbairn wrote:
Perhaps what I should have said to be almost as succinct but this time accurate is “non-strict semantics requires that the evaluation strategy terminate if there is any evaluation strategy that terminates”?
-- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

On 12/28/11 6:47 AM, Thiago Negri wrote:
I got a glimpse of understanding of what you are talking about after reading the wiki [1].
Still difficult to reason about the difference between lazy and non-strict without taking a look at the text.
One way to keep them distinct is to recall that "lazy" is quite a bit more specific than "non-strict". Let's take the wiki's definition: non-strict means evaluation from outside in (however problematic that may be). Now, just because we've specified this directionality of evaluation, that doesn't mean we've said everything we need to say about evaluation order. In particular, there are two common/obvious implementations of non-strict semantics: call-by-name, and call-by-need.

In call-by-name evaluation, we just pass in the argument to functions as a literal expression. This is easy to reason about mathematically, however it has poor performance because it can cause duplication of effort. Namely, if a function uses its argument N times, then beta-reduction will create N copies of the argument, each of which will be evaluated (or not) separately.

In call-by-need (aka "lazy") evaluation, we don't just pass the expression in, duplicating it if need be. Instead, we create a thunk and replace all references to the argument with references/pointers to the thunk. In this way, we can share the evaluation effort, since we only have to evaluate/mutate the thunk once and the result can be shared across all use sites.

In other words, in call-by-name beta-reduction is the familiar:

    (\x. a) $ b
    -----------
    [x |-> b] a

where [_|->_]_ is literal substitution of expressions for variables in expressions. Whereas in call-by-need, beta-reduction is something like:

    (\x. a) $ b
    --------------
    let x = b in a

where let_=_in_ is a primitive construct for expressing shared substructure. Notably, in call-by-need the locus of evaluation shifts to a, so we evaluate under let_=_in_. The locus of evaluation only ever moves to b if indeed x is forced within a.

In practice, GHC (and other Haskell compilers) are not lazy. That is, they do not exclusively use the call-by-need evaluation order. Instead, there is a hybridization of call-by-name, call-by-need, and call-by-value, the choice of which is based on the strictness analyzer and other optimization passes. The fact that GHC is allowed to do this and can still call itself "Haskell" stems from the fact that the semantics of Haskell are defined merely as being non-strict, rather than being more specific.

-- Live well, ~wren
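A small sketch of the sharing point, assuming GHC's call-by-need behaviour; Debug.Trace is used only to make the single evaluation visible:

import Debug.Trace (trace)

-- Uses its argument twice.
twice :: Int -> Int
twice x = x + x

main :: IO ()
main = print (twice (trace "argument evaluated" (2 + 3)))
-- Call-by-need: the argument is one shared thunk, so "argument evaluated" appears once
-- and the result is 10. Under textbook call-by-name, the substituted expression would be
-- evaluated once per use of x.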
participants (16):
- Albert Y. C. Lai
- Dan Doel
- David Barbour
- Edward Z. Yang
- Eugene Kirpichov
- Gregory Crosswhite
- Jon Fairbairn
- Lyndon Maydwell
- MigMit
- Murray Campbell
- Thiago Negri
- Tom Murphy
- Tony Morris
- wren ng thornton
- Yves Parès
- Yves Parès