
I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g., the console. Quite obviously that's not it. So how is laziness defined in Haskell?

I remember vaguely someone saying that pattern matching on a value forces it to be evaluated. Is that right? What else?

This is one of the things that just boggles my mind every time I try to wrap it around this thing called Haskell ;)

Cheers,
TJ

G'day all.
Quoting TJ
I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g. the console. Quite obviously that's not it. So how is laziness defined in Haskell?
It means that the program behaves as if "things" are evaluated if and only if they are needed. "Needed", in the Haskell sense, means "needed to do I/O". It does not, of course, guarantee the order of evaluation, merely that the program acts as if a thing will only be evaluated if it's needed. It also doesn't guarantee that unneeded evaluation won't take place; it just means that if it does, it will happen in such a way that it won't destroy the illusion. "Full laziness", which Haskell does not guarantee but does allow, goes one step further: a "thing" will be evaluated AT MOST once, if it's ever needed.
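As a small illustration of "evaluated only if needed": an infinite structure is perfectly fine as long as only a finite part of it is ever demanded. A minimal sketch (the names here are just for illustration):

```haskell
-- An infinite list of ones. As a definition it costs nothing;
-- it is only ever evaluated as far as someone demands it.
ones :: [Int]
ones = 1 : ones

main :: IO ()
main = print (take 3 ones)   -- only three cells are ever forced; prints [1,1,1]
```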
I remember vaguely someone saying that pattern matching on a value forces it to be evaluated. Is that right? What else?
Remember that the pattern match code itself will only be evaluated if it needs to be for some other reason, which eventually boils down to I/O. (Note: We're assuming the absence of seq, which confuses everything.)
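For reference, `seq` is the primitive that forces evaluation to weak head normal form independently of any I/O demand, which is why it "confuses everything". A sketch of the difference between merely binding a bottom value and forcing it:

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- Binding a bottom value is harmless; nothing demands it yet.
  let x = undefined :: Int
  -- (x `seq` ()) forces x to WHNF before producing (), so it throws.
  r <- try (evaluate (x `seq` ())) :: IO (Either SomeException ())
  putStrLn (either (const "seq forced the bottom")
                   (const "no evaluation happened") r)
```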
This is one of the things that just boggles my mind everytime I try to wrap it around this thing called Haskell ;)
The cool part is that for the most part, it doesn't matter. It just works. If you ever come across a case where it doesn't just work (e.g. if you need a tilde pattern), you'll be ready for it. Cheers, Andrew Bromage
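For the record, a tilde (irrefutable) pattern is one of those cases: it defers the match until one of the bound variables is actually used. A minimal sketch:

```haskell
-- Without the tilde, f undefined would force the pair match and throw.
-- With ~, the pair is never inspected, so the argument stays a thunk.
f :: (Int, Int) -> Int
f ~(a, b) = 42

main :: IO ()
main = print (f undefined)   -- prints 42
```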

On 2/5/07, ajb@spamcop.net wrote:
Quoting TJ
I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g. the console. Quite obviously that's not it. So how is laziness defined in Haskell?
It means that the program behaves as if "things" are evaluated if and only if they are needed. "Needed" in the Haskell sense, means "needed to do I/O".
So it's just IO which makes things run huh? OK that's basically what I said there. Cool.
This is one of the things that just boggles my mind everytime I try to wrap it around this thing called Haskell ;)
The cool part is that for the most part, it doesn't matter. It just works. If you ever come across a case where it doesn't just work (e.g. if you need a tilde pattern), you'll be ready for it.
I despise using what I don't understand. It's a big problem but one
which is more insurmountable than understanding Haskell, I think...
On 2/5/07, Andrew Wagner wrote:
I found it useful to work through an example where lazy evaluation was important, and wrote it up in a tutorial. It may or may not help you, no guarantees, but here it is: http://www.haskell.org/haskellwiki/Haskell/Lazy_Evaluation
With the code from your tutorial,

    magic :: Int -> Int -> [Int]
    magic 0 _ = []
    magic m n = m : (magic n (m+n))

    getIt :: [Int] -> Int -> Int
    getIt [] _ = 0
    getIt (x:xs) 1 = x
    getIt (x:xs) n = getIt xs (n-1)

and the example expression,

    getIt (magic 1 1) 3

It only gets run (starts pattern matching and all) if I do a print on it, or run it from GHCi (which will do the "print" for me), right?

Thanks,
TJ

tjay.dreaming:
On 2/5/07, ajb@spamcop.net wrote:
Quoting TJ
I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g. the console. Quite obviously that's not it. So how is laziness defined in Haskell?
It means that the program behaves as if "things" are evaluated if and only if they are needed. "Needed" in the Haskell sense, means "needed to do I/O".
So it's just IO which makes things run huh? OK that's basically what I said there. Cool.
Exactly, no IO, no computation required:

    $ cat A.hs
    main = do
        let v = last [1..]   -- could be very slow...
        return ()

    $ time ./a.out
    ./a.out  0.00s user 0.00s system 0% cpu 0.003 total

:) Laziness: you know it makes sense.

-- Don

G'day all.

tjay.dreaming:
So it's just IO which makes things run huh? OK that's basically what I said there. Cool.
Yeah, but you said "output". Sending a signal to another process in Unix is I/O, which would force the process id to be evaluated, but there's no output as such. Cheers, Andrew Bromage

I went through the entry on laziness on the wikipedia wikibook. Very
nice. The wikibook sure has grown a lot since I last visited.
http://en.wikibooks.org/wiki/Haskell/Laziness
I believe I've got it now. By it I mean the understanding of laziness
in Haskell. Even though Haskell is, strictly speaking, not lazy, but
non-strict. "It" being but read and thought about, and not practiced,
might prove _itself_ to become Undefined as I evaluate it further. :D
Cheers,
TJ
On 2/5/07, ajb@spamcop.net wrote:
G'day all.
tjay.dreaming:
So it's just IO which makes things run huh? OK that's basically what I said there. Cool.
Yeah, but you said "output". Sending a signal to another process in Unix is I/O, which would force the process id to be evaluated, but there's no output as such.
Cheers,
Andrew Bromage

_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

TJ said:
I went through the entry on laziness on the wikipedia wikibook. Very nice. The wikibook sure has grown a lot since I last visited.
Thanks for the link. I hadn't seen that before. Although it covers irrefutable (lazy) pattern matching in the second section, it does appear to miss the point that let bindings are always irrefutable. Thus, there is no difference between these two:

    let (x,y) = foo in ...
    let ~(x,y) = foo in ...

I need to have a closer look when I have some more time, but I think this might have led to some inaccuracies in the first section (Thunks and WHNF).

FYI, the Haskell 98 Report defines the non-recursive let binding in terms of a case expression with an irrefutable pattern match. So, the following are equivalent:

    let (x,y) = foo in ...
    case foo of ~(x,y) -> ...

But note that function arguments are defined in terms of case expressions (whose patterns are refutable), so the following are *not* equivalent (example from the wikibook):

    let f (x,y) = 1 in f undefined
    let f ~(x,y) = 1 in f undefined

Confused again?
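A runnable sketch of "let bindings are always irrefutable" (the names `safe` and `deferred` are just illustrative):

```haskell
-- The pattern match in a let binding is deferred until x or y is demanded.
safe :: Int
safe = let (x, y) = (1 :: Int, 2 :: Int) in x + y

-- Binding a pattern against undefined is therefore harmless,
-- as long as the bound variables are never used.
deferred :: String
deferred = let (x, y) = undefined :: (Int, Int) in "never matched"

main :: IO ()
main = do
  print safe         -- prints 3
  putStrLn deferred  -- prints "never matched"; the tuple match never runs
```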
I believe I've got it now. By it I mean the understanding of laziness in Haskell. Even though Haskell is, strictly speaking, not lazy, but non-strict. "It" being but read and thought about, and not practiced, might prove _itself_ to become Undefined as I evaluate it further. :D
Nicely put. I think you are getting it. :-)

G'day all.
Quoting Matthew Brecknell
Although it covers irrefutable (lazy) pattern matching in the second section, it does appear to miss the point that let bindings are always irrefutable.
Thus, there is no difference between these two:
    let (x,y) = foo in ...
    let ~(x,y) = foo in ...
    let (x,()) = (1,undefined) in x
    let (x,~()) = (1,undefined) in x

Cheers,
Andrew Bromage

I said:
Although it covers irrefutable (lazy) pattern matching in the second section, it does appear to miss the point that let bindings are always irrefutable.
Thus, there is no difference between these two:
    let (x,y) = foo in ...
    let ~(x,y) = foo in ...
Andrew Bromage said:
    let (x,()) = (1,undefined) in x
    let (x,~()) = (1,undefined) in x
In other words, the irrefutability of a pattern match does not distribute inside the top-level data constructor of the pattern. See also the second rule in section 3.17.2 of the Revised Haskell 98 Report (Informal Semantics of Pattern Matching).
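A concrete sketch of that rule. The let binding itself defers the top-level match, but once x is demanded, the whole pattern (including the inner () match) is matched at once:

```haskell
-- Forcing x in this binding would also force the () match against
-- undefined, so it diverges when demanded:
--   bad = let (x, ()) = (1 :: Int, undefined) in x   -- throws if evaluated

-- With ~() the inner match is deferred as well, so x is just 1:
good :: Int
good = let (x, ~()) = (1 :: Int, undefined) in x

main :: IO ()
main = print good   -- prints 1
```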

G'day all.
Quoting Matthew Brecknell
In other words, the irrefutability of a pattern match does not distribute inside the top-level data constructor of the pattern.
I wasn't disagreeing with you, which is why I didn't comment. Note also that if Haskell prime incorporates bang patterns, and hence "strict lets", this will make the situation even more subtle. Cheers, Andrew Bromage

On 05/02/07, TJ wrote:
I went through the entry on laziness on the wikipedia wikibook. Very nice. The wikibook sure has grown a lot since I last visited.
Great! I was about to suggest this! This is one of our not-really-complete-but-getting-there articles; it's more a jumble of unlinked sections than a full flowing article but it still contains a lot of useful stuff. -- -David House, dmhouse@gmail.com

I found it useful to work through an example where lazy evaluation was
important, and wrote it up in a tutorial. It may or may not help you,
no guarantees, but here it is:
http://www.haskell.org/haskellwiki/Haskell/Lazy_Evaluation
Any comments are welcome!
Andrew
On 2/4/07, TJ wrote:
I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g. the console. Quite obviously that's not it. So how is laziness defined in Haskell?
I remember vaguely someone saying that pattern matching on a value forces it to be evaluated. Is that right? What else?
This is one of the things that just boggles my mind everytime I try to wrap it around this thing called Haskell ;)
Cheers,
TJ

I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g. the console.
In many cases, that's exactly what it's like.
Quite obviously that's not it. So how is laziness defined in Haskell?
In fact, Haskell is not defined as lazy, it is defined as non-strict.
From what I understand, non-strict semantics are (very informally) about choosing an order of evaluation that, where possible, avoids non-termination or error. Laziness is about evaluating only what's needed. In any case, I think all of the mainstream Haskell compilers and interpreters implement the non-strict semantics by means of lazy evaluation, so unless you're working on a radical new Haskell implementation, you probably don't need to worry about the difference. There's a wiki stub about this, though it doesn't say much more than what I've just said:
http://haskell.org/haskellwiki/Lazy_vs._non-strict
I remember vaguely someone saying that pattern matching on a value forces it to be evaluated. Is that right? What else?
Yes, but you need to be more precise about when this pattern matching occurs, what you mean by "it", and the extent to which "it" is evaluated. See below about "weak head normal form".
This is one of the things that just boggles my mind everytime I try to wrap it around this thing called Haskell ;)
If it's any comfort, you're not alone. It takes some discipline to forget your previous notions of computation, and to start thinking lazily.

Get familiar with the notion of the "thunk": a record which the implementation stores on the heap in place of the result of some computation which has not yet been performed. A thunk typically records a reference to a function and some arguments, but remember that the arguments (and indeed, the function) in all likelihood haven't been evaluated yet, so they might well be thunks too. A thunk sits on the heap until either the garbage collector decides it's no longer needed (in which case the computation is never performed), or until the implementation finds it needs the value (or part of it).

Also get familiar with "weak head normal form" (WHNF). I'm not 100% on this, so hopefully someone will correct me if I'm wrong. Let's say the implementation gets to a point where it needs to perform a pattern match on a value. So far, the value is just an unevaluated thunk on the heap. To perform the pattern match, the implementation needs to evaluate that thunk to its top-level data constructor. It doesn't need to go any further than that just yet (unless the pattern also includes a sub-pattern inside the top-level data constructor). So a value that is evaluated to its top-level data constructor is said to be in "weak head normal form".

The result of the pattern match (aside from allowing the implementation to choose between alternative patterns based on the data constructor) is typically that one or more variables are bound to values inside the data constructor. Of course, those values are most likely unevaluated thunks, at least until further pattern matching is necessary.

Hopefully, that's clear enough to be of some use.
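A small sketch of evaluation to WHNF (the names are illustrative). Matching the (:) constructor forces the list only to its first cell; the infinite tail stays a thunk:

```haskell
-- An infinite list; it can never be fully evaluated.
xs :: [Int]
xs = map (* 2) [1 ..]

-- The case expression forces xs only to its top-level constructor (:),
-- then forces y (the first element) for the comparison. The tail _ is
-- bound but never evaluated.
headIsTwo :: Bool
headIsTwo = case xs of
  (y : _) -> y == 2
  []      -> False

main :: IO ()
main = print headIsTwo   -- prints True
```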
participants (6):
- ajb@spamcop.net
- Andrew Wagner
- David House
- dons@cse.unsw.edu.au
- Matthew Brecknell
- TJ