
G'day all.
Quoting TJ:

> I would think that with 100% laziness, nothing would happen until the Haskell program needed to output data to, e.g., the console. Quite obviously that's not it. So how is laziness defined in Haskell?
It means that the program behaves as if "things" are evaluated if and only if they are needed. "Needed", in the Haskell sense, means "needed to do I/O". It does not, of course, guarantee the order of evaluations, merely that the program acts as if a thing is evaluated only when it's needed. It also doesn't guarantee that unneeded evaluation won't take place; it just means that if it does, it will happen in such a way that it won't destroy the illusion. "Full laziness", which Haskell does not guarantee but does allow, goes one step further: a "thing" will be evaluated AT MOST once if it's ever needed.
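A minimal sketch of this demand-driven behaviour (standard Haskell; the names are mine):

```haskell
-- Only the demanded prefix of an infinite list is ever built,
-- and an unneeded component of a pair may even be bottom
-- ('undefined') without harm.

firstThree :: [Int]
firstThree = take 3 [1 ..]      -- the rest of [1 ..] is never constructed

safeFirst :: Int
safeFirst = fst (1, undefined)  -- snd is never demanded, so no crash

main :: IO ()
main = do
  print firstThree              -- demanded by I/O, so evaluated here
  print safeFirst
```

Note that it's the I/O in `main` that creates the demand; until `print` runs, both definitions are just unevaluated thunks.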
> I remember vaguely someone saying that pattern matching on a value forces it to be evaluated. Is that right? What else?
Remember that the pattern match itself will only be performed if its result is needed for some other reason, which eventually boils down to I/O. (Note: we're assuming the absence of seq, which confuses everything.)
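To illustrate the point about pattern matching: matching against a constructor evaluates the scrutinee only as far as its outermost constructor (weak head normal form), and seq does the same thing explicitly. A sketch, with names of my choosing:

```haskell
-- Matching (0 : _) forces the first cons cell and its head,
-- but never touches the tail.
headIsZero :: [Int] -> Bool
headIsZero (0 : _) = True
headIsZero _       = False

-- 'seq a b' evaluates 'a' to weak head normal form, then returns 'b'.
forceThenReturn :: a -> b -> b
forceThenReturn a b = a `seq` b
```

So `headIsZero (0 : undefined)` is True: the undefined tail is never demanded, because the match only needed the head.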
> This is one of the things that just boggles my mind every time I try to wrap it around this thing called Haskell ;)
The cool part is that for the most part, it doesn't matter. It just works. If you ever come across a case where it doesn't just work (e.g. if you need a tilde pattern), you'll be ready for it.

Cheers,
Andrew Bromage
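P.S. Since tilde patterns came up, a sketch of the difference (names are mine): a tilde (irrefutable) pattern always matches, deferring deconstruction until one of the bound variables is actually demanded.

```haskell
-- A strict match forces the pair to weak head normal form as
-- soon as the match is performed; strictWrap undefined would
-- crash the moment its result is inspected.
strictWrap :: (a, b) -> Maybe a
strictWrap (a, _) = Just a

-- A tilde pattern does not force the pair at all: the match
-- always succeeds, so lazyWrap undefined is a perfectly good
-- Just whose contents happen to be bottom.
lazyWrap :: (a, b) -> Maybe a
lazyWrap ~(a, _) = Just a
```

That's why tilde patterns occasionally rescue you: checking that `lazyWrap undefined` is a Just succeeds, whereas the same check on `strictWrap undefined` diverges.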