
2008/6/4 apfelmus
[...] But it can waste space (-> "space leak"), for instance by accumulating a big expression like
(..) -> ((..)+1) -> (((..) + 1) + 1) -> etc.
instead of evaluating x+1 immediately
5 -> 6 -> 7 -> etc.
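To make that accumulation concrete, here is a small sketch of my own (the names are mine, not from the mail above): a counter that repeatedly adds 1. With a lazy accumulator the intermediate results pile up as one nested thunk; forcing the accumulator at each step keeps it at a single number, 5, 6, 7, ...

    -- Counting up from 5 by repeated (+1).
    -- With a lazy accumulator, 'acc' grows into (((5 + 1) + 1) + 1) ...
    -- and is only evaluated at the very end (linear space).
    countLazy :: Int -> Int -> Int
    countLazy acc 0 = acc
    countLazy acc n = countLazy (acc + 1) (n - 1)

    -- Forcing the accumulator at every step keeps it evaluated:
    -- 5, 6, 7, ... (constant space).
    countStrict :: Int -> Int -> Int
    countStrict acc 0 = acc
    countStrict acc n = let acc' = acc + 1
                        in acc' `seq` countStrict acc' (n - 1)

    main :: IO ()
    main = print (countStrict 5 10000000)
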
So it is called a "space leak" because the asymptotic space required is greater in a lazy setting than in a strict one. Then, what about:

    sum . map (\x -> 2 * x + 42) $ [1..9999999]

Provided sum works in constant space (it needs proper tail calls and the right level of strictness; see the PS below for a sketch), this runs in constant space in Haskell, deforestation or not. The equivalent code in, say, OCaml, however, will run in linear space in the absence of deforestation. So in this case I find it legitimate to talk about a "strictness space leak".

Well, is it? Of course it uses linear space, there is no "leak"! Hmm. It feels like only lazy evaluation can be accused of a space leak, while strict evaluation cannot. Are we biased in favour of strict evaluation? I mean, if we take strict evaluation as the default (because it's mainstream), it is easy to think that lazy evaluation is cool when better, but unacceptable when worse. Hence the word "leak".

Just my two cents,
Loup
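
PS: a rough sketch (again, names mine; this is just one way to get "the right level of strictness") of a sum that does run in constant space, next to the naive lazy one:

    import Data.List (foldl')

    -- naive sum: foldl builds the whole 0 + x1 + x2 + ... chain as a thunk,
    -- so it needs space linear in the length of the list
    sumLazy :: [Int] -> Int
    sumLazy = foldl (+) 0

    -- strict, tail-recursive sum: the accumulator is forced at every step,
    -- so the pipeline below runs in constant space
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0

    main :: IO ()
    main = print . sumStrict . map (\x -> 2 * x + 42) $ [1..9999999]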