
Ovidiu Deac
I've read the paragraph below from Real World Haskell
(http://book.realworldhaskell.org/read/efficient-file-processing-regular-expr...
section "An important aside: writing lazy functions")
I have a feeling for why laziness compensates for the lack of tail recursion, but I don't really understand why it would work in general.
My understanding is that a function which returns a list via non-tail recursion only works this way with certain usage patterns: the caller of the function has to consume the string in a stream-like fashion.
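For example, I imagine something like this (my own toy sketch, not the book's code), where upcase is non-tail-recursive but the caller only ever takes a prefix:

  import Data.Char (toUpper)

  -- Plain (non-tail) recursion: the cons cell is built in front of the
  -- recursive call, so the head is available before any deeper recursion
  -- happens.
  upcase :: String -> String
  upcase []     = []
  upcase (c:cs) = toUpper c : upcase cs

  main :: IO ()
  main =
    -- Stream-like usage: 'take 12' only forces the first twelve cons
    -- cells, so the recursion never runs deeper than what is demanded,
    -- even though the input here is infinite.
    putStrLn (take 12 (upcase (cycle "hello ")))

If the caller instead forced the whole result at once (say with length), the recursion itself wouldn't change, but the entire spine of the list would have to be built before an answer appeared.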
The concept of tail recursion modulo cons is similar; Wikipedia has an article on it. Basically it's about keeping one execution frame above the tail-recursive one. Lazily accessing a list from the top is a lot like adding elements to it one by one at the bottom.

Prefixing a recursive call with a fixed number of operations is OK, because those operations get consumed and the recursive case is reached in a fixed number of steps: the distance between the current point and the next recursive invocation doesn't grow, it only gets increased by a set amount at times and then gets consumed again. The problem with non-tail recursion in a lazy setting is that the distance between the current point and the next invocation grows, and so the amount of "what to do next?" information grows all the time. In an imperative setting this gets flipped into "what to do after the invocation", but it still grows. In tail recursion (even modulo cons, or with (++) and a fixed prefix) it is bounded, finite, constant.

Looking at the Scheme example code in the Wikipedia "continuation-passing style" article might also help. There we see that in the translations of tail-recursive functions the constructed continuation does not grow without limit; at most it gets prefixed by a fixed number of operations from time to time. Does it make any sense? :)
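Concretely, here's a rough Haskell sketch of the contrast (made-up names, not from the book or the Wikipedia article):

  -- Guarded recursion ("modulo cons"): the (:) is emitted before the
  -- recursive call, so each element is reached after a bounded amount of
  -- work and the pending "what to do next" information never grows.
  countUp :: Int -> [Int]
  countUp n = n : countUp (n + 1)

  -- Non-guarded recursion: the recursive call is buried on the left of
  -- (++), so the distance to the first available element grows with n;
  -- nothing can be produced until the recursion bottoms out.
  countDown :: Int -> [Int]
  countDown 0 = [0]
  countDown n = countDown (n - 1) ++ [n]

  main :: IO ()
  main = do
    print (take 5 (countUp 0))    -- constant work per element, works even
                                  -- though the list is infinite
    print (take 5 (countDown 10)) -- [0,1,2,3,4], but only after recursing
                                  -- all the way down to 0 first

In countUp a lazy consumer drives the recursion one step per element demanded; in countDown the pending (++) chain is exactly the growing "what to do next?" information described above.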