laziness and tail recursion

Ovidiu Deac

I've read the paragraph below from Real World Haskell
(http://book.realworldhaskell.org/read/efficient-file-processing-regular-expr...
section "An important aside: writing lazy functions").

I can get a feeling why laziness compensates for the lack of tail recursion, but I don't really understand why it would work in general. My understanding is that a function which returns a list with non-tail recursion would only work this way with certain usage patterns: the caller of the function will have to consume the string in a stream-like fashion.

Can someone explain this better?

Thanks,
Ovidiu

PS: Here is the paragraph that I'm talking about:

"Looking at the globToRegex' function, we can see that it is not tail recursive. To see why, examine its final clause again (several of its other clauses are structured similarly).

-- file: ch08/GlobRegex.hs
globToRegex' (c:cs) = escape c ++ globToRegex' cs

It applies itself recursively, and the result of the recursive application is used as a parameter to the (++) function. Since the recursive application isn't the last thing the function does, globToRegex' is not tail recursive.

Why is our definition of this function not tail recursive? The answer lies with Haskell's non-strict evaluation strategy. Before we start talking about that, let's quickly talk about why, in a traditional language, we'd try to avoid this kind of recursive definition. Here is a simpler definition, of the (++) operator. It is recursive, but not tail recursive.

-- file: ch08/append.hs
(++) :: [a] -> [a] -> [a]
(x:xs) ++ ys = x : (xs ++ ys)
[]     ++ ys = ys

In a strict language, if we evaluate "foo" ++ "bar", the entire list is constructed, then returned. Non-strict evaluation defers much of the work until it is needed.

If we demand an element of the expression "foo" ++ "bar", the first pattern of the function's definition matches, and we return the expression x : (xs ++ ys). Because the (:) constructor is non-strict, the evaluation of xs ++ ys can be deferred: we generate more elements of the result at whatever rate they are demanded. When we generate more of the result, we will no longer be using x, so the garbage collector can reclaim it. Since we generate elements of the result on demand, and do not hold onto parts that we are done with, the compiler can evaluate our code in constant space."
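To see the "stream-like fashion" concretely, here is a small self-contained sketch (the names StreamDemo.hs and shout are invented for illustration; they are not from the book): a producer shaped exactly like globToRegex', with the recursive call feeding (++), consumed one character at a time.

-- file: StreamDemo.hs (hypothetical example)
import Data.Char (toUpper)

-- Not tail recursive: the recursive call is an argument to (++),
-- just like in globToRegex'. Under lazy evaluation each element is
-- produced only when the caller demands it.
shout :: String -> String
shout []     = []
shout (c:cs) = [toUpper c, '!'] ++ shout cs

main :: IO ()
main =
  -- The caller consumes the result as a stream: even an infinite
  -- input works, because only 20 characters are ever forced.
  putStrLn (take 20 (shout (cycle "abc")))

Running it prints "A!B!C!A!B!C!..." and terminates, even though cycle "abc" is infinite; that is the usage pattern under which the non-tail-recursive definition runs in constant space.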

Will Ness

Ovidiu Deac writes:

> I can get a feeling why laziness compensates for the lack of tail
> recursion, but I don't really understand why it would work in general.
> My understanding is that a function which returns a list with non-tail
> recursion would only work this way with certain usage patterns: the
> caller of the function will have to consume the string in a stream-like
> fashion.
The concept of tail recursion modulo cons is similar; Wikipedia has an article on it. Basically it's about keeping one execution frame above the tail-recursive one. Lazily accessing a list from the top is a lot like adding elements to it one by one at the bottom. Prefixing a recursive call with a fixed number of operations is OK, because they will get consumed, and the recursive case called, in a fixed number of steps: the distance between the current point and the next recursive invocation point won't grow; it only gets increased at times by a set amount and then gets consumed.

The problem with non-tail recursion in a lazy setting is that the distance between the current point and the next invocation grows, so the amount of "what to do next?" information grows all the time. In an imperative setting it gets flipped into "what to do after the invocation", but it still grows. In tail recursion (even modulo cons, or (++) with a fixed prefix) it's bounded, finite, constant. Looking at the Scheme example code in the Wikipedia "continuation-passing style" article might help: there we see that in the translations of tail-recursive functions the constructed continuation does not grow without limit, instead only getting prefixed by a fixed amount of operations at times.

Does it make any sense? :)
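A minimal sketch of the two shapes Will contrasts (the names countdown and sumR are invented for this example): guarded recursion, where a fixed amount of work, one cons, sits in front of each recursive call, next to non-guarded recursion, where the deferred work piles up with the depth of the recursion.

-- file: GrowthDemo.hs (hypothetical example)

-- Guarded recursion: one (:) in front of the recursive call, so the
-- "distance" to the next invocation is constant. A lazy consumer can
-- demand elements one at a time in constant space.
countdown :: Int -> [Int]
countdown 0 = []
countdown n = n : countdown (n - 1)

-- Non-guarded, non-tail recursion: one deferred (x +) per element,
-- so the "what to do next?" information grows with the input.
sumR :: [Int] -> Int
sumR []     = 0
sumR (x:xs) = x + sumR xs

main :: IO ()
main = do
  print (take 3 (countdown 10000000)) -- constant space, on demand
  print (sumR [1 .. 100000])          -- ~100000 pending additions

The first print is cheap because only three conses are ever built; the second must stack up roughly 100000 pending additions before any of them can be reduced, which is exactly the growing distance described above.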

Ovidiu Deac

If I got it right, the idea is to keep the number of operations between the recursive call and the end of the function constant.
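In continuation-passing style that constant bound is directly visible, because the "what to do next" information becomes an explicit function argument. Here is a minimal Haskell rendering of the point Will makes about the Wikipedia Scheme examples (factC and factT are invented names):

-- file: CpsDemo.hs (hypothetical example)

-- Non-tail factorial in CPS: every step wraps the continuation in
-- one more (n *) layer, so the continuation grows linearly with n.
factC :: Integer -> (Integer -> r) -> r
factC 0 k = k 1
factC n k = factC (n - 1) (\r -> k (n * r))

-- Tail-recursive factorial with an accumulator: the continuation is
-- passed along unchanged, so its size stays constant.
factT :: Integer -> Integer -> (Integer -> r) -> r
factT 0 acc k = k acc
factT n acc k = factT (n - 1) (n * acc) k

main :: IO ()
main = do
  factC 10 print   -- 3628800, built through a 10-layer continuation
  factT 10 1 print -- 3628800, the continuation is always just print

Both compute the same result, but only factC's continuation grows with the input, mirroring the unbounded "what to do after the invocation" of non-tail recursion.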
participants (2)
- Ovidiu Deac
- Will Ness