Re: [Haskell-beginners] lazyness and tail recursion

Now I think I understand. Thanks for the explanation!
ovidiu
On Fri, Jul 29, 2011 at 12:04 AM, Will Ness wrote:
... the start of the function, so to speak.
If we have (f x = a : f b), the distance between this entry into f and the next one has constant length: just the single cons (a :). If we have (f x = "123" ++ f y), it's constant too, merely bigger than one. No problem: it all gets consumed in a constant number of steps, and there's no explosion in the number of operations.
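To make that concrete, here's a minimal sketch (the names countFrom and digits are just mine for illustration):

  -- Each entry into countFrom produces exactly one cons cell (n :)
  -- before the next recursive call: constant distance per element.
  countFrom :: Int -> [Int]
  countFrom n = n : countFrom (n + 1)

  -- A constant prefix longer than one element is just as harmless:
  -- three characters get consumed, then the recursion is re-entered.
  digits :: [Char]
  digits = "123" ++ digits

  -- ghci> take 5 (countFrom 0)
  -- [0,1,2,3,4]
  -- ghci> take 7 digits
  -- "1231231"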
But if we have e.g. (f x = f a ++ f b), then (f a) will add some operations, and its own calls will add some more, and it all grows out of control. All of that has to be maintained by the runtime before it finally gets to (f b). Not nice.
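A classic instance of that shape is naive tree flattening; a sketch with made-up names:

  data Tree = Leaf Int | Node Tree Tree

  -- (f a ++ f b)-shaped: the left operand of (++) gets its spine
  -- re-copied at every enclosing Node, so a left-leaning tree costs
  -- O(n^2), and the pending (++ flattenNaive r) work keeps piling up.
  flattenNaive :: Tree -> [Int]
  flattenNaive (Leaf x)   = [x]
  flattenNaive (Node l r) = flattenNaive l ++ flattenNaive r

  -- The usual fix: pass the rest of the list in, so every node
  -- contributes one cons at a constant distance (difference lists).
  flattenFast :: Tree -> [Int]
  flattenFast t = go t []
    where
      go (Leaf x)   rest = x : rest
      go (Node l r) rest = go l (go r rest)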
But anyway, I'm no expert; it's just how I explain it to myself, from Prolog's "tail recursion modulo cons" perspective. We don't need to wait for a function to return before prepending something onto its result; we prepend it to a yet-to-be-filled list and pass that stump into the tail-recursive call. Presto, all is fine. Same in Haskell: no later recursive call actually runs until everything before it has been consumed, so there's no problem as long as that prefix is constant in size. But I repeat myself. :)
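In Haskell the lazily built cons cell plays the role of that yet-to-be-filled hole. A tiny sketch (my own naming):

  -- The cons cell is produced first; its tail stays an unevaluated
  -- thunk (the "yet-to-be-filled" part) until a consumer demands it.
  myMap :: (a -> b) -> [a] -> [b]
  myMap _ []     = []
  myMap f (x:xs) = f x : myMap f xs  -- cons now, recurse later

  -- ghci> take 3 (myMap (*2) [1 ..])
  -- [2,4,6]   -- only three steps of recursion ever get forced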
Cheers, Will
--- On Thu, 7/28/11, Ovidiu Deac wrote:

If I got it right, the idea is to keep the number of operations between the recursive call and the end of the function constant.
Ovidiu Deac writes:

I've read the paragraph below from Real World Haskell
(http://book.realworldhaskell.org/read/efficient-file-processing-regular-expr...
section "An important aside: writing lazy functions").
I can get a feeling for why laziness compensates for the lack of tail recursion, but I don't really understand why it would work in general.
My understanding is that a function which returns a list built with non-tail recursion only works this way under certain usage patterns. In this case the caller of the function has to consume the string in a stream-like fashion.
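For example, here's my own sketch of what I mean by a stream-like usage pattern (upperAll is a made-up name):

  import Data.Char (toUpper)

  -- Guarded (non-tail) recursion: one cons per step.
  upperAll :: String -> String
  upperAll []     = []
  upperAll (c:cs) = toUpper c : upperAll cs

  -- Stream-like caller: fine even on infinite input, constant space.
  -- ghci> take 5 (upperAll (cycle "abc"))
  -- "ABCAB"

  -- Non-streaming caller: length (upperAll someHugeString) demands
  -- the whole result at once, so the incremental behaviour is lost.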
On Wed, Jul 27, 2011 at 1:38 AM, Will Ness wrote:

The concept of tail recursion modulo cons is similar. Wikipedia has an article on it. Basically it's about keeping one execution frame above the tail-recursive one. Lazily accessing a list from the top is a lot like adding elements to it one by one at the bottom. Prefixing a recursive call with a fixed number of operations is OK, because they will get consumed, and the recursive case entered, in a fixed number of steps; the distance between the current point and the next recursive invocation point won't grow, it only gets increased at times by a set amount and then gets consumed again.
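To illustrate the top-versus-bottom analogy, a sketch with invented names:

  -- Strict-language habit: build bottom-up with an accumulator
  -- (classic tail recursion), then reverse at the end.
  squaresUpTo :: Int -> [Int]
  squaresUpTo n = reverse (go 1 [])
    where
      go i acc
        | i > n     = acc
        | otherwise = go (i + 1) (i * i : acc)

  -- Lazy habit: produce from the top; the consumer's demand plays
  -- the role the accumulator played above, one element at a time.
  squares :: [Int]
  squares = go 1
    where
      go i = i * i : go (i + 1)

  -- ghci> take 4 squares
  -- [1,4,9,16]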
The problem with non-tail recursion in a lazy setting is that the distance between the current point and the next invocation grows, and so the amount of "what to do next?" information grows all the time. In an imperative setting it gets flipped into "what to do after the invocation returns", but it still grows. In tail recursion (even modulo cons, or (++) with a fixed prefix) it's bounded, finite, constant. Looking at the Scheme example code in the Wikipedia "continuation-passing style" article might help: there we see that in translations of tail-recursive functions the constructed continuation does not grow without limit; it only occasionally gets prefixed by a fixed amount of operations.
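The same contrast can be sketched directly in Haskell rather than Scheme (names are mine):

  -- Non-tail recursion in CPS: the continuation grows by one
  -- frame per list element before anything is returned.
  lengthCPS :: [a] -> (Int -> r) -> r
  lengthCPS []     k = k 0
  lengthCPS (_:xs) k = lengthCPS xs (\n -> k (n + 1))

  -- Tail recursion in CPS: the caller's continuation is passed
  -- along unchanged, so nothing accumulates.
  lengthAcc :: Int -> [a] -> (Int -> r) -> r
  lengthAcc acc []     k = k acc
  lengthAcc acc (_:xs) k = lengthAcc (acc + 1) xs k

  -- ghci> lengthCPS "abcd" print
  -- 4
  -- ghci> lengthAcc 0 "abcd" print
  -- 4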
Does it make any sense? :)
_______________________________________________
Beginners mailing list
Beginners@haskell.org
http://www.haskell.org/mailman/listinfo/beginners