Now I think I understand. Thanks for the explanation!

ovidiu

On Fri, Jul 29, 2011 at 12:04 AM, Will Ness <will_n48@yahoo.com> wrote:
... the start of the function, so to speak.
 
If we have (f x = a : f b), the distance between this entry into f and the next one has constant length: (a :). If we have (f x = "123" ++ f y), it's constant too, just bigger than 1 - no problem; it will all get consumed in a constant number of steps, and there's no explosion in the number of operations.
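 
E.g., a quick sketch (the names are made up, just to show the shapes):
 
  nats :: Int -> [Int]
  nats n = n : nats (n + 1)      -- the (f x = a : f b) shape
 
  threes :: Int -> String
  threes n = "123" ++ threes n   -- constant-size prefix, bigger than 1
 
  -- both are consumed from the top in constant space:
  -- take 5 (nats 0)   ==> [0,1,2,3,4]
  -- take 9 (threes 0) ==> "123123123"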
 
But if we have e.g. (f x = f a ++ f b), then (f a) will add some operations, and its own calls will add some more, and it all grows out of control - and all of this has to be maintained by the system before it ever gets to the final (f b). Not nice.
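 
A sketch of that bad shape (the Tree type is made up, just for illustration):
 
  data Tree = Leaf Int | Node Tree Tree
 
  flatten :: Tree -> [Int]
  flatten (Leaf n)   = [n]
  flatten (Node l r) = flatten l ++ flatten r
    -- the (f x = f a ++ f b) shape: pending (++)s pile up along the
    -- left spine before the first element can even be produced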
 
But anyway, I'm no expert; it's just how I explain it to myself, from Prolog's "tail recursion modulo cons" perspective. We don't need to wait for a function to return in order to prepend something onto its result; we prepend it to a yet-to-be-filled list and pass that stump into the tail-recursive call. Presto, all is fine. Same in Haskell: no second function call will actually be made until what's before it gets consumed - so there's no problem if that is constant in size. But I repeat myself. :)
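 
In Haskell that trick might look like this (a sketch, reusing the made-up Tree from above):
 
  flatten2 :: Tree -> [Int]
  flatten2 t = go t []
    where
      go (Leaf n)   rest = n : rest          -- prepend onto the yet-unfilled rest
      go (Node l r) rest = go l (go r rest)  -- the stump (go r rest) stays an
                                             -- unevaluated thunk until the left
                                             -- part has been consumed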
 
Cheers,
  Will


--- On Thu, 7/28/11, Ovidiu Deac <ovidiudeac@gmail.com> wrote:

From: Ovidiu Deac <ovidiudeac@gmail.com>
Subject: Re: [Haskell-beginners] lazyness and tail recursion
To: "Will Ness" <will_n48@yahoo.com>
Cc: beginners@haskell.org
Date: Thursday, July 28, 2011, 10:27 AM


If I got it right, the idea is to keep the number of operations
between the recursive call and the end of the function constant.

On Wed, Jul 27, 2011 at 1:38 AM, Will Ness <will_n48@yahoo.com> wrote:
> Ovidiu Deac <ovidiudeac <at> gmail.com> writes:
>
>>
>> I've read the paragraph below from Real World Haskell
>>
>> (http://book.realworldhaskell.org/read/efficient-file-processing-regular-expressions-and-file-name-matching.html
>> section "An important aside: writing lazy functions")
>>
>> I can get a feeling for why laziness compensates for the lack of tail
>> recursion, but I don't really understand why it would work in general.
>>
>> My understanding is that a function which returns a list with
>> non-tail recursion would only work this way with certain usage
>> patterns. In this case the caller of the function will have to use the
>> string in a stream-like fashion.
>
> The concept of tail recursion modulo cons is similar. Wikipedia has an article.
> Basically it's about keeping one execution frame above the tail-recursive one.
> Lazily accessing a list from the top is a lot like adding elements to it one by
> one at the bottom. Prefixing a recursive call with a fixed number of
> operations is OK, because they will get consumed, and the recursive case
> called, in a fixed number of steps - the distance between the current point
> and the next recursive invocation point won't grow; it only gets increased
> at times by a set amount and then gets consumed.
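>
> E.g. (a quick sketch; map' here is just the standard map, renamed):
>
>   map' :: (a -> b) -> [a] -> [b]
>   map' _ []     = []
>   map' f (x:xs) = f x : map' f xs
>       -- one cons cell "above" the recursive call; the consumer forces
>       -- the tail only after using the head, so the distance between
>       -- invocation points stays constant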
>
> The problem with non-tail recursion in a lazy setting is that the distance
> between the current point and the next invocation grows, and so the amount of
> "what to do next?" information grows all the time. In an imperative setting it
> gets flipped into "what to do after the invocation", but it still grows. In
> tail recursion (even modulo cons, or (++) with a fixed prefix) it's bounded,
> finite, constant. Looking at the Scheme example code in the WP
> "continuation-passing style" article might help. There we see that in the
> translations of tail-recursive functions the constructed continuation does not
> grow in an unbounded fashion; instead it only gets prefixed, at times, by a
> fixed number of operations.
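>
> A tiny Haskell rendition of that contrast (made-up names; the WP article
> uses Scheme):
>
>   -- non-tail recursion in CPS: the continuation grows at every step
>   sumCPS :: [Int] -> (Int -> r) -> r
>   sumCPS []     k = k 0
>   sumCPS (x:xs) k = sumCPS xs (\s -> k (x + s))
>
>   -- tail recursion (with an accumulator) in CPS: the very same k is
>   -- passed along unchanged, so the continuation stays bounded
>   sumAccCPS :: Int -> [Int] -> (Int -> r) -> r
>   sumAccCPS a []     k = k a
>   sumAccCPS a (x:xs) k = sumAccCPS (a + x) xs k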
>
> Does it make any sense? :)
>
>
> _______________________________________________
> Beginners mailing list
> Beginners@haskell.org
> http://www.haskell.org/mailman/listinfo/beginners
>