
Malcolm Wallace wrote:
Anthony Chaumas-Pellet wrote:
Why is tail recursion a bad thing for a finite function? Is tail recursion simply not the most common Haskell idiom, or is there some technical reason I fail to see?
Tail recursion tends to create space-leaks, which in turn hurt time performance.
That's only part of the truth. Take for example the ordinary version of

    or :: (a -> Bool) -> [a] -> Bool
    or p []     = False
    or p (x:xs) = p x || or p xs

and its tail-recursive cousin

    or' p []     b = b
    or' p (x:xs) b = or' p xs (b || p x)

Apparently, the function is True if there is some element in the list that fulfills the condition p. Clearly, we can stop traversing the list as soon as we find the first element that satisfies p. Now, the tail-recursive version is doomed to traverse the entire list, but thanks to lazy evaluation, the ordinary version stops as soon as an element x with p x is found.

Of course, one could alter the definition of or' to achieve early stopping

    or' _ _      True  = True
    or' p (x:xs) False = or' p xs (p x)
    or' _ []     False = False

but this is not advisable: we have merely unfolded the definition of (||) into the recursion. In the ordinary case, however, (||) already contains all the magic of early stopping; the recursion scheme has nothing to do with it.

Thus, the most common Haskell idiom concerning tail recursion is not to think about it (and hence not to use it). Instead, return values as early as possible (in some cases, (||) can return a definite answer by looking at its first argument only). Note that this is very different from strict functional languages.

Regards,
apfelmus
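
A minimal sketch of the point above (assuming GHC; the Prelude's own or is hidden to avoid the name clash, and the main wrapper is only there to make the file runnable): on an infinite list the lazy version answers as soon as it sees a satisfying element, while the accumulator version can never reach its base case.

    import Prelude hiding (or)

    -- Ordinary version: (||) short-circuits, so the recursion stops
    -- at the first element that satisfies p.
    or :: (a -> Bool) -> [a] -> Bool
    or p []     = False
    or p (x:xs) = p x || or p xs

    -- Tail-recursive version: the accumulator is only returned when the
    -- end of the list is reached, so the whole spine must be traversed.
    or' :: (a -> Bool) -> [a] -> Bool -> Bool
    or' _ []     b = b
    or' p (x:xs) b = or' p xs (b || p x)

    -- (||) itself carries the early stopping:
    --   True  || _ = True
    --   False || y = y

    main :: IO ()
    main = do
        print (or (> 2) [1 ..])           -- True, after inspecting only 1, 2, 3
        -- print (or' (> 2) [1 ..] False) -- never terminates on the infinite list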