Newbie question: Where is StackOverflow on the Wiki?

When reading an article about tail recursion (http://themechanicalbride.blogspot.com/2007/04/haskell-for-c-3-programmers.html) I came across the following statements:

"If you can write a non-recursive function that uses the colon syntax it is probably better than a tail recursive one that doesn't. This is because Haskell's lazy evaluation enabled you to use the non-tail recursive version on an infinite stream without getting a stack overflow."

and

""Unfortunately", laziness "gets in the way". While transforming non-tail-recursive code to a tail-recursive form is important and useful for functional programming in general, dealing with laziness requires a little more care, and often "non-tail-recursive" versions are preferrable. flatten is an example of this, the first version is better in many ways. While I don't believe it happens in this case, oftentimes naively writing code "tail-recursively" in Haskell will actually -make- it overflow the stack. Another (actual) benefit of the first version of flatten is that it will work on infinite lists. http://www.haskell.org/hawiki/StackOverflow gives a simple example and some explanation."

Unfortunately I can't find the StackOverflow page anymore.

Now if I understand this correctly, this just means that when writing something like:

  foo n = if n<0 then [] else n : foo (n-1)
  bar n = aux 0 [] where aux i xs = if i>n then xs else aux (i+1) (i:xs)

foo is more efficient than bar, because lazy evaluation of foo just puts the delayed computation in the "cdr" of the list, while lazy evaluation of bar has to keep track of all the aux calls (the "closures"), which gives much more overhead, maybe even a stack overflow? Something like that?

Thanks,
Peter

On Sat, 2007-08-18 at 20:35 +0200, Peter Verswyvelen wrote:
When reading an article about tail recursion (http://themechanicalbride.blogspot.com/2007/04/haskell-for-c-3-programmers.html) I came across the following statements:
"If you can write a non-recursive function that uses the colon syntax it is probably better than a tail recursive one that doesn't. This is because Haskell's lazy evaluation enabled you to use the non-tail recursive version on an infinite stream without getting a stack overflow. "
and
""Unfortunately", laziness "gets in the way". While transforming non-tail-recursive code to a tail-recursive form is important and useful for functional programming in general, dealing with laziness requires a little more care, and often "non-tail-recursive" versions are preferrable. flatten is an example of this, the first version is better in many ways. While I don't believe it happens in this case, oftentimes naively writing code "tail-recursively" in Haskell will actually -make- it overflow the stack. Another (actual) benefit of the first version of flatten is that it will work on infinite lists. http://www.haskell.org/hawiki/StackOverflow gives a simple example and some explanation."
That page was migrated here: http://www.haskell.org/haskellwiki/Stack_overflow

Thanks. I got confused because the StackOverflow link on http://www.haskell.org/haskellwiki/HaWiki_migration is dead.

foo n = if n<0 then [] else n : foo (n-1)
bar n = aux 0 [] where aux i xs = if i>n then xs else aux (i+1) (i:xs)
that foo is more efficient than bar because lazy evaluation of foo just puts the delayed computation in the "cdr" of the list, while lazy evaluation of bar has to keep track of all aux calls (the "closures") which gives much more overhead, maybe even stack overflow? Something like that?
There is absolutely no problem with bar: it will not stack overflow, since it _is_ tail-recursive _and_ the comparison i > n forces the evaluation of i, avoiding the risk of constructing too big a thunk for the first parameter of aux. That kind of thunk is what can bring a stack overflow, as in this example:

  nonStrictLength n []     = n
  nonStrictLength n (_:xs) = nonStrictLength (n+1) xs

(Try nonStrictLength 0 [1..10000000] in GHCi to see the stack overflow; GHC's strictness analysis would avoid the problem with -O.)

foo is still more interesting than bar, though, since it will work on infinite lists; bar is too strict.

--
Jedaï
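A minimal sketch of the fix Jedaï alludes to (the name strictLength is not from the thread): forcing the accumulator with $! before the recursive call keeps the thunk from growing.

  strictLength :: Int -> [a] -> Int
  strictLength n []     = n
  strictLength n (_:xs) = (strictLength $! (n + 1)) xs

  -- strictLength 0 [1..10000000] runs in constant stack even in unoptimised
  -- GHCi, where nonStrictLength overflows; with -O, GHC's strictness analysis
  -- usually makes the two behave the same.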

Not really more efficient but plays to the language implementation's strengths. Imagine

  take 10 $ foo (10^9)

and

  take 10 $ bar (10^9)

bar wouldn't evaluate until the 10^9 was done. (And I just ground my laptop to a halt checking that. :) foo on the other hand would run out to 10^6 and then conveniently finish the rest of your program waiting for the need of the other 10^9-10 values. If you *always* needed the result of the 10^9 calculations then tail-recursion should be better since you won't be holding onto the evaluation frames.

-ljr

Peter Verswyvelen wrote:
Now if I understand this correctly, this just means that when writing something like:
foo n = if n<0 then [] else n : foo (n-1)
bar n = aux 0 [] where aux i xs = if i>n then xs else aux (i+1) (i:xs)
that foo is more efficient than bar because lazy evaluation of foo just puts the delayed computation in the "cdr" of the list, while lazy evaluation of bar has to keep track of all aux calls (the "closures") which gives much more overhead, maybe even stack overflow? Something like that?
Thanks, Peter
--
Lanny Ripple
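For concreteness, a small self-contained version of the experiment Lanny describes; foo and bar are copied from Peter's message, and the main wrapper is added here purely for illustration.

  foo :: Integer -> [Integer]
  foo n = if n < 0 then [] else n : foo (n - 1)

  bar :: Integer -> [Integer]
  bar n = aux 0 []
    where aux i xs = if i > n then xs else aux (i + 1) (i : xs)

  main :: IO ()
  main = do
    print (take 10 (foo (10 ^ 9)))  -- prints at once: only ten cells of the spine are ever forced
    print (take 10 (bar (10 ^ 9)))  -- aux must run all 10^9 iterations before take sees an element

The second print is the one that ground the laptop to a halt above.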

Lanny Ripple wrote:
Not really more efficient but plays to the language implementation's strengths.
Imagine
take 10 $ foo (10^9)
and
take 10 $ bar (10^9)
bar wouldn't evaluate until the 10^9 was done. (And I just ground my laptop to a halt checking that. :) foo on the other hand would run out to 10^6 and then conveniently finish the rest of your program waiting
s/10^6/10/ That's what I get for not proof-reading after making a change after the first proof-read.
for the need of the other 10^9-10 values. If you *always* needed the result of the 10^9 calculations then tail-recursion should be better since you won't be holding onto the evaluation frames.
-ljr
--
Lanny Ripple

On Mon, Aug 20, 2007 at 11:21:01AM -0500, Lanny Ripple wrote:
Not really more efficient but plays to the language implementation's strengths.
Imagine
take 10 $ foo (10^9)
and
take 10 $ bar (10^9)
bar wouldn't evaluate until the 10^9 was done. (And I just ground my laptop to a halt checking that. :) foo on the other hand would run out to 10^6 and then conveniently finish the rest of your program waiting for the need of the other 10^9-10 values. If you *always* needed the result of the 10^9 calculations then tail-recursion should be better since you won't be holding onto the evaluation frames.
Even if you did, in the presence of laziness it's not useful to make list producers tail recursive. Consider:

  sum = sum' 0

  sum' k []     = k
  sum' k (x:xs) = (sum' $! (k+x)) xs

  enum x y | x >= y    = []
           | otherwise = x : enum (x+1) y

  sum (enum 1 10)
  => sum' 0 (enum 1 10)
  => sum' 0 (1 : enum (1+1) 10)
  => (sum' $! (0+1)) (enum (1+1) 10)
  => sum' 1 (enum (1+1) 10)
  => sum' 1 (2 : enum (2+1) 10)
  => (sum' $! (1+2)) (enum (2+1) 10)
  => sum' 3 (enum (2+1) 10)
  => sum' 3 (3 : enum (3+1) 10)
  => (sum' $! (3+3)) (enum (3+1) 10)
  => sum' 6 (enum (3+1) 10)
  => sum' 6 (4 : enum (4+1) 10)
  => (sum' $! (6+4)) (enum (4+1) 10)
  => sum' 10 (enum (4+1) 10)
  => ...
  => sum' 36 (9 : enum (9+1) 10)
  => (sum' $! (36+9)) (enum (9+1) 10)
  => sum' 45 (enum (9+1) 10)
  => sum' 45 []
  => 45

(I need to find some way to automate making these trails :) )

It runs in constant space, despite the producer's non-tail-recursion.

Stefan
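To see why the producer is better left non-tail-recursive, compare enum with an accumulator-style, tail-recursive producer (enumTR and go are names invented for this sketch, not from the thread):

  enumTR :: Int -> Int -> [Int]
  enumTR x y = go x []
    where go i acc | i >= y    = reverse acc
                   | otherwise = go (i + 1) (i : acc)

  -- take 3 (enum 1 1000000000)   returns [1,2,3] immediately, in constant space
  -- take 3 (enumTR 1 1000000000) must first build (and reverse) the whole
  --                              billion-element list before take can run

The tail-recursive producer also cannot produce an infinite list at all, which is the point made earlier about foo and flatten.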

Stefan O'Rear wrote:
sum = sum' 0

sum' k []     = k
sum' k (x:xs) = (sum' $! (k+x)) xs

enum x y | x >= y    = []
         | otherwise = x : enum (x+1) y
sum (enum 1 10) => sum' 0 (enum 1 10) => sum' 0 (1 : enum (1+1) 10) => (sum' $! (0+1)) (enum (1+1) 10) => sum' 1 (enum (1+1) 10) =>
sum' 1 (2 : enum (2+1) 10) => (sum' $! (1+2)) (enum (2+1) 10) => sum' 3 (enum (2+1) 10) =>
sum' 3 (3 : enum (3+1) 10) => (sum' $! (3+3)) (enum (3+1) 10) => sum' 6 (enum (3+1) 10) =>
sum' 6 (4 : enum (4+1) 10) => (sum' $! (6+4)) (enum (4+1) 10) => sum' 10 (enum (4+1) 10) =>
...
sum' 36 (9 : enum (9+1) 10) => (sum' $! (36+9)) (enum (9+1) 10) => sum' 45 (enum (9+1) 10) => sum' 45 [] => 45
(I need to find some way to automate making these trails :) )
I did have a fairly small Tcl implementation for this... I don't have the code now, and I wrote it early in my Haskell career, so there's masses of stuff it didn't handle. (*cough* type classes)

Actually, I've often longed for some tool (maybe even integrated into Lambdabot) to show the reduction sequence of an arbitrary expression. But none exists, AFAIK...

Stefan O'Rear wrote:
sum (enum 1 10) => sum' 0 (enum 1 10) => ...
sum' 36 (9 : enum (9+1) 10) => (sum' $! (36+9)) (enum (9+1) 10) => sum' 45 (enum (9+1) 10) => sum' 45 [] => 45
(I need to find some way to automate making these trails :) )
Yes! We'd need such an automatic tool for the wikibook, too.

Regards,
apfelmus

Hi
sum (enum 1 10) => sum' 0 (enum 1 10) => ...
sum' 36 (9 : enum (9+1) 10) => (sum' $! (36+9)) (enum (9+1) 10) => sum' 45 (enum (9+1) 10) => sum' 45 [] => 45
(I need to find some way to automate making these trails :) )
Yes! We'd need such an automatic tool for the wikibook, too.
The problem is that Haskell is ridiculously complex, and the "small step" interpretation is much harder than you'd think. For example, sum may well be defined as foldl' (+) 0, which is a CAF, so gets reduced once. The 0 won't actually be a 0, but will be fromInteger 0, which will correspond to looking up an item in the dictionary and applying it. Dictionaries especially make the "simple" interpretation completely wrong.

It's easy to do informally, but once you start being more precise, it's very complex.

Thanks

Neil
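A hand-written sketch of the dictionary-passing translation Neil is describing, applied to a small sum. This is not GHC's actual Core; the names NumDict, intDict and sumL are invented here for illustration.

  data NumDict a = NumDict { plus :: a -> a -> a, fromInt :: Integer -> a }

  intDict :: NumDict Int
  intDict = NumDict (+) fromInteger

  -- sumL :: Num a => [a] -> a  turns into a function taking an extra
  -- dictionary argument, and the literal 0 really means fromInteger 0:
  sumL :: NumDict a -> [a] -> a
  sumL d = go (fromInt d 0)
    where
      go acc []     = acc
      go acc (x:xs) = go (plus d acc x) xs

  main :: IO ()
  main = print (sumL intDict [1 .. 10 :: Int])   -- 55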

Neil Mitchell wrote:
Hi
(I need to find some way to automate making these trails :) )
Yes! We'd need such an automatic tool for the wikibook, too.
The problem is that Haskell is ridiculously complex, and the "small step" interpretation is much harder than you'd think. For example, sum may well be defined as foldl' (+) 0, which is a CAF, so gets reduced once. The 0 won't actually be a 0, but will be fromInteger 0, which will correspond to looking up an item in the dictionary and applying it. Dictionaries especially make the "simple" interpretation completely wrong.
It's easy to do informally, but once you start being more precise, its very complex.
Like I said, I made a tool in Tcl that works, provided you write out partially-parsed Haskell by hand first for all the functions it calls (except a few basic math ops). Indeed, it was by playing with this tool that I first discovered why foldl' needs to exist! ;-)

So, making a tool that you can set up to quickly generate an automated trace is quite easy. If you want a tool where you can just casually toss arbitrary Haskell at it and expect sensible answers... hmm... that's going to be kinda tricky. (!) (I had a go at it myself, several times. Each time I was tripped up by being unable to correctly parse arbitrary Haskell code. I never even got to writing the execution engine...!)

I think a lot of people will agree that if such a tool existed it could be a *tremendous* help in many, many ways: a tool for experimenting and teaching, finding out why your really complicated function behaves wrong, checking out strictness properties, etc. But somebody has to write it first. It's ironic really; Haskell *looks* so easy to single-step. ;-)

Neil Mitchell wrote:
sum (enum 1 10) => sum' 0 (enum 1 10) => ...
sum' 36 (9 : enum (9+1) 10) => (sum' $! (36+9)) (enum (9+1) 10) => sum' 45 (enum (9+1) 10) => sum' 45 [] => 45
(I need to find some way to automate making these trails :) ) Yes! We'd need such an automatic tool for the wikibook, too.
The problem is that Haskell is ridiculously complex, and the "small step" interpretation is much harder than you'd think. For example, sum may well be defined as foldl' (+) 0, which is a CAF, so gets reduced once. The 0 won't actually be a 0, but will be fromInteger 0, which will correspond to looking up an item in the dictionary and applying it. Dictionaries especially make the "simple" interpretation completely wrong.
It's easy to do informally, but once you start being more precise, its very complex.
Yeah, the precise details may vary, even :) But for teaching, an automatic tool that does graph reduction would be great. I don't mind if it's sloppy (directly applying definitions and pattern matching vs. everything being a lambda abstraction) and only does simply typed lambda calculus (no type applications, no type classes).

Regards,
apfelmus
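In that sloppy spirit, a tiny sketch of such a stepper: normal-order, substitution-based rather than true graph reduction, untyped for brevity, and it assumes all binder names are distinct so substitution never has to rename. None of this code is from the thread.

  data Term = Var String | Lam String Term | App Term Term

  -- One step of normal-order reduction; Nothing means the term is in normal form.
  step :: Term -> Maybe Term
  step (App (Lam x b) a) = Just (subst x a b)
  step (App f a)         = case step f of
                             Just f' -> Just (App f' a)
                             Nothing -> fmap (App f) (step a)
  step (Lam x b)         = fmap (Lam x) (step b)
  step (Var _)           = Nothing

  -- Naive substitution: safe only because binder names are assumed distinct.
  subst :: String -> Term -> Term -> Term
  subst x a (Var y)   = if x == y then a else Var y
  subst x a (Lam y b) = if x == y then Lam y b else Lam y (subst x a b)
  subst x a (App f g) = App (subst x a f) (subst x a g)

  -- The "trail": the term and every intermediate step down to normal form.
  trail :: Term -> [Term]
  trail t = t : maybe [] trail (step t)

  pretty :: Term -> String
  pretty (Var x)   = x
  pretty (Lam x b) = "\\" ++ x ++ ". " ++ pretty b
  pretty (App f a) = "(" ++ pretty f ++ " " ++ pretty a ++ ")"

  -- (\x. \y. x) a b  reduces in two steps to a
  main :: IO ()
  main = mapM_ (putStrLn . pretty)
               (trail (App (App (Lam "x" (Lam "y" (Var "x"))) (Var "a")) (Var "b")))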

apfelmus wrote:
Yeah, the precise details may vary, even :) But for teaching, an automatic tool that does graph reduction would be great. I don't mind if it's sloppy (directly apply definitions & pattern matching VS everything is a lambda abstraction) and only does simply typed lambda calculus (no type applications, no type classes).
Well come ON people, there's *got* to be enough big-wigs on this list to put *something* together! ;-)

Like I said, a while back I wrote something in Tcl that would produce a reduction sequence for the standard example code for quicksort or the Fibonacci numbers. All very simplistic. (I wrote it in *Tcl*! It doesn't even have *types*!!) No type checking. Use any constructor names you want. Typeclasses could *never* have worked. Also, I later learned about a whole bunch of Haskell features that would have required a massive refactor to implement. (*cough* curried functions, case-expressions, let-expressions, lambda functions...)

Anyway, I'll see if I can't put something skeletal together sometime. It will probably only work 50% of the time, but maybe that will be useful to somebody...

Hi
Yeah, the precise details may vary, even :) But for teaching, an automatic tool that does graph reduction would be great. I don't mind if it's sloppy (directly apply definitions & pattern matching VS everything is a lambda abstraction) and only does simply typed lambda calculus (no type applications, no type classes).
Well come ON people, there's *got* to be enough big-wigs on this list to put *something* together! ;-)
It's been done, but never got to a released state:

http://haskell.org/haskellwiki/Haskell_Equational_Reasoning_Assistant
http://www.cs.york.ac.uk/fp/darcs/proof/ (screenshot: http://www-users.cs.york.ac.uk/~ndm/temp/proof.png)

And neither of them will be of much use to a beginner. Haskell is too complex for a beginner to reason about formally, and reasoning "informally" isn't very easy to do, because it's not a precise description of what to do.

Thanks

Neil

On 8/20/07, Stefan O'Rear wrote:
... (I need to find some way to automate making these trails :) ) ...
I think you can come a long way with the debugger in GHC HEAD. It provides a :trace command that, when applied to an expression with some breakpoint in it, remembers the history of evaluation steps. You can view the history with :hist. See:

http://www.haskell.org/ghc/dist/current/docs/users_guide/ghci-debugger.html#...

regards,
Bas
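A sketch of that workflow on Stefan's example, assuming the module lives in a file named Sum.hs (the file name is made up, and the session output is omitted since it varies by GHC version):

  ghci Sum.hs
  :break sum'                -- stop whenever sum' is entered
  :trace sum (enum 1 10)     -- evaluate with the history recorder switched on
  :hist                      -- list the recorded evaluation steps
  :back                      -- step backwards through them one at a time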
participants (9)
- Andrew Coppin
- apfelmus
- Bas van Dijk
- Chaddaï Fouché
- Derek Elkins
- Lanny Ripple
- Neil Mitchell
- Peter Verswyvelen
- Stefan O'Rear