Understanding Haskell fcn calls

When I learn a new language, I like to look at the assembly code generated for some simple function calls. I tried ghc -S on the following program:

    add' x y = x + y

    main = do print $ add' 1 2

The result of ghc -S add.hs was 301 lines of assembly code. Adding an explicit type annotation for add' brought it down to 252 lines, but the code was still beyond my meager ability to take in assembly.

Is there a good reference that gives a simple explanation of how Haskell function calls work? This seems crucial to understanding how to efficiently use the language, given what I've read about tail call optimization.

Thanks,
Lee Short
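For reference, a minimal sketch of the annotated variant mentioned above; the choice of Int for the signature is an assumption, but it illustrates why the annotation shrinks the output: with a monomorphic type, GHC can compile add' for Int directly instead of generating dictionary-passing code for the polymorphic Num version.

    -- add.hs, with an explicit monomorphic signature (Int is assumed here)
    add' :: Int -> Int -> Int
    add' x y = x + y

    main :: IO ()
    main = print (add' 1 2)

    -- Compile with:  ghc -S add.hs   (the assembly ends up in add.s)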

On Wednesday 16 February 2011 23:49:51, blackcat@pro-ns.net wrote:
> Is there a good reference that gives a simple explanation of how Haskell function calls work?
I don't know one.
> This seems crucial to understanding how to efficiently use the language,
No, fortunately it's not necessary. To use the language efficiently, you must understand laziness (not necessarily completely, though): where it's a Good Thing™ and where it's not.
> given what I've read about tail call optimization.
Which is far less important in Haskell than in strict languages. Due to laziness, a recursive (but not tail-recursive) function can deliver partial results before entering a recursive call, which is often better than tail recursion. Consider

    map :: (a -> b) -> [a] -> [b]
    map f (x:xs) = f x : map f xs
    map _ []     = []

When you call map f on a nonempty list (x:xs), you immediately get the result: a cons cell with two children, one the thunk for how to calculate f x, the other the thunk for how to calculate map f xs. Thus, when you consume the result sequentially, the computation can run in constant space; each list element can be garbage collected as soon as it is consumed, before the next element comes alive (of course, ordinarily the garbage collector will not run that frequently, so there will be a handful of values lingering after being consumed).

With a tail-recursive function, the entire result has to be in memory at once, since it can only be returned after the computation is complete. Tail recursion is, however, good if no partial results are possible, as for the sum of a list of Ints (but then you need to make sure that the accumulator is sufficiently evaluated, or you'll just create a huge thunk which eats your stack when it is finally evaluated); a strict version is sketched below.

As a rule of thumb, tail recursion is for strict stuff, not for lazy things. There are lots of lazy things in Haskell, so tail recursion plays a comparatively small role.
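To make that last caveat concrete, here is a minimal sketch (the names sumLazy and sumStrict are made up for illustration, and it assumes GHC's BangPatterns extension; seq would do the same job): a tail-recursive sum over Ints, once with a lazy accumulator that builds a huge thunk, and once with a strict accumulator that runs in constant space.

    {-# LANGUAGE BangPatterns #-}

    -- Lazy accumulator: nothing forces acc, so the loop builds a chain of
    -- (...((0+1)+2)+...) thunks that is only evaluated at the very end,
    -- which is what eats the stack.
    sumLazy :: [Int] -> Int
    sumLazy = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- Strict accumulator: the bang pattern forces acc at every step,
    -- so the sum runs in constant space.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go !acc []     = acc
        go !acc (x:xs) = go (acc + x) xs

    main :: IO ()
    main = print (sumStrict [1 .. 1000000])

In practice you would usually reach for foldl' from Data.List, which forces the accumulator in the same way.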
participants (2):
- blackcat@pro-ns.net
- Daniel Fischer