Yes, the goal isn't so much to improve the complexity (both are O(1)) as to reduce the constant factor on that O(1).

In an inner loop like that, allocator/GC calls dominate the cost of the program by far.  If you can remove them, you've improved the performance of the program by 10-100x.

In the case where everything is Int, you can even unbox and keep everything in registers, which gives you performance comparable to a hand-tuned C or assembly loop.
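
For illustration, here is a rough sketch -- not GHC's actual output, just
a hypothetical hand-written equivalent -- of the kind of nested loop that
fusion aims to produce for the example quoted below:

    {-# LANGUAGE BangPatterns #-}

    -- Hand-written equivalent of the fused loop: a tail-recursive
    -- nested loop over Ints with no intermediate lists.  GHC can unbox
    -- the counters and the accumulator into registers.
    fFused :: Int -> Int
    fFused n = outer 1 0
      where
        outer !k !acc
          | k > n     = acc
          | otherwise = inner 1 acc
          where
            inner !m !acc'
              | m > k     = outer (k + 1) acc'
              | otherwise = inner (m + 1) (acc' + k * m)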

  -- ryan

On Tue, May 17, 2011 at 10:55 PM, Roman Cheplyaka <roma@ro-che.info> wrote:
If one thinks of Haskell data structures as ordinary data
structures, fusion seems like a great deal -- instead of producing
intermediate lists and possibly running out of memory, we just run a
loop and use a constant amount of space.

But Haskell data structures are quite different -- they are produced
on demand. Consider the example from the Stream Fusion paper[1]:

   f :: Int -> Int
   f n = sum [ k * m | k <- [1..n], m <- [1..k] ]

Assuming sum is a strict left fold, it consumes the list elements
one by one and runs in constant space.
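
(Concretely, the strict left fold I mean is something like foldl' from
Data.List; a minimal sketch, with sumStrict as a made-up name:)

    import Data.List (foldl')

    -- sum as a strict left fold: one element at a time, constant space
    sumStrict :: [Int] -> Int
    sumStrict = foldl' (+) 0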

The list part can be transformed to

   foldr (++) [] $ map (\k -> map (\m -> k*m) [1..k]) [1..n]

which produces its elements one by one. So the whole thing should
probably run in constant space as well.
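
(One way to check this empirically is to look at the RTS statistics for
an unfused version; a throwaway module along these lines -- the module
name and n = 10000 are arbitrary -- compiled with
"ghc -O0 -rtsopts Unfused.hs" so GHC's own fusion rules don't kick in,
and run with "./Unfused +RTS -s":)

    module Main where

    import Data.List (foldl')

    -- Unfused pipeline: a strict left fold over the lazily produced
    -- list.  Peak residency stays small, but every element still costs
    -- cons cells and thunks, which shows up in the "bytes allocated"
    -- figure reported by +RTS -s.
    f :: Int -> Int
    f n = foldl' (+) 0
            (foldr (++) [] (map (\k -> map (\m -> k*m) [1..k]) [1..n]))

    main :: IO ()
    main = print (f 10000)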

Of course I don't claim that fusion is useless -- I'm just trying to
understand the problem it solves. Are we saving a few closures and
cons cells here?

[1] Duncan Coutts, Roman Leshchinskiy, Don Stewart.
   Stream Fusion: From Lists to Streams to Nothing at All.

--
Roman I. Cheplyaka :: http://ro-che.info/
Don't worry what people think, they don't do it very often.
