
G'day all.
Quoting Daniel Fischer:

> facr will build a thunk of the form 1 * (2 * (3 * (4 * (... * (n * 1) ...)))),
> [...] facl will build a thunk of the form (...(((1 * 1) * 2) * 3) * ...) * n,
> which will also blow the stack if n is large enough. [...] You can get that
> behaviour without relying on the optimiser by using the strict left fold
> foldl' from Data.List, which forces evaluation at each step.
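To make the difference concrete, here's a minimal sketch (the names facl and facl' are my own) contrasting the lazy and strict left folds on the factorial:

    import Data.List (foldl')

    -- facl builds a chain of unevaluated (*) thunks in the accumulator,
    -- which can blow the stack for large n. foldl' forces the
    -- accumulator at every step, so facl' runs in constant stack space.
    facl, facl' :: Integer -> Integer
    facl  n = foldl  (*) 1 [1..n]  -- lazy accumulator: thunk chain
    facl' n = foldl' (*) 1 [1..n]  -- strict accumulator: safe

Both compute the same result; only the evaluation order of the accumulator differs.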
There's one more possibility you should be aware of, assuming you're trying to compute large factorials, and that's to use a binary tree-style recursion pattern. This one is bottom-up, but you could also do top-down:

    import Data.List (foldl')

    {-# SPECIALIZE binaryProduct :: [Integer] -> Integer #-}
    binaryProduct :: (Num a) => [a] -> a
    binaryProduct []  = 1
    binaryProduct [x] = x
    binaryProduct xs  = binaryProduct (bip' xs)
      where
        -- Multiply adjacent pairs, halving the list at each pass.
        bip' (x1:x2:xs) = x1*x2 : bip' xs
        bip' xs         = xs

    fac1 :: Integer -> Integer
    fac1 n = foldl' (*) 1 [1..n]

    fac2 :: Integer -> Integer
    fac2 n = binaryProduct [1..n]
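For completeness, here is one way the top-down variant might look (this is my own sketch, not code from the original message): split the range [lo..hi] in half and multiply the two sub-products recursively.

    -- Top-down binary-split product of [lo..hi]; facTD n computes n!.
    facTD :: Integer -> Integer
    facTD n = go 1 n
      where
        go lo hi
          | lo >  hi  = 1                         -- empty range
          | lo == hi  = lo                        -- single element
          | otherwise = go lo mid * go (mid+1) hi -- balanced split
          where mid = (lo + hi) `div` 2

It has the same balanced-multiplication shape as binaryProduct, just built by dividing the range rather than by pairing up list elements.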
The speed difference for large n is remarkable:

    Factorial> fac1 100000 == fac2 100000
    True
    Factorial> :set +s
    Factorial> fac1 100000 `seq` ()
    ()
    (6.27 secs, 9304407772 bytes)
    Factorial> fac2 100000 `seq` ()
    ()
    (0.31 secs, 20527124 bytes)
(The `seq` () idiom, by the way, forces computation of the result without incurring the expense of calling "show". The "show" function on Integers is quite slow for very large numbers, and the factorial of 100,000 is a very large number.)

As you can probably guess, the speedup is not due to lazy evaluation. A good exercise for bright students who know something about computer arithmetic: why is the binary tree-style version so much faster?

Cheers,
Andrew Bromage