
I'm putting together a script to gather run-time stats for some functions I'm working with, and I'm having a terrible time. My strategy is to evaluate a function a number of times and compute the difference between the elapsed CPU time before and after the repeated calls.
    import System.CPUTime

    timeNReps :: (a -> b) -> a -> Int -> FilePath -> IO ()
    timeNReps func arg reps fileName = do
        t0 <- System.CPUTime.getCPUTime
        runNReps func arg reps
        t1 <- System.CPUTime.getCPUTime
        appendFile fileName (showMS (t1 - t0) ++ "\n")
      where showMS n = show (n `quot` 1000000000)
showMS just converts the picosecond result of getCPUTime to milliseconds and stringifies it. runNReps is an IO action (a do sequence) that is intended to apply the function and then tail-call itself a given number of times:
    runNReps :: (Int -> a) -> Int -> Int -> IO ()
    runNReps f x todo
        | todo > 0  = do let junk = f x
                         runNReps f x (todo - 1)
        | otherwise = return ()
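One thing I mean to try, on the guess that the let binding is simply never demanded, is forcing each application with `seq`. This is only a sketch, and I'm assuming that evaluation to weak head normal form is enough to trigger the real work:

```haskell
-- Hypothetical variant: `seq` forces (f x) to weak head normal form
-- before the recursive call, so the application can't be discarded
-- unevaluated. (Whether WHNF does enough work depends on f.)
runNRepsSeq :: (Int -> a) -> Int -> Int -> IO ()
runNRepsSeq f x todo
    | todo > 0  = f x `seq` runNRepsSeq f x (todo - 1)
    | otherwise = return ()
```

I would call it from ghci the same way, e.g. `runNRepsSeq test 30 50`.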
Apparently runNReps doesn't apply f to x at all! I've called my test function with a suitable argument from the top level (within ghci) and it takes ~20 sec. wall time to return; yet when I evaluate "runNReps test arg 1" it returns immediately. When I use this within my timing script, the output indicates that the calls for all arguments between 1 and 50 take about the same (very small) amount of time, but I know, both from theory and from experiments with Scheme versions, that my test function's running time is exponential in its argument. I'm using GHC 6.0.1 under Mandrake 9.1 on a 1.8 GHz Pentium box with 256MB RAM. Any idea where I'm going wrong?
 -- Bill Wood
    bill.wood@acm.org
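(In case it helps to see what I suspect might be going on: my guess is that a `let` in a do block binds without evaluating, so even a binding to `error` would run fine as long as it is never demanded. A sketch of that suspicion:)

```haskell
-- Sketch of the suspected behaviour: the let-bound `junk` is never
-- demanded, so the error hidden inside it is never raised.
junkDemo :: IO ()
junkDemo = do
    let junk = error "never evaluated" :: Int
    putStrLn "finished without touching junk"
```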