
I've come up with three different approaches to solving the same problem in Haskell, and I'd like to compare them in terms of reductions, memory usage, and overall big-O complexity.
What's the quickest way to gather these stats? I usually use the GHC compiler, but I also have Hugs installed. The big-O complexity probably has to be worked out by hand, but maybe there's a tool out there that does it automagically.
Apart from the "normal" ways (profiling, Unix 'time', GHC's +RTS -sstderr), here's another one I've been using recently: cachegrind. It's the wonderful cache profiling extension by Nick Nethercote that comes with Julian Seward's Valgrind. The great thing is that you don't even need to recompile the program - you just do 'cachegrind <program>', and it runs (very slowly) and outputs reliable cache statistics including how many instructions were executed. Get it from http://developer.kde.org/~sewardj/ Oh, it only works on Linux BTW. Cheers, Simon