
Indeed, though I don't think this is the case, because I get lots of lag even when no logs are written. In the part you deleted I mentioned one source of lag that does not disappear when no logs are written, and a way of using profiling cost centers to track down other sources (the ones I mentioned accounted for 2/3 of lag in the "profile_logging" profile, according to "-hblag -hc -L40" on SCC-annotated source).
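To illustrate the mechanics for anyone following along (just a sketch; the module and cost-center names are placeholders, not your actual test code):

    module Main where

    -- a manually named cost center on the allocation we want to track
    main :: IO ()
    main = print ({-# SCC "build" #-} sum [1 .. 1000000 :: Int])

    -- compile for profiling, then run with the flags mentioned above:
    --   ghc -prof -auto-all Main.hs    (-fprof-auto on newer GHCs)
    --   ./Main +RTS -hblag -hc -L40
    --   hp2ps -c Main.hp               (renders the heap profile)

The -hc gives a cost-center breakdown, -hblag restricts it to cells in the lag phase (allocated, but not yet used), and -L40 just keeps the cost-center labels from being truncated.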
You mean !s on the intermediate numbers?
No, I wrote about your 'm' parameter to 'runprof', which gets passed all the way through the recursion tree of run_lots before it finally gets used at the leaf level. If I recall correctly, I put one SCC on the log constructors and another on the actual parameter passed to runprof (see the sketch below). The comments on strictness were less about lag than about total memory usage: unevaluated computations piling up in logs or recursion parameters are a frequent source of memory problems, ranging from dramatic slowdown to out-of-memory termination. Even for your simple test, that changes the profile (the graphic is auto-scaled, so it is the numbers that matter).
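A hedged reconstruction of what I mean ('runprof' and 'run_lots' are from your test, but their bodies and the SCC names here are my guesses, not your code):

    data LogEntry = LogEntry String

    runprof :: Int -> IO ()
    runprof m = run_lots 10 ({-# SCC "runprof_arg" #-} m)

    run_lots :: Int -> Int -> IO ()
    run_lots 0 m = print m    -- 'm' is only used here, at the leaf
    run_lots n m = do
        writeLog ({-# SCC "log_cons" #-} LogEntry ("step " ++ show n))
        run_lots (n - 1) m    -- 'm' dragged along, possibly unevaluated

    writeLog :: LogEntry -> IO ()
    writeLog (LogEntry s) = appendFile "test.log" (s ++ "\n")

    main :: IO ()
    main = runprof (6 * 7)    -- the argument arrives as a thunk

The two SCCs let the -hc/-hblag profile attribute the lagging cells to the log constructors and to the dragged parameter separately. Forcing the parameter in the recursive call, e.g. run_lots (n - 1) $! m, keeps thunks from piling up behind it.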
I could have sworn I tried that, but no luck. Thanks for reminding me about manual SCC pragmas; somehow I totally forgot you could add your own. Just out of curiosity, what effect could "return $! 1" have? A constant should never be a thunk, so 'seq' on it should have no effect, right?
I was surprised by that myself. In principle, numeric constants translate to calls to fromInteger (Haskell 98 report, section 3.2), and can return anything with a Num instance (such as the literal '1' translating into an infinite list of ones), so they can't simply be pre-evaluated. But you did specify the type to be Int, so the compiler could have avoided the thunk (one would need to look at GHC's output to check). Then again, we were looking at lag, not at heap size, and those 1s (small, but lots of them) were built well before they were used, which is exactly what the lag phase measures.
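To make that concrete, here is a toy (deliberately law-breaking) instance under which the literal '1' really is an infinite list of ones; illustrative only, not something for real code:

    -- works in modern GHC; Haskell 98's Num also required Eq and Show
    instance Num a => Num [a] where
        fromInteger n = repeat (fromInteger n)  -- '1' becomes repeat 1
        (+)    = zipWith (+)
        (*)    = zipWith (*)
        negate = map negate
        abs    = map abs
        signum = map signum

    ones :: [Int]
    ones = 1                      -- desugars to 'fromInteger 1'

    main :: IO ()
    main = print (take 5 ones)    -- prints [1,1,1,1,1]

So until the type is fixed, '1' stands for an arbitrary fromInteger call; at Int the thunk is just fromInteger 1 :: Int, and that is what the $! forces.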
Yes, you are right of course, and thank you for the help. It's just frustrating when every step toward something simpler instead brings new problems out of the woodwork... but I suppose with enough experience I will begin to understand those too. I will resume my testing when I get some time again!
Understood. You don't have to understand everything at once, and simplified test setups often have issues that don't occur in the real application. But analyzing real applications needs some intuition about what might be going on, and that is easier to acquire in simple test setups. There, you can apply the old Holmes maxim of debugging: once you've excluded everything that can't be the problem, whatever is left, however unlikely or hard to understand, has to be it (with apologies to Doyle ;-).

Claus