
On 06/08/2014 21:40, Sergei Trofimovich wrote:
I think I know what is happening. According to perf, the benchmark spends 34%+ of its time in garbage collection ('perf record -- $args' / 'perf report'):
    27,91%  test  test  [.] evacuate
     9,29%  test  test  [.] s9Lz_info
     7,46%  test  test  [.] scavenge_block
And the whole benchmark runs only a tiny bit over 300ms, which is exactly in line with the major GC timer (0.3s).
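As a cross-check on perf's GC-time figure, the RTS can report its own GC time from inside the program. Below is only a minimal sketch, assuming a recent base where GHC.Stats exports getRTSStats (the 2014-era API was different); the counters are only populated when the binary is run with +RTS -T, and in a real benchmark the report would go at the end of the workload:

    import GHC.Stats (RTSStats (..), getRTSStats, getRTSStatsEnabled)

    main :: IO ()
    main = do
      enabled <- getRTSStatsEnabled
      if not enabled
        then putStrLn "rerun with +RTS -T to enable RTS statistics"
        else do
          s <- getRTSStats
          -- fraction of wall-clock time spent in the collector so far
          let gcFraction = fromIntegral (gc_elapsed_ns s)
                           / fromIntegral (elapsed_ns s) :: Double
          putStrLn ("GC time: " ++ show (gcFraction * 100) ++ "%")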
0.3s is the *idle* GC timer; it has no effect when the program is running normally. There's no timed GC or anything like that. It sometimes happens that a tiny change somewhere tips a program over into doing one more major GC, though.
If we run

    $ time ./test inverter 345 10n 4u 1>/dev/null

multiple times (with my patch reverted), the timings are heavily unstable:

    real 0m0.319s
    real 0m0.305s
    real 0m0.307s
    real 0m0.373s
    real 0m0.381s

which is roughly 80ms of drift!
Let's try to kick the major GC earlier instead of having it run right at runtime shutdown:

    $ time ./test inverter 345 10n 4u +RTS -I0.1 1>/dev/null
    real 0m0.304s
    real 0m0.308s
    real 0m0.302s
    real 0m0.304s
    real 0m0.308s
    real 0m0.306s
    real 0m0.305s
    real 0m0.312s

which is way more stable behaviour.
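A related in-program alternative, if we want the collection to happen at a point we control rather than via the idle timer, is System.Mem.performMajorGC, which forces a major collection explicitly. A minimal sketch; the summation is just a stand-in for the real benchmark body:

    import Data.List (foldl')
    import System.Mem (performMajorGC)

    -- stand-in workload; the real benchmark body would go here
    workload :: Int
    workload = foldl' (+) 0 [1 .. 10000000]

    main :: IO ()
    main = do
      print workload
      -- force the major collection at a point of our choosing instead of
      -- leaving it to the idle timer or to RTS shutdown
      performMajorGC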
Thus my theory is that my change stepped the program from "one GC run per benchmark run 90% of the time" to "two GC runs per benchmark run 90% of the time".
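That theory is directly testable by asking the RTS how many major collections it actually performed (the same numbers also appear in the +RTS -s summary). A minimal sketch reusing the GHC.Stats API from the earlier example, again only meaningful under +RTS -T; major_gcs counts major collections and gcs counts all collections:

    import GHC.Stats (RTSStats (..), getRTSStats)

    -- call this at the very end of the benchmark, running with +RTS -T
    reportGCs :: IO ()
    reportGCs = do
      s <- getRTSStats
      putStrLn ("major GCs: " ++ show (major_gcs s))
      putStrLn ("total GCs: " ++ show (gcs s))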
Is this program idle? I have no idea why this might be happening! If the program is busy computing stuff, the idle GC should not be firing. If it is, that's a bug.

Cheers,
Simon