
Here's the example program: https://gist.github.com/1cbe113d2c79e2fc9d2b

The program maintains a list inside an STM TVar.
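In essence it does something like the following (a simplified sketch, assuming two million Ints are consed onto the list; see the gist for the exact code):

module Main where

import Control.Concurrent.STM
import Control.Monad (forM_)

-- Assumed sketch (the gist may differ in details): cons 2,000,000 Ints onto a
-- list held in a TVar, then force the spine so the whole list stays live.
main :: IO ()
main = do
  tv <- newTVarIO ([] :: [Int])
  forM_ [1 .. 2000000 :: Int] $ \i ->
    atomically $ do
      xs <- readTVar tv
      writeTVar tv (i : xs)
  xs <- readTVarIO tv
  print (length xs)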
When I run the program, I get the following statistics:

./Test +RTS -s
     176,041,728 bytes allocated in the heap
     386,794,976 bytes copied during GC
      69,180,224 bytes maximum residency (7 sample(s))
      42,651,080 bytes maximum slop
             179 MB total memory in use (0 MB lost due to fragmentation)
                                    Tot time (elapsed)  Avg pause  Max pause
  Gen  0       336 colls,     0 par    0.44s    0.44s     0.0013s    0.0033s
  Gen  1         7 colls,     0 par    0.39s    0.40s     0.0570s    0.1968s

  INIT    time    0.00s  (  0.00s elapsed)
  MUT     time    0.23s  (  0.23s elapsed)
  GC      time    0.83s  (  0.84s elapsed)
  EXIT    time    0.00s  (  0.00s elapsed)
  Total   time    1.06s  (  1.07s elapsed)

  %GC     time      77.9%  (78.3% elapsed)

  Alloc rate    749,153,093 bytes per MUT second

  Productivity  22.1% of total user, 22.0% of total elapsed

How come this program uses 179 MB of memory? I'm on a 64-bit machine, where 2'000'000 boxed integers (two words each) use 32 MB. Where does the overhead come from?

-- Johan Brinch