
Bulat Ziganshin
now I see what you mean. No, I mean a trivial transformation. #650 is about slow GC. Why is it slow? Because once you make any update to the array, the entire array is marked as updated and scanned on the next minor GC (which occurs after every 512 kbytes allocated, AFAIK). Let's replace a big array (say, of 10,000 elements) with an array of 100 arrays of 100 elements each. Then, between minor GCs, only some of the arrays will have been changed, and the total amount of memory that has to be scanned becomes smaller.
I actually tried this, and modified Data.HashTable to use a two-tiered
chunked dynamic array as you suggest. In some limited testing it was
only about 5% faster, so I gave up on it -- you get some GC time back
but you also pay a significant indirection penalty by adding an extra
cache line miss for every operation.
G
--
Gregory Collins
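For reference, the two-tiered layout under discussion can be sketched roughly as below. This is only an illustration of the idea, not Data.HashTable's actual code; the names (`ChunkedArray`, `newChunked`, `readChunked`, `writeChunked`) and the chunk size are made up for the example. The comments mark where the trade-off both messages describe shows up: writes dirty only one small chunk (less to scan at minor GC), but every access goes through an extra array read (the indirection / cache-miss penalty Gregory measured).

```haskell
import Data.Array.IO

-- Illustrative chunk size, per Bulat's 100 x 100 example.
chunkSize :: Int
chunkSize = 100

-- A top-level array of small mutable chunks (hypothetical type).
newtype ChunkedArray e = ChunkedArray (IOArray Int (IOArray Int e))

newChunked :: Int -> e -> IO (ChunkedArray e)
newChunked n x = do
  let nChunks = (n + chunkSize - 1) `div` chunkSize
  chunks <- mapM (\_ -> newArray (0, chunkSize - 1) x) [1 .. nChunks]
  ChunkedArray <$> newListArray (0, nChunks - 1) chunks

readChunked :: ChunkedArray e -> Int -> IO e
readChunked (ChunkedArray top) i = do
  chunk <- readArray top (i `div` chunkSize)  -- extra indirection: a second
  readArray chunk (i `mod` chunkSize)         -- array read on every access

writeChunked :: ChunkedArray e -> Int -> e -> IO ()
writeChunked (ChunkedArray top) i x = do
  chunk <- readArray top (i `div` chunkSize)
  writeArray chunk (i `mod` chunkSize) x      -- only this chunk is dirtied,
                                              -- so minor GC scans less
```

With this layout, a minor GC only rescans the chunks written since the last collection instead of the whole 10,000-element array, which is the GC saving Bulat predicts and the ~5% net win Gregory observed once the per-operation indirection cost is paid.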