
On 02/03/2010 14:11, Malcolm Wallace wrote:
>> Both concurrent GC and incremental GC tend to add overheads to the mutator, because they need a read barrier. There was an incremental GC for GHC once [1], taking advantage of the built-in read barrier that we have whereby most closures are "entered"
>
> Was there a specific reason why that GC implementation chose to use a read barrier rather than a write barrier? I would have thought that in general, a write barrier is cheaper to implement. Doesn't GHC update fewer thunks than it enters?
If the GC wants to move objects, then a read barrier is needed in order to figure out whether you're looking at an object that has moved or not. If you don't move objects, then you can get away with only a write barrier - the write barrier is needed so that you can remember which objects or fields might need to be re-scanned because they were mutated.

The choice made in the Non-stop Haskell paper was to take advantage of the existing read barrier so that we could move objects. Some schemes copy objects rather than move them, and then you can get away without a read barrier, but then your write barrier has to keep the two copies in sync.

Tradeoffs are everywhere; GC is a tricky business!

Cheers,
	Simon
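[Illustrative note: below is a minimal sketch in C of the two barrier styles described above. It is not GHC RTS code; the names Object, forward_to, remembered_set and remembered_set_add are invented for the example. The read barrier follows a forwarding pointer when the collector has moved an object; the write barrier records a mutated object so a non-moving collector knows to re-scan it.]

    /* A minimal sketch of the two barrier styles (illustrative only;
     * not GHC RTS code). "Object", "forward_to", "remembered_set" and
     * "remembered_set_add" are invented names. */

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Object {
        bool forwarded;            /* set once the collector has moved this object   */
        struct Object *forward_to; /* new location; valid only when forwarded is set */
        struct Object *field;      /* an example mutable pointer field               */
    } Object;

    /* Read barrier, needed when the collector moves objects: every read
     * must check for a forwarding pointer and follow it if present. In
     * GHC the check can piggy-back on the existing convention that most
     * closures are "entered". */
    static inline Object *read_barrier(Object *obj)
    {
        return obj->forwarded ? obj->forward_to : obj;
    }

    /* Write barrier, sufficient for a non-moving collector: reads are
     * unchecked, but every pointer write records the mutated object so
     * the collector knows it may need to be re-scanned. */
    #define REMEMBERED_MAX 1024
    static Object *remembered_set[REMEMBERED_MAX];
    static size_t remembered_count = 0;

    static void remembered_set_add(Object *obj)
    {
        if (remembered_count < REMEMBERED_MAX)
            remembered_set[remembered_count++] = obj;
    }

    static inline void write_barrier(Object *obj, Object *new_value)
    {
        remembered_set_add(obj);   /* remember obj for re-scanning */
        obj->field = new_value;
    }

[The cost asymmetry Malcolm alludes to is visible in the sketch: read_barrier runs on every pointer dereference, whereas write_barrier runs only on mutation, which is why a write barrier is usually cheaper unless, as in GHC, the read check can piggy-back on the existing closure-entry convention.]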