
On 22/06/2009 23:48, Bulat Ziganshin wrote:
Hello Marcin,
Tuesday, June 23, 2009, 2:31:13 AM, you wrote:
Now this took an odd turn, because the simulation started crashing with out-of-memory errors _after_ completing (during bz2 compression). I'm fairly certain this is a GC/FFI bug, because increasing the max heap didn't help. Moving the bz2 compression into a separate process provided a reasonable workaround. What I think is happening is that after the simulation completes, almost all of the available memory (within the -M limit) is filled with garbage. Then I run bzlib, which tries to allocate more memory (from behind the FFI?) to compress the results, and that allocation causes an out-of-memory error instead of triggering a garbage collection.
I can propose a quick fix: allocate 10 MB using mallocBytes before starting your algorithm, and free it just before starting bzlib. It may help.
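The ballast trick suggested above might look like the following; this is a minimal sketch, where `runSimulation` and `compressResults` are hypothetical stand-ins for the real simulation and bzlib call:

```haskell
import Foreign.Marshal.Alloc (mallocBytes, free)

-- Size of the ballast block reserved up front (10 MB, per the suggestion).
ballastBytes :: Int
ballastBytes = 10 * 1024 * 1024

main :: IO ()
main = do
  -- Reserve the ballast before the memory-hungry work begins, so that
  -- this much C-heap memory is guaranteed to be available later.
  ballast <- mallocBytes ballastBytes
  runSimulation
  -- Hand the 10 MB back just before the FFI library needs to allocate.
  free ballast
  compressResults

-- Hypothetical placeholders for the real work.
runSimulation, compressResults :: IO ()
runSimulation   = return ()
compressResults = return ()
```

The idea is that `mallocBytes` draws from the C heap, outside GHC's `-M` accounting, so freeing it leaves headroom for whatever bzlib allocates internally.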
I agree that this looks like a deficiency in the memory allocator. It would be better to write to the ghc-users mailing list (or at least CC Simon Marlow) to draw attention to your message.
Maybe bzlib allocates using malloc()? That would not be tracked by GHC's memory management, but it could still cause an OOM. Another problem is that if you ask for a large amount of memory in one go, the request is usually honoured immediately, and we only GC shortly afterwards. If this is the problem for you, please submit a ticket and I'll see whether it can be changed. You could work around it by calling System.Mem.performGC just before allocating the memory.

Cheers,
Simon
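The workaround Simon describes, forcing a collection just before the large allocation, could be sketched as follows; the `mallocBytes` call here is only an assumed stand-in for whatever large allocation bzlib actually performs:

```haskell
import Foreign.Marshal.Alloc (mallocBytes, free)
import System.Mem (performGC)

-- Stand-in for the size bzlib might request in one go (64 MB, arbitrary).
bigAllocBytes :: Int
bigAllocBytes = 64 * 1024 * 1024

main :: IO ()
main = do
  -- Force a major collection so the heap is as empty as possible
  -- before the big allocation, rather than GC'ing only afterwards.
  performGC
  buf <- mallocBytes bigAllocBytes
  -- ... hand buf to the foreign compressor here ...
  free buf
```

`performGC` (from `System.Mem` in base) triggers an immediate major collection, so garbage accumulated during the simulation is reclaimed before the allocator has to honour the large request.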