
I recently discovered that I'm running into the IORef / garbage collection issue described at the bottom of the following page of the GHC user's guide:

    http://www.haskell.org/ghc/docs/6.4/html/users_guide/faster.html

Increasing the heap and allocation area sizes (with -H, -M and -A) improved my runtimes considerably, by keeping the garbage collector from eating up all of my execution time, but I have some questions:

1. What "goes wrong" with mutable data and generational garbage collection to make this sort of workaround necessary? The issue surprised me because I thought there was plenty of successful experience with generational GC for languages with many more destructive updates than Haskell (e.g. Java). Is there some optimization that the GHC runtime is not implementing?

2. If there is a known fix for this issue, what would it involve (and, if anyone can guess, how much work might it be)?

3. What is the best workaround if you *don't* know the maximum amount of memory available to your program? Would it be best to fall back to a copying collector if you want your program to consume and release memory "as needed", or is there a cleverer trick?

Thanks,
- Ravi
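
P.S. For concreteness, here is a stripped-down sketch of the sort of access pattern I'm talking about (not my actual program; the sizes and iteration counts are arbitrary): a large pile of long-lived IORefs that gets mutated on every pass.

    import Data.IORef (newIORef, readIORef, modifyIORef)

    main :: IO ()
    main = do
      -- A large collection of long-lived mutable cells (size is arbitrary,
      -- just enough to make the GC behaviour visible).
      refs <- mapM newIORef [1 .. 200000 :: Int]
      let loop :: Int -> IO ()
          loop 0 = return ()
          loop n = do
            -- Destructive updates to old-generation data on every iteration.
            mapM_ (\r -> modifyIORef r (+1)) refs
            loop (n - 1)
      loop 50
      total <- mapM readIORef refs
      print (sum total)

Running something like this with +RTS -sstderr, once with the defaults and once with a larger allocation area (say +RTS -A10m -sstderr), should make the difference in time spent in the collector easy to see.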