
Hello Ketil,

Wednesday, March 22, 2006, 11:02:49 AM, you wrote:

Of course, any sophisticated algorithm tends to be the result of long research, so we are far from having such algorithms; my words were more a rhetorical point than a concrete suggestion. What is realistic at the moment:

1) tune the GHC algorithm to eliminate the worst cases, especially disk swapping;

2) make GHC's internal GC tuning features more accessible (this is really more a question of documenting them than of adding or changing APIs), so that anyone who needs a specific GC tuning strategy for his own program can implement it himself;

3) document the GC and the details of GC tuning.
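For concreteness, the GC control that a program can already exercise from Haskell itself is small. Here is a minimal sketch, assuming a reasonably modern base (performMinorGC and performMajorGC arrived later than the old performGC, which nowadays is a synonym for performMajorGC); everything beyond these calls, such as heap limits, generation count and compaction, is reachable only through +RTS flags like -M, -H, -G and -c.

    import System.Mem (performMajorGC, performMinorGC)

    -- Minimal sketch of the GC control that base exposes to a program.
    main :: IO ()
    main = do
      performMinorGC   -- collect only the youngest generation
      performMajorGC   -- full collection over all generations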
Moreover, we should perform compaction immediately when swapping starts to grow. Imagine, for example, a program with 80 MB residency running on a machine with 100 MB free: performing a compacting GC each time memory usage grows to 100 MB should be a better strategy than waiting for 160 MB of usage (the doubling that a copying collection needs). Of course, the concrete calculation of whether to start a GC or not is a delicate task, but it should at least be performed on each minor GC.
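GHC already has a compaction knob: +RTS -c selects a compacting collection for the oldest generation, and when a maximum heap is set with -M the runtime switches to compaction on its own as live data approaches the limit. A user-level approximation of the "collect at 100 MB rather than 160 MB" strategy can also be written as a watchdog thread. This is only a sketch, assuming GHC 8.2 or later (for GHC.Stats) and a program started with +RTS -T so statistics are collected; the polling interval is arbitrary.

    import Control.Concurrent (forkIO, threadDelay)
    import Control.Monad (forever, when)
    import Data.Word (Word64)
    import GHC.Stats (GCDetails (..), RTSStats (..), getRTSStats)
    import System.Mem (performMajorGC)

    -- Force a major (and, with +RTS -c, compacting) collection whenever
    -- live data passes the limit, instead of letting a copying collection
    -- wait for roughly twice the residency. Requires +RTS -T; getRTSStats
    -- fails if statistics are not enabled. The live-bytes figure is as of
    -- the last GC, which is precise enough for this heuristic.
    gcWatchdog :: Word64 -> IO ()
    gcWatchdog limitBytes = do
      _ <- forkIO $ forever $ do
        threadDelay 500000                     -- poll twice per second
        stats <- getRTSStats
        let live = gcdetails_live_bytes (gc stats)
        when (live > limitBytes) performMajorGC
      return ()

A program that treats 100 MB as its comfortable ceiling would call gcWatchdog (100 * 1024 * 1024) at startup and run with +RTS -T -c -RTS.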
I'd like to urge you to have a go at implementing some of these strategies.
KM> I must admit that I am slightly skeptical of complicated heuristics
KM> here, since actual performance would seem to depend a lot on the
KM> memory management policies of the underlying OS.

KM> However, I find that setting GHC to use the equivalent of +RTS -Mxxx,
KM> where xxx is about 80% of physical RAM, generally works well.
KM> Currently, I do this through a C file reading values from sysconf(),
KM> which is a bit cumbersome. A way to set these defaults through the
KM> ghc command line would be welcome.

KM> -k

--
Best regards,
 Bulat                            mailto:Bulat.Ziganshin@gmail.com
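P.S. The wrapper KM describes can also be sketched in Haskell rather than C, under stated assumptions: Linux-specific /proc/meminfo parsing, the unix package for executeFile, and "realprog" as a placeholder for the real executable name.

    import System.Environment (getArgs)
    import System.Posix.Process (executeFile)

    -- Wrapper: exec the real program with +RTS -M set to 80% of
    -- physical RAM. The first line of /proc/meminfo looks like
    -- "MemTotal:  16384256 kB" on Linux.
    main :: IO ()
    main = do
      args <- getArgs
      mem  <- readFile "/proc/meminfo"
      let kb = case words (head (lines mem)) of
                 (_ : n : _) -> read n :: Integer
                 _           -> error "unexpected /proc/meminfo format"
          limitKb = kb * 8 `div` 10            -- 80% of physical RAM
      executeFile "realprog" True
                  (args ++ ["+RTS", "-M" ++ show limitKb ++ "k", "-RTS"])
                  Nothing

It passes its own arguments through and appends the computed heap limit; the RTS accepts size suffixes like k and m on -M, and the real program must be built with -rtsopts for the +RTS arguments to be accepted.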