
Hello,

I'm still chasing down a memory leak in my server application, written in Haskell with GHC 6.4.x under MinGW/MSYS. In the scenario described below, I repeat the same server request once per second, continuously. Using some memory monitoring tools, I've discovered that memory usage fluctuates within a range of approximately 850 KB (as garbage is collected, I assume), but at regular intervals the range gets bumped up by ~1 MB. So in effect I get a stair-stepping of memory usage that keeps repeating until memory runs out. WinDbg has revealed that this stepping coincides with the GHC runtime function getMBlocks().

My question is: what conditions affect when the runtime determines that it needs more memory? Is it a pure "no more room" trigger, or is there some sort of algorithm behind it? I assume this behavior means I still have a space leak somewhere in my Haskell code, though none of the leak-checking tools I've used indicate one.

Thanks,
Rich
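
P.S. My real code is too large to post, but the kind of pattern I've been hunting for looks like the hypothetical loop below (the names and the request handler are stand-ins, not my actual code). modifyIORef is lazy, so each iteration just adds another (+1) thunk to a growing chain; I gather from the RTS sources that getMBlocks() hands out memory in 1 MB megablocks, which would match the step size I'm seeing as the heap grows to hold those thunks:

    import Control.Concurrent (threadDelay)
    import Data.IORef

    main :: IO ()
    main = do
        count <- newIORef (0 :: Int)
        let loop = do
                handleRequest
                -- Leaks: builds an unevaluated chain of (+1) thunks.
                modifyIORef count (+1)
                -- A strict write plugs this particular leak:
                --   readIORef count >>= \n -> writeIORef count $! n + 1
                threadDelay 1000000   -- one request per second
                loop
        loop
      where
        handleRequest = return ()     -- stand-in for the real work

I've also been running with something like +RTS -Sstderr to watch the per-GC statistics, in case that detail matters.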