
Hi
> First bad thing: Stack size (memory consumed) doubles each time it overflows.
Bad thing? Assume that each allocation (invoking the overflow behaviour etc.) takes some constant amount of time. To get the stack to n bytes by doubling takes O(log n) overflows; to get it there by a constant increment takes O(n) overflows. If you store the stack in a single linear block, then each overflow must also copy the existing contents, which costs O(n), so a constant increment risks O(n^2) total work unless you double each time. I think it's fairly well understood that structures like hash tables should double in size when they overflow, rather than growing by some small increment. Also remember that doubling never wastes more than 50% of the stack, which is a relatively small amount.
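
As a rough illustration (a minimal sketch, not GHC's actual RTS code; the 4K starting size and 1M target are made-up numbers), here is a count of the total bytes copied while growing a contiguous stack under each policy:

    -- Total bytes copied while growing a contiguous stack from 'size'
    -- bytes up to at least 'target' bytes, where every overflow copies
    -- the whole current stack into a fresh, larger block.

    -- Doubling policy: the block size doubles on each overflow.
    copiedDoubling :: Int -> Int -> Int
    copiedDoubling target size
      | size >= target = 0
      | otherwise      = size + copiedDoubling target (size * 2)

    -- Fixed-increment policy: the block grows by 'step' bytes each time.
    copiedFixed :: Int -> Int -> Int -> Int
    copiedFixed step target size
      | size >= target = 0
      | otherwise      = size + copiedFixed step target (size + step)

    main :: IO ()
    main = do
      let target = 1024 * 1024                       -- grow to 1M
          start  = 4096                              -- from a 4K stack
      putStrLn $ "doubling:      " ++ show (copiedDoubling target start)
      putStrLn $ "fixed 4K step: " ++ show (copiedFixed start target start)
      -- doubling copies ~1M bytes in total (O(n)); the fixed increment
      -- copies ~128M (O(n^2 / step)).
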
> Third bad thing (the really bad thing): If a stack has temporarily grown (to 64M say), it will never shrink back down again to something more typical (< 4K say). If I understand correctly, it will continue to take 64M from the heap regardless.
That would be nice. But it's only beneficial if there are programs that take a large amount of stack at some point, then shrink back down to very little and keep running for a reasonable amount of time. Console programs probably don't fit this pattern (they tend to be batch style and exit quickly). GUI programs probably do, so stack shrinking may become more important as the GUI toolkits mature and Haskell starts getting used for UI-type things; a sketch of that pattern follows below. That said, unless there is a real user with a real problem (rather than a theoretical concern), priority may well go to other bugs.
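
For concreteness, a minimal sketch of the kind of program that would benefit (the names and sizes are made up for illustration, and whether the stack actually grows this far depends on the RTS settings):

    -- Hypothetical long-running program: one early burst of deep,
    -- non-tail recursion forces the stack to grow, after which the
    -- program idles with almost no stack in use.
    countDown :: Int -> Int
    countDown 0 = 0
    countDown n = 1 + countDown (n - 1)   -- builds one stack frame per step

    main :: IO ()
    main = do
      print (countDown 500000)  -- stack grows large here, once
      -- A long-lived loop follows; it needs almost no stack, but the
      -- grown stack is (per the report) never returned to the heap.
      interact id               -- stands in for a GUI event loop

Thanks
Neil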