
Perhaps, if -M is not otherwise set, 'getrlimit(RLIMIT_AS,..)' could be called and the maximum heap size set to just under that, since that is the point at which the OS will forcefully kill the program anyway.

John
--
John Meacham - ⑆repetae.net⑆john⑈
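For concreteness, a minimal sketch of that idea in C (the helper name default_max_heap is made up for illustration; this is not GHC's actual RTS code):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Hypothetical helper: derive a default maximum heap size from the
     * address-space rlimit, leaving a little headroom below the point
     * where the OS would kill the process anyway. */
    static unsigned long long default_max_heap(void)
    {
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) != 0 || rl.rlim_cur == RLIM_INFINITY)
            return 0;                                  /* no usable limit set */
        return (unsigned long long)rl.rlim_cur
             - (unsigned long long)rl.rlim_cur / 20;   /* ~5% slack */
    }

    int main(void)
    {
        unsigned long long m = default_max_heap();
        if (m == 0)
            printf("RLIMIT_AS is unlimited; no default -M\n");
        else
            printf("a default -M would be ~%llu bytes\n", m);
        return 0;
    }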

John Meacham wrote:
> Perhaps, if -M is not otherwise set, 'getrlimit(RLIMIT_AS,..)' could be
> called and the maximum heap size set to just under that, since that is
> the point at which the OS will forcefully kill the program anyway.

Good idea, I've created a task ticket for it.

Cheers,
Simon

Simon Marlow wrote:
> John Meacham wrote:
>> Perhaps, if -M is not otherwise set, 'getrlimit(RLIMIT_AS,..)' could be
>> called and the maximum heap size set to just under that

Of course, it is commonly set to 'unlimited' anyway. Perhaps I should limit it; OTOH, the value must be less than 2Gb (a signed int), which will soon be on the small side for a modern workstation.

For my programs, I've found that setting -M to 80% of physical memory tends to work well. Beyond that, I get thrashing and lousy performance. (Perhaps programs mmap'ing large files etc. can work well beyond physical memory? I'd be interested to hear others' experiences.)

Quite often, I find the program will run equally well with a smaller heap (presumably GC'ing harder?). I think it would be a good default to at least try as hard as possible to keep the heap smaller than physical RAM.

(Caveat: I'm on a Linux system which doesn't handle large heap sizes very well at the moment, so my observations may not apply.)

-k
--
If I haven't seen further, it is by standing in the footprints of giants
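A sketch of that 80%-of-physical heuristic, assuming glibc's _SC_PHYS_PAGES sysconf extension (it is not strict POSIX):

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        long pages = sysconf(_SC_PHYS_PAGES);   /* glibc/BSD extension */
        long psize = sysconf(_SC_PAGE_SIZE);
        if (pages < 0 || psize < 0) {
            fprintf(stderr, "cannot query physical memory\n");
            return 1;
        }
        unsigned long long phys = (unsigned long long)pages
                                * (unsigned long long)psize;
        printf("physical RAM: %llu bytes, 80%% of that: %llu\n",
               phys, phys / 10 * 8);
        return 0;
    }

The second figure is what one might then pass to the program as +RTS -M<bytes> -RTS.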

On Wed, Apr 19, 2006 at 01:23:56PM +0200, Ketil Malde wrote:
>> Perhaps, if -M is not otherwise set, 'getrlimit(RLIMIT_AS,..)' could be
>> called and the maximum heap size set to just under that
>
> Of course, it is commonly set to 'unlimited' anyway. Perhaps I should
> limit it; OTOH, the value must be less than 2Gb (a signed int), which
> will soon be on the small side for a modern workstation.

On 64-bit systems (where long is 64 bits) it is 64 bits. Many 32-bit systems also provide a 'getrlimit64' as part of their large-file support, which is likewise 64 bits, so I don't foresee this being a big issue in practice.
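A sketch of that 32-bit case: with _LARGEFILE64_SOURCE defined, glibc exposes getrlimit64 and the 64-bit rlim64_t, so limits above 2Gb stay representable:

    #define _LARGEFILE64_SOURCE
    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit64 rl;
        if (getrlimit64(RLIMIT_AS, &rl) != 0) {
            perror("getrlimit64");
            return 1;
        }
        if (rl.rlim_cur == RLIM64_INFINITY)
            printf("RLIMIT_AS: unlimited\n");
        else
            printf("RLIMIT_AS: %llu bytes\n",
                   (unsigned long long)rl.rlim_cur);
        return 0;
    }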
> For my programs, I've found that setting -M to 80% of physical memory
> tends to work well. Beyond that, I get thrashing and lousy performance.
> (Perhaps programs mmap'ing large files etc. can work well beyond
> physical memory? I'd be interested to hear others' experiences.)

Yeah, that is what I do. I actually set my ulimit to about the same, to keep any individual thing from getting big enough to swap out my X server and make life unhappy. For the rare app that actually needs > 2G of RAM (ahem... jhc) I can temporarily raise the limit. It has made things much nicer when I accidentally write a memory bomb in any language.

John
--
John Meacham - ⑆repetae.net⑆john⑈
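The ulimit habit described above can also be applied programmatically; a sketch of a small wrapper that caps RLIMIT_AS at 2Gb before exec'ing a target program (the 2Gb figure is only an example):

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/resource.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s program [args...]\n", argv[0]);
            return 2;
        }
        struct rlimit rl;
        if (getrlimit(RLIMIT_AS, &rl) == 0) {
            rl.rlim_cur = 2UL * 1024 * 1024 * 1024;   /* soft limit: 2Gb */
            if (setrlimit(RLIMIT_AS, &rl) != 0)
                perror("setrlimit");   /* non-fatal: just run unlimited */
        }
        execvp(argv[1], &argv[1]);
        perror("execvp");
        return 127;
    }

Note that a child process can raise its soft limit back up to the hard limit on its own, so for a cap that cannot be undone the hard limit (rl.rlim_max) would have to be lowered as well.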
participants (3)
- John Meacham
- Ketil Malde
- Simon Marlow