detecting memory size?

Hi,

Is it possible to detect memory (i.e. RAM, not virtual memory) size from inside a Haskell program (so that I can keep my program from growing too large, with consequent thrashing)? And if so, to what degree of portability?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

Ketil Malde wrote:
Is it possible to detect memory (i.e. RAM, not virtual memory) size
What do you mean by "memory size"? How much RAM is installed in the system? The amount which the process is currently using? The amount which the OS might be willing to allocate to your process at any given point in time? Something else?
from inside a Haskell program (so that I can keep my program from growing too large with consequent thrashing)? And if so, to what degree of portability?
On Linux, you can read /proc/* (e.g. /proc/meminfo) with readFile, but that isn't remotely portable (nor are any of the similar mechanisms used by other systems; at least /proc isn't as inherently non-portable as reading kernel variables via /dev/kmem).
getrusage() is relatively portable across Unix variants, but I don't
see a binding in (the 5.04 version of) PosixProcEnv.lhs (which is
where I'd expect it to live).
As for the amount which the OS might be willing to allocate to your
process at any given point in time, that information probably isn't
available by any means.
--
Glynn Clements

Glynn Clements
What do you mean by "memory size"? How much RAM is installed in the system? The amount which the process is currently using? The amount which the OS might be willing to allocate to your process at any given point in time? Something else?
My apologies for being unclear! What I really want is the amount of memory my application can allocate and actively exercise without causing thrashing. On my Linux computer, that amounts more or less to the installed physical RAM, minus a bit, so I'll settle for that. :-) (After browsing various information, it seems I'm after the minimum of physical RAM and getrlimit(RLIMIT_DATA)).
On Linux, you can read /proc/* (e.g. /proc/meminfo) with readFile,
getrusage() is relatively portable across Unix variants
Perhaps the best approach would be to use getrusage(), and try to decrease memory use if/when there are a lot of page faults happening? BTW, I've started looking into this after a similar problem was mentioned on the darcs list[0]. -kzm [0] http://www.abridgegame.org/pipermail/darcs-users/2004/001022.html

Ketil Malde wrote:
What do you mean by "memory size"? How much RAM is installed in the system? The amount which the process is currently using? The amount which the OS might be willing to allocate to your process at any given point in time? Something else?
My apologies for being unclear! What I really want is the amount of memory my application can allocate and actively exercise without causing thrashing. On my Linux computer, that amounts more or less to the installed physical RAM, minus a bit, so I'll settle for that. :-)
(After browsing various information, it seems I'm after the minimum of physical RAM and getrlimit(RLIMIT_DATA)).
IIRC, getrlimit(RLIMIT_DATA) doesn't mean much on Linux, as it doesn't
include memory which is added using mmap(..., MAP_ANON), which is used
by glibc's malloc(). Also, getrlimit(RLIMIT_RSS) is probably more
relevant for your purposes.
--
Glynn Clements

Glynn Clements
IIRC, getrlimit(RLIMIT_DATA) doesn't mean much on Linux, as it doesn't include memory which is added using mmap(..., MAP_ANON), which is used by glibc's malloc(). Also, getrlimit(RLIMIT_RSS) is probably more relevant for your purposes.
I also got a reply from David Roundy (the author of darcs); he is using sysconf(3), which apparently is fairly portable, and suffices to at least obtain the amount of physical memory. -kzm

On Thu, Jan 29, 2004 at 01:13:28PM +0100, Ketil Malde wrote:
My apologies for being unclear! What I really want is the amount of memory my application can allocate and actively exercise without causing thrashing. On my Linux computer, that amounts more or less to the installed physical RAM, minus a bit, so I'll settle for that. :-)
(After browsing various information, it seems I'm after the minimum of physical RAM and getrlimit(RLIMIT_DATA)).
I use the following with ghc. You don't really want to do this in Haskell, since there's no way to set the RTS values from Haskell anyway.

Ketil Malde wrote:
What I really want is the amount of memory my application can allocate and excercise lively without causing thrashing. On my Linux computer, that amounts more or less to the installed, physical RAM, minus a bit, so I'll settle for that. :-)
An easier way would be to make this a configuration option at installation time - the justification being that users probably have a better idea of how much RAM should be allowed to the program.
Perhaps the best approach would be to use getrusage(), and try to decrease memory use if/when there are a lot of page faults happening?
The problem is that you'll have trouble finding out the page fault rate - that's even less portable than finding out available RAM. On some Windows versions, this information is available but undocumented, and changed location and format on several occasions IIRC. You might also need administrator privileges to access that kind of information...

The best you can do is to check whether your program is slowing down, and reduce memory usage if it does. That's not a very reliable measurement practice though... and if your program is getting slower when it's RAM-starved, you may even end up in a self-starvation spiral...

Regards,
Jo
--
Currently looking for a new job.

Joachim Durchholz
What I really want is the amount of memory my application can allocate and actively exercise without causing thrashing. On my Linux computer, that amounts more or less to the installed physical RAM, minus a bit, so I'll settle for that. :-)
An easier way would be to make this a configuration option at installation time - the justification being that users probably have a better idea of how much RAM should be allowed to the program.
Actually, there is currently a parameter to use at run-time. The problem is that it is a time/space trade-off; if this parameter is set too conservatively, the program will be unnecessarily slow, and if too liberal, the program will thrash, giving you on average about 5% CPU. In this case, it's better to crash early with OOM. (And the optimal setting depends on the data -- not just data size.) So the point of this exercise is to attempt to automatically determine a reasonable default. -kzm
participants (4)
- David Roundy
- Glynn Clements
- Joachim Durchholz
- Ketil Malde