Hi Herbert,
It sounds like you're interested in running just one client computation at a time? If so, you don't have a disambiguation problem -- if the total memory footprint crosses a threshold, you know who to blame.
At least that seems easier than needing per-computation or per-IO-thread caps. Incidentally, the folks who implemented Second Life did interesting work on that -- they hacked Mono to be able to execute untrusted code with resource bounds.
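
For what it's worth, here's a rough sketch of the single-worker watchdog idea, assuming
GHC 7.4's GHC.Stats interface and a program started with +RTS -T. The exception type,
the limit argument, and the 100 ms polling interval are all made up for illustration:

{-# LANGUAGE DeriveDataTypeable #-}

import Control.Concurrent (ThreadId, threadDelay, throwTo)
import Control.Exception (Exception)
import Data.Int (Int64)
import Data.Typeable (Typeable)
import GHC.Stats (currentBytesUsed, getGCStats)

data MemoryLimitExceeded = MemoryLimitExceeded
  deriving (Show, Typeable)

instance Exception MemoryLimitExceeded

-- Watchdog: poll the RTS heap residency and interrupt the worker
-- once it crosses the limit.  With only one client computation
-- running at a time, the worker is the obvious culprit.
watchHeap :: Int64 -> ThreadId -> IO ()
watchHeap limitBytes worker = loop
  where
    loop = do
      stats <- getGCStats                    -- needs +RTS -T
      if currentBytesUsed stats > limitBytes
        then throwTo worker MemoryLimitExceeded
        else threadDelay 100000 >> loop      -- re-check every 100 ms

The worker would then catch MemoryLimitExceeded much like a timeout. Note that the
residency figure is only refreshed when a GC actually runs, so this is coarse at best.
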
Cheers,
-Ryan
On Thu, Apr 19, 2012 at 6:45 AM, Herbert Valerio Riedel
<hvr@gnu.org> wrote:
Hello GHC Devs,
One issue that's been bothering me when writing Haskell programs meant
to be long-running processes, performing computations on external
input data in an event/request loop (think web services, SQL servers,
or REPLs), is that it's desirable to be able to limit resource usage
and to "contain" the effects of computations which exhaust the
resource limits (i.e. without crashing and burning the whole process).
For the time dimension, I'm already using System.Timeout.timeout to
make sure that even a (forced) pure computation doesn't take
(significantly) more wall-clock time than I expect it to.
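
Concretely, the pattern I use looks roughly like the sketch below (the
name 'boundedEval' is just for illustration; it relies on the deepseq
package's NFData/force):

import Control.DeepSeq (NFData, force)
import Control.Exception (evaluate)
import System.Timeout (timeout)

-- Evaluate a pure value to normal form, but give up after the given
-- number of microseconds.  'force' ensures the work actually happens
-- inside the 'timeout' window instead of lazily afterwards.
boundedEval :: NFData a => Int -> a -> IO (Maybe a)
boundedEval micros x = timeout micros (evaluate (force x))

e.g. boundedEval 2000000 result gives me Nothing if the value couldn't
be forced within two seconds.
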
But I'm missing a similar facility for constraining the space
dimension. In other languages such as C, I have (more or less) the
ability to detect /local/ out-of-memory conditions (e.g. by checking
the return value of malloc(3) for heap allocations, or by handling an
OOM exception), roll back the computation, and skip to the next
computation request (which hopefully requires less memory...).
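
To make the desired usage concrete, I'd like to be able to write a
request loop shaped roughly like the sketch below, where
HeapLimitExceeded is a purely hypothetical exception standing in for
whatever mechanism would signal the local out-of-memory condition:

{-# LANGUAGE DeriveDataTypeable #-}

import Control.Exception (Exception, handle)
import Data.Typeable (Typeable)

-- Hypothetical exception signalling that a single computation blew
-- its heap budget; the point is that only *this* request is aborted.
data HeapLimitExceeded = HeapLimitExceeded
  deriving (Show, Typeable)

instance Exception HeapLimitExceeded

-- Serve requests one after another; a request that exhausts its heap
-- budget is dropped, and the loop carries on with the next one.
serverLoop :: IO req -> (req -> IO ()) -> IO ()
serverLoop nextRequest process = loop
  where
    loop = do
      req <- nextRequest
      handle (\HeapLimitExceeded ->
                putStrLn "request aborted: heap limit exceeded")
             (process req)
      loop
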
So, is there already any such facility provided by the GHC Platform
that I've missed so far?
...and if not, would such a memory-limiting facility be reconcilable
with the GHC RTS architecture?
Cheers,
hvr