
On Fri, Jun 3, 2011 at 03:53, Ketil Malde <dm-list-haskell-cafe@scs.stanford.edu> writes:
>> leaking file descriptors
>
> ...until they are garbage collected.  I tend to consider the OS fd
> limitation an OS design error - I've no idea why there should be some
> arbitrary limit on open files, as long as there is plenty of memory
> around to store them.  But, well, yes, it is a real concern.
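
(Side note on the leak itself: the usual way to avoid depending on the GC is to scope the handle explicitly, e.g. with withFile or bracket from base, so the descriptor is closed as soon as the action finishes rather than whenever a finalizer runs. A minimal sketch; firstLine is just an illustrative name:

    import System.IO (IOMode (ReadMode), hGetLine, withFile)

    -- withFile closes the handle - and thus the descriptor - as soon as
    -- the action returns or throws, instead of waiting for a finalizer.
    firstLine :: FilePath -> IO String
    firstLine path = withFile path ReadMode hGetLine

bracket (openFile path ReadMode) hClose works the same way when withFile's shape doesn't fit; the one caveat is that lazy reads like hGetContents must be forced before the handle is closed.)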
In the case of Unix, available memory was indeed the motivating factor. The DEC minicomputers it was developed on didn't have a whole lot of memory, and older Unix preallocated the per-process file structures as part of the process structure (struct proc) for speed (again, old slow systems). The modern reason for the limits is mostly to avoid runaway processes: usually the hard limit is set fairly high, but the soft limit is lower.
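
For reference, the soft/hard pair is the ordinary getrlimit/setrlimit RLIMIT_NOFILE mechanism, and an unprivileged process may raise its own soft limit up to the hard limit. A rough sketch using System.Posix.Resource from the unix package (raiseOpenFileLimit is just an illustrative name):

    import System.Posix.Resource
        ( Resource (ResourceOpenFiles)
        , ResourceLimits (..)
        , getResourceLimit
        , setResourceLimit
        )

    -- Read RLIMIT_NOFILE and raise the (typically low) soft limit to the
    -- hard limit; going beyond the hard limit requires privileges.
    raiseOpenFileLimit :: IO ()
    raiseOpenFileLimit = do
        limits <- getResourceLimit ResourceOpenFiles
        setResourceLimit ResourceOpenFiles limits { softLimit = hardLimit limits }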