
On 2004-12-06, Robert Dockins wrote:
[1] On my Linux system, the overhead seems to be less than 2 MB; 5 MB is the figure used by the OP.
One other problem with that is that 5 MB is a LOT on a device such as my Zaurus. I have an OCaml development environment on this PDA, as well as a Python one, but I think that GHC is going to be out of the question.
If we assume that Haskell programs are a) uncommon and b) at the top of a solution stack (i.e., not the OS, not the GUI toolkit, not the network stack), then this reasoning is sound, and we don't really need dynamic linking. If, on the other hand, you imagine Haskell widening its borders and moving into other niches, the value of dynamic linking becomes apparent.
That is an excellent point. Who would use an ls or cp that requires 10 MB of RAM, especially on embedded devices?
The problem, of course, is that Haskell likes to bind tightly to the libraries it uses (inlining across modules, among other optimizations). So imagine if the "package" unit were a barrier to those kinds of optimizations. Then no knowledge of the internals of a package would be needed by importing modules, and "sufficiently" compatible packages could be drop-in replacements, .so or .dll style.
I suppose I am suggesting that we consider the "package" as a unit which has a stable ABI. Is this possible/easy to do? Could we then implement dynamic linking with packages?
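
Concretely, such a package boundary might look something like the sketch below. The module and function names are made up, but NOINLINE is a real GHC pragma that forbids inlining a definition anywhere, including into importing modules:

    module StablePkg.Api (lookupUser) where

    import qualified Data.Map as Map

    -- The pragma stops GHC from ever inlining this definition into
    -- importing modules, so callers depend only on the signature,
    -- much like a call through a .so/.dll boundary. A different (but
    -- type-compatible) implementation could then be substituted
    -- without touching the callers.
    {-# NOINLINE lookupUser #-}
    lookupUser :: Map.Map Int String -> Int -> Maybe String
    lookupUser users uid = Map.lookup uid users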
It seems that what we need is a way to control this cross-module optimization. I, for one, think that the performance benefit we get from it is more than offset by the inconvenience. If it were at least made optional, then a lot of other options would become available to us, too. -- John
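
For what it's worth, GHC already has a knob pointing in this direction: -fomit-interface-pragmas keeps unfoldings, strictness information, and similar hints out of the module's .hi file, so importing modules see nothing but type signatures. A minimal sketch (the module and function are hypothetical; the flag is real):

    {-# OPTIONS_GHC -fomit-interface-pragmas #-}
    module StablePkg.Internal (frobnicate) where

    -- With the stripped-down interface, changing this body does not
    -- change the .hi file, so in principle clients need not even be
    -- recompiled against a "sufficiently compatible" replacement.
    frobnicate :: Int -> Int
    frobnicate n = 2 * n + 1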