
On 8 June 2018 at 19:18, Evan Laforge wrote:

On Fri, Jun 8, 2018 at 12:29 AM, Simon Marlow wrote:

heap profiler for a while. However, I imagine at some point loading everything into GHCi will become unsustainable and we'll have to explore other strategies. There are a couple of options here:

- pre-compile modules so that GHCi is loading the .o instead of interpreted code
This is what I do, which is why I was complaining about GHC tending to break it. But when it's working, it works well: I load 500+ modules in under a second.
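
A minimal sketch of that setup, with made-up paths and module names (GHCi only reuses a module's .o when the object file is newer than its source and was built with flags compatible with the current session):

    $ ghc --make -isrc src/Main.hs   # pre-compile the module graph; .hi/.o files land next to the sources
    $ ghci -isrc src/Main.hs         # modules whose .o files are up to date load as object code, not bytecode

An alternative is to run :set -fobject-code inside GHCi, which compiles modules to object code as they are loaded instead of interpreting them.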
- move some of the code into pre-compiled packages, as you mentioned
I was wondering about the tradeoffs between these two approaches, compiled modules vs. packages. Compiled modules have the advantage that you can reload without restarting ghci and relinking a large library, but no one seems to notice when they break, whereas if ghc broke package loading it would get noticed right away. Could they be unified so that, say, -package xyz is equivalent to adding the package root (with all the .hi and .o files) to the -i list? I guess the low-level mechanism of loading a .so vs. a bunch of individual .o files is different.
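
Concretely, the two styles being compared look roughly like this (the package name and paths are hypothetical):

    $ ghci -package mylib Main.hs          # mylib is an installed, pre-built package: linked, never recompiled
    $ ghci -i/path/to/mylib/src Main.hs    # mylib's modules (plus any .hi/.o already next to them) become home modules

In the second form, :reload will pick up source changes under that directory; in the first form, the package is opaque to the session.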
I'm slightly surprised that it keeps breaking for you, given that this is a core feature of GHCi and we have multiple tests for it. You'll need to remind me - what were the bugs specifically? Maybe we need more tests.

There really are fundamental differences in how the compiler treats these two methods, though, and I don't see an easy way to reconcile them. Loading object files happens as part of the compilation manager that manages the compilations for all the modules in the current package, whereas packages are assumed to be pre-compiled and are linked on demand after all the compilation is done.

Cheers
Simon
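
The observable difference in a session reflects that split. A rough sketch, assuming a session started as ghci -isrc -package mylib src/Main.hs (names are again hypothetical):

    ghci> :reload   -- home modules under src/ are re-checked by the compilation manager and
                    -- recompiled if their sources changed; mylib is never recompiled, and its
                    -- code is only linked on demand once evaluation actually needs it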