
I like it. Let me make sure that I've understood this correctly:

- While GHC doesn't need to be built with profiling for you to use profiling in the interpreter, you will need a separate version of the "server binary" for each way you want to support. This should be pretty reasonable, because the server binary is a lot smaller than GHC.

- It seems that GHC will ship bytecode and object code to the server binary. In that case, the interpreted code and compiled code CAN share data with each other; it is only when you want to share data with GHC that you must implement serialization. (Also: an external bytecode format?!)

- Many people have commented that their extensions use dynCompileExpr. I think these cases can be accommodated by making the server binary not a standalone application but a LIBRARY that a custom application (e.g. IHaskell) can link against. The messages sent would then not be values/file descriptors, but the invocations being requested of GHC. Unfortunately, this does seem to imply that most such code would have to be rewritten from scratch against this new library's message-passing interface rather than the GHC API.

Honestly, it seems like the hard part is defining the message-passing protocol, especially since the GHC API is as overgrown as it is today; a rough sketch of the shape of the problem is below.
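To make that last point concrete, here is a minimal sketch, assuming a length-prefixed wire format built on the binary package, of what such a protocol might look like. Every name in it (Request, LoadObject, sendRequest, and so on) is hypothetical, made up for this example; it is meant to show the shape of the problem, not a proposed design:

    import Data.Binary (Binary (..), decode, encode, getWord8, putWord8)
    import qualified Data.ByteString.Lazy as LBS
    import Data.Int (Int64)
    import System.IO (Handle, hFlush)

    -- Each constructor names one invocation the compiler side asks the
    -- server process to perform.  All names are illustrative; GHC
    -- defines nothing of the sort today.
    data Request
      = LoadObject FilePath           -- link an object file in the server
      | LoadBytecode LBS.ByteString   -- ship serialized bytecode (the
                                      -- "external bytecode format" above)
      | EvalStmt String               -- run a statement, as at the prompt

    data Response
      = Done
      | EvalResult String             -- only serializable results can come
                                      -- back; the values themselves stay
                                      -- in the server's heap

    instance Binary Request where
      put (LoadObject fp)   = putWord8 0 >> put fp
      put (LoadBytecode bs) = putWord8 1 >> put bs
      put (EvalStmt s)      = putWord8 2 >> put s
      get = do
        tag <- getWord8
        case tag of
          0 -> LoadObject   <$> get
          1 -> LoadBytecode <$> get
          2 -> EvalStmt     <$> get
          _ -> fail "unknown Request tag"

    instance Binary Response where
      put Done           = putWord8 0
      put (EvalResult s) = putWord8 1 >> put s
      get = do
        tag <- getWord8
        case tag of
          0 -> pure Done
          1 -> EvalResult <$> get
          _ -> fail "unknown Response tag"

    -- Length-prefixed framing over a read/write Handle (e.g. a socket)
    -- connecting GHC to the server binary.
    sendRequest :: Handle -> Request -> IO Response
    sendRequest h req = do
      let payload = encode req
      LBS.hPut h (encode (LBS.length payload))
      LBS.hPut h payload
      hFlush h
      lenBytes <- LBS.hGet h 8          -- an encoded Int64 is 8 bytes
      decode <$> LBS.hGet h (fromIntegral (decode lenBytes :: Int64))

The key property is that a Request names an invocation rather than carrying a heap value: anything that does cross the boundary needs a Binary instance, which is exactly the serialization cost from the second bullet above.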
Edward

Excerpts from Simon Marlow's message of 2015-11-17 02:10:55 -0800:

Hi folks - I've been thinking about changing the way we run interpreted code so that it runs in a separate process. It turns out this has quite a few benefits, and it would let us kill some of the really awkward hacks we have in GHC to work around problems that arise because we're running interpreted code and the compiler on the same runtime.
I summarised the idea here: https://ghc.haskell.org/trac/ghc/wiki/RemoteGHCi
I'd be interested to hear if anyone has any thoughts around this, particularly if doing this would make your life difficult in some way. Are people relying on dynCompileExpr for anything?
Cheers, Simon
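As a point of reference for the dynCompileExpr question: today it compiles an expression inside the GHC process itself and hands back a Data.Dynamic value living in the same heap, which is exactly what a separate-process scheme can no longer offer. A minimal in-process use looks something like the sketch below (assuming the ghc-paths package for libdir; details vary across GHC versions):

    import Data.Dynamic (fromDynamic)
    import GHC
    import GHC.Paths (libdir)   -- from the ghc-paths package, an assumption

    -- Compile a string to a value in *this* process's heap.  Under the
    -- remote scheme the value would live in the server process instead,
    -- so it could not be handed back as a Dynamic like this.
    main :: IO ()
    main = do
      dyn <- runGhc (Just libdir) $ do
        dflags <- getSessionDynFlags
        _ <- setSessionDynFlags dflags
        setContext [IIDecl (simpleImportDecl (mkModuleName "Prelude"))]
        dynCompileExpr "map (+ 1) [1, 2, 3] :: [Int]"
      print (fromDynamic dyn :: Maybe [Int])   -- Just [2,3,4]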