
Thank you for the answer. I'll be working on another project during the summer, but I'm still interested in making interface files load faster.
The idea that I currently like the most is to make it possible to save and load objects in the "GHC heap format". That way, deserialisation could be done with a simple fread() and a fast pointer-fixup pass, which would hopefully make running many 'ghc -c' processes as fast as a single 'ghc --make'. This trick is commonly employed in the games industry to speed up load times [1]. Given that Haskell is a garbage-collected language, the implementation will be trickier than in C++ and will have to be done at the RTS level.
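To make the trick concrete, here is a rough C sketch of what a save/load round-trip with a pointer-fixup pass could look like. The Node type and file layout are invented purely for illustration; this is not GHC RTS code and says nothing about the real heap layout:

    /* Rough sketch of the "memory image + pointer fixup" trick.
       Everything here (Node, the file layout) is made up for illustration. */
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    typedef struct Node {
        int          value;
        struct Node *next;   /* intra-image pointer that must be fixed up on load */
    } Node;

    /* Save: remember the base address the image was written from, then dump it. */
    static void save_image(const char *path, Node *image, size_t bytes)
    {
        FILE *f = fopen(path, "wb");
        uintptr_t base = (uintptr_t)image;
        fwrite(&base,  sizeof base,  1, f);
        fwrite(&bytes, sizeof bytes, 1, f);
        fwrite(image, 1, bytes, f);
        fclose(f);
    }

    /* Load: one fread for the whole image, then a single pass that shifts
       every embedded pointer by (new base - old base). */
    static Node *load_image(const char *path, size_t nnodes)
    {
        FILE *f = fopen(path, "rb");
        uintptr_t old_base;
        size_t bytes;
        fread(&old_base, sizeof old_base, 1, f);
        fread(&bytes,    sizeof bytes,    1, f);
        Node *image = malloc(bytes);
        fread(image, 1, bytes, f);
        fclose(f);

        uintptr_t delta = (uintptr_t)image - old_base;   /* fixup offset */
        for (size_t i = 0; i < nnodes; i++)
            if (image[i].next != NULL)
                image[i].next = (Node *)((uintptr_t)image[i].next + delta);
        return image;
    }

    int main(void)
    {
        /* Build a small contiguous linked list, save it, reload it, walk it. */
        size_t n = 3;
        Node *orig = malloc(n * sizeof *orig);
        for (size_t i = 0; i < n; i++) {
            orig[i].value = (int)i;
            orig[i].next  = (i + 1 < n) ? &orig[i + 1] : NULL;
        }
        save_image("image.bin", orig, n * sizeof *orig);

        for (Node *p = load_image("image.bin", n); p != NULL; p = p->next)
            printf("%d\n", p->value);
        return 0;
    }

In the real thing the fixup pass would presumably have to understand GHC's closure layouts and info pointers, which is part of why it would need to live in the RTS.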
Is this a good idea? How hard would it be to implement this optimisation?
Another idea (that I like less) is to implement a "build server" mode for GHC. That way, instead of a single 'ghc --make', we could run several GHC build servers in parallel. However, Evan Laforge's efforts in this direction didn't bring the expected speedup. Perhaps it's possible to improve on his work.
I don't know if this would help, but I remember that during Rob Pike's initial Go talk he described how the 8g compiler could be so fast. I don't remember the exact details, but it was something to the effect that interface files would embed the interfaces of their dependencies, so the compiler need only read the direct imports, not the transitive ones. Of course this might not work so well with GHC's fat interfaces full of inlined code, but it's a thought. Compilation speed was one of the few things about Go that impressed me, but it impressed me quite a bit.
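If I've understood the idea right, it amounts to each interface carrying a flattened copy of whatever it pulls in from its own imports, so a consumer never has to chase the transitive graph. Here is a toy sketch in C with completely made-up structures, unrelated to the real 8g or .hi formats:

    /* Toy model of "flattened" interfaces: each interface embeds the
       declarations of everything it transitively imports, so a consumer
       only loads its direct imports. Invented for illustration only. */
    #include <stdio.h>

    #define MAX_DECLS 16

    typedef struct Iface {
        const char *module;
        const char *decls[MAX_DECLS];  /* own decls plus decls copied from imports */
        int         ndecls;
    } Iface;

    /* Build the interface for `module`: its own declarations, plus everything
       already recorded in its direct imports' (flattened) interfaces. */
    static Iface flatten(const char *module,
                         const char **own, int nown,
                         const Iface *imports, int nimports)
    {
        Iface out = { module, {0}, 0 };
        for (int i = 0; i < nown; i++)
            out.decls[out.ndecls++] = own[i];
        for (int i = 0; i < nimports; i++)
            for (int j = 0; j < imports[i].ndecls; j++)
                out.decls[out.ndecls++] = imports[i].decls[j];
        return out;
    }

    int main(void)
    {
        /* C exports c1; B imports C and exports b1; A imports only B. */
        const char *c_own[] = { "c1" };
        Iface c = flatten("C", c_own, 1, NULL, 0);

        const char *b_own[] = { "b1" };
        Iface b = flatten("B", b_own, 1, &c, 1);

        /* Compiling A only opens B's interface; C's file is never touched,
           because B's flattened interface already carries c1 along. */
        for (int i = 0; i < b.ndecls; i++)
            printf("visible when compiling A: %s\n", b.decls[i]);
        return 0;
    }

The obvious trade-off is interface size: everything gets duplicated into its importers, which is exactly where GHC's fat, inlining-heavy interfaces might hurt.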