
Recently the Go language was announced at golang.org. There's not a lot in there to make a haskeller envious, except one real big thing: compilation speed. The Go compiler is wonderfully speedy. Anyone have any tips for speeding up ghc? Not using -O2 helps a lot.

I spend a lot of time linking (the ghc API drags in huge amounts of ghc code), and I'm hoping the new dynamic linking support will speed that up. I suppose it should be possible to run the whole thing under the bytecode interpreter, and this works fine for rerunning tests: I can just stay in ghci, make changes, :r, and rerun. But it runs into trouble as soon as code wants to link in foreign C. I also recently discovered -fobject-code, which indeed starts compiling right away, cutting out the ghc startup overhead. However, it doesn't appear to help with the final link, so I wind up having to reinvoke ghc anyway.

According to Rob Pike, the main reason for 6g's speed is that in a dependency chain where A depends on B, which depends on C, the interface file for B pulls up all the information A needs from C, so compiling A only requires reading B. Would it help ghc at all to do the same with .hi files? I've heard that ghc does more cross-module inlining than your typical imperative language compiler, but with optimization off maybe all that can be ignored?

I've seen various bits of noise about supporting parallel builds with --make, and it seems to involve making the whole compiler re-entrant, which is non-trivial. Would it be simpler to parallelize just the pure portions of the compilation, say with parallel strategies? Or to start one ghc per core and use a locking scheme so they don't step on each other's files?
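To make the "parallelize the pure portions" idea concrete, here's a minimal base-only sketch using `par` and `pseq` from GHC.Conc: the `analyze` function and its inputs are just hypothetical stand-ins for an expensive pure compilation phase (typechecking, say), not anything from the actual ghc sources. Compiled with -threaded and run with +RTS -N, the spark can run on a second core.

```haskell
import GHC.Conc (par, pseq)

-- Hypothetical stand-in for a pure per-module compilation phase.
analyze :: [Int] -> Int
analyze = sum . map (\x -> x * x)

main :: IO ()
main = do
  let a = analyze [1 .. 1000]      -- "module A"
      b = analyze [1001 .. 2000]   -- "module B"
      -- Spark the evaluation of a, evaluate b on this thread,
      -- then combine the two results.
      total = a `par` (b `pseq` a + b)
  print total
```

Without -threaded the spark is simply never picked up and the program still produces the same answer, which is the appeal of keeping the parallelism in the pure parts.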
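And the one-process-per-core-plus-locking idea can be sketched in-process with MVars. Everything here is invented for illustration: `compile` stands in for invoking the compiler on one module, the shared `results` list stands in for the output files on disk, and the MVar around it is the lock keeping workers from stepping on each other.

```haskell
import Control.Concurrent
import Control.Monad (forM_)
import Data.Char (toUpper)
import Data.List (sort)

-- Hypothetical stand-in for compiling one module.
compile :: String -> String
compile = map toUpper

main :: IO ()
main = do
  queue   <- newMVar ["a", "b", "c", "d"]  -- modules left to build
  results <- newMVar []                    -- stands in for shared output files
  done    <- newEmptyMVar
  let nWorkers = 2
      worker = do
        next <- modifyMVar queue $ \ms -> case ms of
          []       -> return ([], Nothing)
          (m:rest) -> return (rest, Just m)
        case next of
          Nothing -> putMVar done ()       -- queue drained, worker exits
          Just m  -> do
            let out = compile m            -- the parallel part
            -- the MVar is the lock: one writer to the shared state at a time
            modifyMVar_ results (return . (out :))
            worker
  forM_ [1 .. nWorkers] $ \_ -> forkIO worker
  forM_ [1 .. nWorkers] $ \_ -> takeMVar done
  rs <- readMVar results
  putStrLn (unwords (sort rs))
```

For real ghc processes the lock would have to be a file lock rather than an MVar, but the queue-plus-lock shape is the same.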