
The performance problem was due to the use of unsafePerformIO or other
thunk-locking functions. Such functions can cause severe slowdowns when
the stack is deep, because each call must traverse the stack to
atomically claim thunks that might be under evaluation by multiple
threads.
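To make the cost concrete, here is a hedged sketch (not from the thread; the names `counter` and `deepSum` are invented for illustration) of the kind of program that would hit this path: a thunk built with unsafePerformIO being forced from deep inside a non-tail-recursive call chain, so the thunk-claiming machinery runs against a large stack.

```haskell
-- Hypothetical illustration; `counter` and `deepSum` are invented names.
import Data.IORef
import System.IO.Unsafe (unsafePerformIO)

-- A top-level value built with unsafePerformIO. NOINLINE keeps it a
-- single shared thunk; forcing it goes through the thunk-locking path.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- A deliberately non-tail-recursive sum, so evaluating it pushes one
-- stack frame per step. The unsafePerformIO-backed value is only
-- forced at maximum depth, which is where scanning the whole stack
-- used to be expensive.
deepSum :: Int -> Int
deepSum 0 = unsafePerformIO (readIORef counter)
deepSum n = n + deepSum (n - 1)

main :: IO ()
main = print (deepSum 100000)  -- prints 5000050000
```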
The latest version of GHC should no longer have this problem (or at
least not as severely), because the stack is now split into chunks (see
[1] for performance tuning options), only one of which needs to be
scanned.
So, it might be worth a try to re-apply that thread-safety patch.
[1]: https://plus.google.com/107890464054636586545/posts/LqgXK77FgfV
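For what it's worth, the chunked stack can also be tuned via RTS flags in GHC 7.2 and later, as described in [1]. A sketch (the program name and flag values below are arbitrary examples, not recommendations): `-kc` sets the stack chunk size and `-kb` the chunk buffer size.

```sh
# Example only: run with 64k stack chunks and a 2k chunk buffer.
# The defaults are usually fine; measure before tuning.
./myprog +RTS -kc64k -kb2k -RTS
```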
On 29 August 2011 21:50, Max Bolingbroke wrote:

On 27 August 2011 09:00, Evan Laforge wrote:

Right, that's probably the one I mentioned. And I think he was trying to parallelize ghc internally, so even compiling one file could parallelize. That would be cool and all, but it seems like a lot of work compared to just parallelizing at the file level, as make would do.
It was Thomas Schilling, and he wasn't trying to parallelise the compilation of a single file. He was just trying to make access to the various bits of shared state GHC uses thread-safe. This mostly worked, but it caused an unacceptable performance penalty for single-threaded compilation.
Max
_______________________________________________
Glasgow-haskell-users mailing list
Glasgow-haskell-users@haskell.org
http://www.haskell.org/mailman/listinfo/glasgow-haskell-users
-- Push the envelope. Watch it bend.