
Wait, wait, wait! I wasn't talking about a parallel *runtime*. Nothing changes there. All I'm talking about is a very old issue that never got addressed / resolved. Somewhere in the commentary, or on the mailing list, I seem to recall that the generation of Uniques was the bottleneck for the parallelisation of GHC *itself*. This is about having the compiler use multiple threads; it says nothing about the programs coming out of it.
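For concreteness, here's a minimal sketch of the kind of single shared supply in question (made-up names, not GHC's actual UniqSupply code): every compiler thread funnels through the same counter, so that one atomic bump serialises unique allocation.

    import Data.IORef (IORef, newIORef, atomicModifyIORef')

    newtype Unique = Unique Int deriving (Eq, Show)

    -- One global counter for the whole compiler (hypothetical names).
    newUniqueSupply :: IO (IORef Int)
    newUniqueSupply = newIORef 0

    -- Every thread goes through the same atomic read-modify-write,
    -- so unique allocation becomes the shared hot spot.
    freshUnique :: IORef Int -> IO Unique
    freshUnique ref = atomicModifyIORef' ref (\n -> (n + 1, Unique n))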
I'm all with you on embedded processors and that kind of stuff, but I don't see a pressing need to compile *on* them. Isn't all the ARM stuff assuming cross-compilation anyway?
Ph.
________________________________________
From: mad.one@gmail.com
Yes, this approach to a parallel GHC would only work on 64-bit machines. The idea is, I guess, that we're not going to see a massive demand for parallel GHC running on multi-core 32-bit systems. In other words, 32-bit systems wouldn't get a parallel GHC.
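Roughly, the kind of scheme assumed here (an illustration only, not the actual proposal) packs a thread tag into the high bits of the Unique and lets each thread count in its own range; that only leaves a usefully large counter when the word is 64 bits wide.

    import Data.Bits (shiftL, (.|.), finiteBitSize)
    import Data.IORef (newIORef, atomicModifyIORef')

    newtype Unique = Unique Int deriving (Eq, Show)

    tagBits :: Int
    tagBits = 8   -- room for up to 256 compiler threads (illustrative)

    -- Give each thread its own supply; its id sits in the high bits,
    -- its private counter in the low bits, so two threads can never
    -- hand out the same Unique and never contend on a shared counter.
    mkThreadSupply :: Int -> IO (IO Unique)
    mkThreadSupply tid = do
      ref <- newIORef 0
      let hi = tid `shiftL` (finiteBitSize (0 :: Int) - tagBits)
      pure (atomicModifyIORef' ref (\n -> (n + 1, Unique (hi .|. n))))

    -- On a 64-bit Int that leaves roughly 2^56 uniques per thread;
    -- on a 32-bit Int only about 2^24, which a large module could
    -- plausibly exhaust -- hence "64-bit only".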
Let me make sure I'm understanding this correctly: in this particular proposed solution, the side effect would be that we no longer have a capable 32-bit runtime that supports multicore parallelism? Sorry, but I'm afraid this approach is pretty much unacceptable IMO, for precisely the reason outlined in your last sentence. 32-bit systems are surprisingly common. I have several multicore 32-bit ARMv7 machines on my desk right now, for example. And there are a lot more of those floating around than you might think. If that's the 'cure', I think I (and other users) would consider it far worse than the disease.
Regards, Philip
_______________________________________________
ghc-devs mailing list
ghc-devs@haskell.org
http://www.haskell.org/mailman/listinfo/ghc-devs
--
Regards,
Austin Seipp, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com/