
On May 3, 2010 08:04:14 Johan Tibell wrote:
On Mon, May 3, 2010 at 11:12 AM, Simon Peyton-Jones wrote:
In truth, nested data parallelism has taken longer than we'd hoped to be ready for abuse :-). We have not lost enthusiasm though -- Manuel, Roman, Gabi, Ben, and I talk on the phone each week about it. I think we'll have something usable by the end of the summer.
That's very encouraging! I think people (me included) have gotten the impression that the project ran into problems so challenging that it stalled. Perhaps a small status update once in a while would give people a better idea of what's going on. :)
Most will likely see this on Planet Haskell, but it seems worth mentioning here as there appears to be quite a bit of interest in the technology. Manuel just finished posting quite a nice talk given by Simon PJ recently in Boston: http://pls.posterous.com/simon-peyton-jones-on-data-parallel-haskell

Related to some of the questions asked at the talk, I would be curious to hear any comments on adding support for processor-level SIMD vectorization (e.g., the SSE{1,2,3} instructions on x86) in conjunction with NDPH. Conceptually it seems as simple as having "ideal" vector primitive operations and coding the basic operations (e.g., sumS) in terms of those. This would presumably also have a very positive impact on the existing Vector, Repa, etc. libraries, and would seem quite a bit easier than trying to recognize opportunities for these instructions via loop analysis (or whatever) and inserting them after the fact. Or is the recognition approach felt to be "better" (i.e., in the sense that it would also be applicable in other situations) and easily done with the big data-flow hammer?

I see there is a Trac ticket regarding SIMD instructions, http://hackage.haskell.org/trac/ghc/ticket/3557, so I'm guessing at least part (or all) of the answer is simply the time to do the grunt work of adding the support to Cmm and the native code generators.

Cheers! -Tyson
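P.S. To make the "ideal primitives" idea concrete, here is a rough sketch in plain Haskell. The FloatX4 type and the plusFloatX4/hsumFloatX4 operations are hypothetical stand-ins for what a single SSE-backed primitive (e.g., ADDPS) might look like; they are modelled with ordinary Floats here only so the example runs as-is:

module Main where

import qualified Data.Vector.Unboxed as U

-- Hypothetical 4-wide packed float, standing in for an SSE register.
data FloatX4 = FloatX4 !Float !Float !Float !Float

-- Hypothetical primitive: lane-wise add, imagined as a single ADDPS.
plusFloatX4 :: FloatX4 -> FloatX4 -> FloatX4
plusFloatX4 (FloatX4 a b c d) (FloatX4 e f g h) =
  FloatX4 (a + e) (b + f) (c + g) (d + h)

-- Horizontal sum of the four lanes (a small shuffle/add sequence in practice).
hsumFloatX4 :: FloatX4 -> Float
hsumFloatX4 (FloatX4 a b c d) = a + b + c + d

-- sumS written in terms of the packed primitive: consume 4 elements per
-- step, then fold the leftover tail with scalar adds.
sumS :: U.Vector Float -> Float
sumS v = hsumFloatX4 (go 0 (FloatX4 0 0 0 0)) + U.sum tailPart
  where
    n        = U.length v
    n4       = (n `div` 4) * 4
    tailPart = U.drop n4 v
    go i acc
      | i >= n4   = acc
      | otherwise =
          let chunk = FloatX4 (v U.! i) (v U.! (i+1)) (v U.! (i+2)) (v U.! (i+3))
          in go (i + 4) (acc `plusFloatX4` chunk)

main :: IO ()
main = print (sumS (U.fromList [1 .. 10 :: Float]))  -- expect 55.0

The point being that once such primitives exist, sumS (and friends) reduce to a straightforward strided loop over packed values plus a scalar pass over the tail, rather than relying on the code generator to rediscover the vectorization via loop analysis after the fact.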