
Isn't this the kind of thing Data Parallel Haskell is trying to achieve? I'm in no way an expert in the field, but from what I've read on the subject it looks like this: I have a list of N elements and I want to map a function F over it. Technically, I could spawn N processes and build the result from that, but it would be highly inefficient. So the really hard part is deciding how to split the data to get the best performance. That's pretty easy for a flat structure if you have access to its length, but for a recursive one it is complicated, as you don't know whether a branch of the tree will lead to a leaf or to a huge subtree... the "evil detail"!
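To make the chunking point concrete, here is a minimal sketch of what I
mean - not DPH, just plain Control.Parallel.Strategies, and the helper
name and chunk size are only for illustration:

import Control.Parallel.Strategies (parListChunk, rseq, using)

-- Map f over a flat list and evaluate the results (to WHNF) in
-- parallel, one spark per chunk of chunkSize elements rather than one
-- spark per element.  The chunk size is exactly the tuning knob I was
-- worrying about above.
parMapChunked :: Int -> (a -> b) -> [a] -> [b]
parMapChunked chunkSize f xs =
  map f xs `using` parListChunk chunkSize rseq

main :: IO ()
main = print (sum (parMapChunked 1000 (* 2) [1 .. 100000 :: Int]))

(Compile with -threaded and run with +RTS -N to actually get the
parallelism.)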
Tom

On Thu, Feb 11, 2010 at 11:00:51AM +0100, Ketil Malde wrote:
> Johann Höchtl writes:
>
>> In a presentation of Guy Steele for ICFP 2009 in Edinburgh:
>> http://www.vimeo.com/6624203 he "considers foldl and foldr" harmful,
>> as they hinder parallelism because of "Process first element, then
>> the rest". Instead he proposes a divide-and-merge approach,
>> especially in the light of going parallel.
>
> In Haskell, foldl/foldr apply to linked lists (or lazy streams, if you
> prefer), which are already inherently sequential, and they get a
> rather harsh treatment. I notice he points to finger trees, which I
> thought were implemented in Data.Sequence.
>
>> [The talk and examples] are somewhat geared towards Fortress, but I
>> wonder what Haskellers have to say about his position.
>
> Can we (easily) parallelize operations on Data.Sequence? Often, the
> devil is in the details, and there's a lot of ground to cover between
> 'trees are easier to parallelize' and an efficient and effective
> high-level interface. (E.g. non-strict semantics allowing speculative
> evaluation - you still need to insert manual `par`s, right?)
> -k
> --
> If I haven't seen further, it is by standing in the footprints of giants
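On Ketil's question above about parallelizing Data.Sequence: as far as I
know nothing in the library sparks anything for you, so yes, you still
write the `par`s yourself.  A rough, illustrative sketch (my own names
and threshold; it assumes the combining function is associative with
unit z) of a divide-and-merge reduction over a Seq:

import Control.Parallel (par, pseq)
import qualified Data.Foldable as F
import qualified Data.Sequence as Seq
import Data.Sequence (Seq)

-- Divide-and-conquer reduction over a Seq.  Splitting a finger tree
-- near the middle is cheap (O(log n)), so we halve the sequence,
-- reduce both halves recursively, and spark the left half.
parReduceSeq :: Int -> (a -> a -> a) -> a -> Seq a -> a
parReduceSeq threshold f z s
  | n <= threshold = F.foldr f z s             -- small piece: sequential
  | otherwise      = l `par` (r `pseq` f l r)  -- spark left, work on right
  where
    n        = Seq.length s
    (ls, rs) = Seq.splitAt (n `div` 2) s
    l        = parReduceSeq threshold f z ls
    r        = parReduceSeq threshold f z rs

main :: IO ()
main = print (parReduceSeq 1024 (+) 0 (Seq.fromList [1 .. 100000 :: Int]))

Whether that counts as "easy", and whether the sparks pay for
themselves, is exactly the evil-detail part.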