
On Tue, 9 Jul 2019, Henning Thielemann wrote:
When I read about parallel programming in Haskell, it is always about manual chunking of data. Why?

Most of my parallelized applications benefit from a different computation scheme: start a fixed number of threads (at most the number of available computing cores), and whenever a thread finishes a task, it gets assigned a new one. This is how make -j, cabal install -j, and ghc -j work. I wrote my own package pooled-io, which does the same for IO in Haskell, and there seem to be more packages implementing the same idea.

Can I have that computation scheme for non-IO computations, too, e.g. with Parallel Strategies, monad-par etc.?
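To make the scheme I mean concrete, here is a minimal sketch of such a worker pool for IO, in the spirit of pooled-io (pooledSequence_ is a made-up name, not the actual pooled-io API, and there is no exception handling):

  import Control.Concurrent (forkIO, getNumCapabilities)
  import Control.Concurrent.MVar
           (newMVar, modifyMVar, newEmptyMVar, putMVar, takeMVar)
  import Control.Monad (replicateM)

  -- Run a list of IO actions on a fixed pool of workers; each worker
  -- pulls the next task from the shared queue as soon as it is idle.
  pooledSequence_ :: [IO ()] -> IO ()
  pooledSequence_ tasks = do
    n     <- getNumCapabilities        -- one worker per capability
    queue <- newMVar tasks             -- shared work queue
    dones <- replicateM n newEmptyMVar -- one "finished" signal per worker
    let worker done = do
          next <- modifyMVar queue $ \ts ->
                    case ts of
                      []      -> return ([], Nothing)
                      (t:ts') -> return (ts', Just t)
          case next of
            Nothing -> putMVar done () -- queue empty: worker terminates
            Just t  -> t >> worker done
    mapM_ (forkIO . worker) dones
    mapM_ takeMVar dones               -- wait until all workers finish

With this, something like pooledSequence_ (map processFile files) keeps all cores busy without deciding up front which thread handles which file (processFile and files being whatever the application provides).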
Maybe I misunderstood something, and the 'parallel' package already works the way I expect: starting 100 sparks does not mean that GHC tries to run 100 threads concurrently, and chunking is only necessary when sparking each list element individually causes too much parallelization overhead.
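For reference, this is the contrast I have in mind, using the combinators from Control.Parallel.Strategies; the work functions and the chunk size of 100 are just placeholders:

  import Control.Parallel.Strategies (using, parList, parListChunk, rseq)

  -- placeholders for per-element work
  expensive, cheap :: Int -> Int
  expensive n = sum [1 .. n * 1000]  -- stands for a costly computation
  cheap     n = n + 1                -- stands for a trivial computation

  -- one spark per element: many small sparks
  perElement :: [Int] -> [Int]
  perElement xs = map expensive xs `using` parList rseq

  -- one spark per chunk of 100 elements: fewer, larger sparks
  chunked :: [Int] -> [Int]
  chunked xs = map cheap xs `using` parListChunk 100 rseq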