
Thank you, Murray. My post was not so clear: I was referring to
"automatic" parallelization vs. "manual" parallelization. By "automatic" I
mean that the programmer doesn't have to indicate where to parallelize;
instead, the compiler decides how to parallelize!
Vasili
On Sat, Aug 23, 2008 at 12:58 AM, Murray Gross wrote:
Vasili:
Each "par" "sparks" a new thread, which is then queued for execution. At appropriate points, the threads are distributed to available (free) processors (cores). The result is that parallelization scales automatically with the number of available processors. Take a look at the GPH site for papers that will provide more information on how parallel (and distributed) Haskell does things.
Best,
Murray Gross, Brooklyn College
On Fri, 22 Aug 2008, Galchin, Vasili wrote:
Hello,
On the pure side of the Haskell house, there is hope that the generated code could automagically scale as more cores are added, yes? It seems that on the stateful, monadic side of the house it is the programmer's responsibility to design the application so that it scales across increasing cores? (I am assuming that things like the "par" construct are monadic.)

On Monday, I am starting a several-month project with a company. Allegedly, some of the code will be written in Python. I would like to engage the manager in a discussion about multi-core "enabling" the code now, when we design and implement, rather than later as an afterthought. It seems like a gnarly subject given the current state of the art in software tools. Ideas?!
Regards, Vasili
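As a point of reference for the parenthetical question above: `par` itself is exported from Control.Parallel as a pure combinator, while explicit concurrency lives in the IO monad via forkIO. A minimal sketch of that explicit, monadic style, where the programmer decides where work is forked and how results come back (the summation is just a placeholder workload):

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

    main :: IO ()
    main = do
      box <- newEmptyMVar
      -- The fork point and the hand-off of the result are both explicit.
      _ <- forkIO $ let s = sum [1 .. 10000000 :: Integer]
                    in s `seq` putMVar box s  -- force the sum in the worker thread
      total <- takeMVar box
      print total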