
Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?

On Wed, Mar 4, 2009 at 5:03 PM, FFT wrote:
> Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?
What's the story with distributed-memory multiprocessing? Are Haskell programmers uninterested in it, or are things other than MPI used with it?

fft1976:
> On Wed, Mar 4, 2009 at 5:03 PM, FFT wrote:
>> Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?
> What's the story with distributed-memory multiprocessing? Are Haskell programmers uninterested in it, or are things other than MPI used with it?
http://haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_pa...

On Fri, Mar 6, 2009 at 9:30 AM, Don Stewart wrote:
> http://haskell.org/haskellwiki/Applications_and_libraries/Concurrency_and_pa...
These are all Haskell-derived languages, not libraries, right?

On Thu, Mar 5, 2009 at 10:43 AM, FFT wrote:
> Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?
MPI itself hasn't changed in 14 years, so it's not exactly a moving target. (There's an MPI 2.0, but its most visible changes are not really usable.)
> What's the story with distributed-memory multiprocessing? Are Haskell programmers uninterested in it, or are things other than MPI used with it?
The ratio of work to payoff is unfortunately very high, so it seems to have been abandoned as a topic of fruitful research.

2009/3/6 Bryan O'Sullivan:
> On Thu, Mar 5, 2009 at 10:43 AM, FFT wrote:
>> Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?
> MPI itself hasn't changed in 14 years, so it's not exactly a moving target. (There's an MPI 2.0, but its most visible changes are not really usable.)
MPI forum meetings are ongoing now to update it once again :-) Having implemented MPI 2, I find the comment that the visible changes are not really usable interesting, and really more of an opinion (one that I typically share for some parts of the API, but not others).
> What's the story with distributed-memory multiprocessing? Are Haskell programmers uninterested in it, or are things other than MPI used with it?
> The ratio of work to payoff is unfortunately very high, so it seems to have been abandoned as a topic of fruitful research.
I think you're better off with some message-passing system than with most alternatives in almost all cases when it comes to distributed, concurrent, and even some kinds of parallel programs, but that's based on my real-world experience implementing efficient message-passing systems for customers for about five or six years... so I'm a bit biased.
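
[Editorial note: to make the message-passing style under discussion concrete, here is a minimal runnable Haskell sketch. It simulates two MPI-style "ranks" with forkIO threads, and Chans stand in for MPI's send and receive; the rank names and channel topology are illustrative assumptions, not any particular MPI binding's API. In a real distributed setting the channels would be replaced by network transport.]

import Control.Concurrent (forkIO)
import Control.Concurrent.Chan (newChan, readChan, writeChan)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  toRank1 <- newChan       -- stands in for messages sent to rank 1
  toRank0 <- newChan       -- stands in for messages sent to rank 0
  done    <- newEmptyMVar  -- lets "rank 0" signal completion

  -- "rank 1": receive a message, transform it, reply
  _ <- forkIO $ do
    msg <- readChan toRank1
    writeChan toRank0 (msg ++ "/pong")

  -- "rank 0": send a message, then wait for the reply
  _ <- forkIO $ do
    writeChan toRank1 "ping"
    reply <- readChan toRank0
    putStrLn reply
    putMVar done ()

  takeMVar done            -- keep main alive until rank 0 is finished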

bos:
> On Thu, Mar 5, 2009 at 10:43 AM, FFT wrote:
>> Are MPI bindings still the best way of using Haskell on Beowulf clusters? My feeling is that the bindings have stagnated, but are they perhaps just very mature?
> MPI itself hasn't changed in 14 years, so it's not exactly a moving target. (There's an MPI 2.0, but its most visible changes are not really usable.)
>> What's the story with distributed-memory multiprocessing? Are Haskell programmers uninterested in it, or are things other than MPI used with it?
> The ratio of work to payoff is unfortunately very high, so it seems to have been abandoned as a topic of fruitful research.
Though note the new paper for ICPP: "In this paper, we investigate the differences and tradeoffs imposed by two parallel Haskell dialects running on multicore machines. GpH and Eden are both constructed using the highly-optimising sequential GHC compiler, and share thread scheduling, and other elements, from a common code base. The GpH implementation investigated here uses a physically-shared heap, which should be well-suited to multicore architectures. In contrast, the Eden implementation adopts an approach that has been designed for use on distributed-memory parallel machines."
http://www-fp.cs.st-and.ac.uk/~kh/mainICPP09.pdf
-- Don
participants (4):
- Bryan O'Sullivan
- David Leimbach
- Don Stewart
- FFT