
I chose this example specifically because parsing/compiling is not I/O-bound. Many build systems today achieve multi-core scaling by parallelizing all the phases: parsing, semantic analysis, and compilation.

Your question is a good one, and one we already face in auto-parallelization of purely functional code: how do you know when the cost of doing something in another thread is outweighed by the benefit? I think JIT compilation provides the ultimate answer to these kinds of questions, because you can make guesses, and if you get them wrong, simply try again.
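Concretely, here is a minimal sketch of that guess in the spirit of GHC's spark model, using par/pseq from the parallel package (the threshold and the analyze stand-in are invented for illustration). A spark the runtime never picks up is simply evaluated by the parent, so a wrong guess costs almost nothing:

import Control.Parallel (par, pseq)

-- Stand-in for a CPU-bound compiler phase; any pure function works.
analyze :: [Int] -> Int
analyze = sum . map (\n -> n * n)

-- Divide and conquer, sparking the right half only when the input is
-- large enough that the spark is likely to pay for its overhead.  The
-- threshold encodes the cost/benefit guess; a JIT could retune it at
-- run time instead of hard-coding it.
parAnalyze :: Int -> [Int] -> Int
parAnalyze threshold xs
  | n <= threshold = analyze xs
  | otherwise      = r `par` (l `pseq` (l + r))
  where
    n        = length xs
    (lo, hi) = splitAt (n `div` 2) xs
    l        = parAnalyze threshold lo
    r        = parAnalyze threshold hi

main :: IO ()
main = print (parAnalyze 10000 [1 .. 1000000])

Compile with -threaded and run with +RTS -N so idle cores can pick up the sparks.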
Regards,

John A. De Goes
N-Brain, Inc.
The Evolution of Collaboration

http://www.n-brain.net | 877-376-2724 x 101

On Aug 16, 2009, at 2:46 AM, Marcin Kosiba wrote:

Hi,

IMHO, given a flexible effect system, the decision of how to perform read/write operations on files is a matter for libraries. But I'd like to ask another question: is the effect system you're discussing really capable of providing significant performance improvements for file I/O? Even if we assume some consistency model and transform one correct program into another -- how do you estimate efficiency without knowing the physical media's characteristics? I can see how this could be used to optimize access to different kinds of media (reordering socket/file operations, or even running some of them in parallel), but I don't see how we can benefit from reordering writes to the same medium. Another thing is that not every kind of read/write operation requires the same consistency model -- for instance, when I'm writing a backup for later use, I only care that my writes land in the same order. I imagine such an effect system could also help write software for CC-NUMA architectures or shared-memory distributed systems.

--
Thanks!
Marcin Kosiba
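To sketch the kind of effect-annotated file API Marcin describes -- a minimal phantom-type encoding, with all names (FileM, ReadEff, bothReads, ...) invented for illustration -- reads can be tagged as commuting, so a library is free to reorder or overlap them, while writes to the same medium stay in program order:

{-# LANGUAGE EmptyDataDecls #-}

-- Hypothetical effect labels, encoded as phantom types.
data ReadEff
data WriteEff

-- A file action tagged with the effect it performs.
newtype FileM e a = FileM { runFileM :: IO a }

readChunk :: FilePath -> FileM ReadEff String
readChunk path = FileM (readFile path)

writeChunk :: FilePath -> String -> FileM WriteEff ()
writeChunk path s = FileM (writeFile path s)

-- Two reads commute, so a library implementing this combinator is
-- free to reorder the two actions, or run them in parallel.
bothReads :: FileM ReadEff a -> FileM ReadEff b -> FileM ReadEff (a, b)
bothReads (FileM a) (FileM b) = FileM (do x <- a
                                          y <- b
                                          return (x, y))

-- Writes to the same medium do not commute, so the only combinator
-- exported for them preserves the program's order.
thenWrite :: FileM WriteEff () -> FileM WriteEff () -> FileM WriteEff ()
thenWrite (FileM a) (FileM b) = FileM (a >> b)

Nothing here measures the physical media, of course: the types only record which reorderings are semantically legal, leaving the question of whether a given reordering is profitable to the scheduler -- or, per John's point, to a JIT.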