Hi there. Being rather new to the realm of Haskell and functional programming, I've been reading about how much easier it is to parallelize code in a purely functional language (am I right in saying that?).

My knowledge of parallelization is also very weak, but I've been thinking about this and I have a question. Let's say we take a small piece of code, like the quicksort algorithm.

> qsort []     = []
> qsort (x:xs) = qsort lesser ++ [x] ++ qsort greater
>   where lesser  = filter (< x) xs
>         greater = filter (>= x) xs

(I haven't actually tested this code; it's just an example.)

Here, the computations of "lesser" and "greater" are disjoint: we don't need the result of one to compute the other, and Haskell does not allow one of them to alter data in a way that would change the behaviour of the other. So I guess they could be run in parallel.
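For what it's worth, here is a rough, untested sketch of what I imagine explicit parallelism could look like here, using the `par` and `pseq` combinators from Control.Parallel (the parallel package), assuming I've understood their documentation correctly:

> import Control.Parallel (par, pseq)
>
> qsortPar :: Ord a => [a] -> [a]
> qsortPar []     = []
> qsortPar (x:xs) =
>   -- spark evaluation of "greater" (to WHNF) in parallel,
>   -- and force "lesser" before combining the results
>   greater `par` (lesser `pseq` (lesser ++ [x] ++ greater))
>   where lesser  = qsortPar (filter (< x) xs)
>         greater = qsortPar (filter (>= x) xs)

But that still relies on explicit annotations from the programmer, which brings me back to my question: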

Would it be fairly simple to automatically determine which parts of the code can be run in parallel and which cannot (at least in non-monadic code)?

So the final question is: if it is possible to automatically identify segments of code that can be run in parallel, does [insert compiler name here] compile this code as a single-threaded program, or as a multithreaded one?

I guess that on single-processor machines it is more efficient to run it as a single thread, but with the "many-core" processors that are coming, that probably won't be true for much longer.

Sorry if this post is just "blowing off open doors", i.e. stating the obvious (I guess that expression doesn't mean anything in English, but it sounds cool)...

Regards

Thomas Girod