
When reading http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html I got the impression that when I want to compute in parallel, I have to suppress laziness, otherwise only a negligible portion of the code runs in parallel.

How can I parallelize the computation of two lazily generated lists, where generating the lists is expensive but combining them is cheap? For example:

```haskell
module Main where

expensiveList :: Int -> [Int]
expensiveList n = map (\m -> sum [n..m]) [10000000..]

sequentialZip :: [Int]
sequentialZip = zipWith (+) (expensiveList 1) (expensiveList 2)

main :: IO ()
main = mapM_ print $ take 10 sequentialZip
```

It seems to me that this program should run almost twice as fast on two cores, because the two expensive lists can be computed perfectly in parallel. This requires, however, that zipWith can fetch data lazily across threads. Yet applying a parallel Strategy to an expensive list (as in the sketch below) will certainly try to evaluate it completely.
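
For concreteness, here is what I imagine the naive attempt would look like (just a sketch; it assumes Control.Parallel.Strategies from the parallel package, and naiveParallelZip is a name I made up). As far as I understand, parList walks the entire spine of the list, so on an infinite list it would never return and the program would never print anything:

```haskell
module Main where

import Control.Parallel.Strategies (parList, rseq, using)

expensiveList :: Int -> [Int]
expensiveList n = map (\m -> sum [n..m]) [10000000..]

-- Hypothetical naive attempt: spark every element of each list.
-- parList traverses the whole spine, so on an infinite list
-- `using` never returns and no output is ever produced.
naiveParallelZip :: [Int]
naiveParallelZip =
  zipWith (+)
    (expensiveList 1 `using` parList rseq)
    (expensiveList 2 `using` parList rseq)

main :: IO ()
main = mapM_ print $ take 10 naiveParallelZip
```

So the question is: how do I spark the work for the list elements in parallel without forcing the whole (infinite) lists?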