Re: Contents of Glasgow-haskell-users Digest, Vol 17, Issue 8

Hi Bill,
In order to force the *complete* evaluation of your result, you
could use Evaluation Strategies. Strategies are a concept
introduced for increasing parallelism in Glasgow parallel Haskell.
Parallelism and lazy evaluation are in a way contrary aims, since you
want your parallel evaluation to start ASAP, even if there is no
actual demand on its result.
Check the following paper for more information:
http://www.macs.hw.ac.uk/~dsg/gph/papers/html/Strategies/strategies.html
A Strategies module is included in the GHC libraries:
formerly just module "Strategies" in package concurrent,
now module "Control.Parallel.Strategies" in package base.
Concretely, I suggest applying the strategy "rnf" (reduce to normal form)
to "junk" instead of just checking the first node in the graph.
What null does is look at the list and yield True if the top constructor is
"[]" rather than "(:)" - the elements themselves are never evaluated.
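A tiny self-contained sketch of this point (the toy lists are mine, purely for illustration): since null only forces the outermost constructor, even a list of undefined elements can be tested safely:

```haskell
-- null forces only the outermost constructor of the list;
-- the elements themselves are never evaluated, so undefined is safe here.
main :: IO ()
main = do
  print (null ([undefined, undefined] :: [Int]))  -- (:) on top => False
  print (null ([] :: [Int]))                      -- [] on top  => True
```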
Your version below is what you get with just "junk `seq` runNReps f x (todo-1)"
- weak head normal form evaluation.
runNReps f x todo
  | todo > 0  = let junk = f x
                in rnf junk `seq` runNReps f x (todo - 1)
  | otherwise = return ()
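Spelled out as a self-contained sketch (the hand-rolled rnfList and the toy argument function are mine, standing in for the library's rnf, which works for arbitrary data types; for element types like Int, seq on each element already reaches normal form):

```haskell
-- Sketch: force the whole list (spine and elements) before recursing,
-- instead of checking only the first node with null.
rnfList :: [a] -> ()
rnfList []     = ()
rnfList (x:xs) = x `seq` rnfList xs

runNReps :: (Int -> [a]) -> Int -> Int -> IO ()
runNReps f x todo
  | todo > 0  = let junk = f x
                in rnfList junk `seq` runNReps f x (todo - 1)
  | otherwise = return ()

main :: IO ()
main = runNReps (\n -> [1 .. n]) 1000 5 >> putStrLn "done"
```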
HTH
Jost
glasgow-haskell-users-request@haskell.org wrote:
Message: 7 Date: Mon, 17 Jan 2005 14:21:00 -0600
From: "jekwtw"
runNReps :: (Int -> [a]) -> Int -> Int -> IO ()
runNReps f x todo
  | todo > 0  = do let junk = f x
                   if null junk
                     then return ()
                     else runNReps f x (todo - 1)
  | otherwise = return ()
Ideas? Again, many thanks, -- Bill Wood bill.wood@acm.org

Jost Berthold wrote:
In order to force the *complete* evaluation of your result, you could use Evaluation Strategies. Strategies are a concept introduced for increasing parallelism in Glasgow parallel Haskell. Parallelism and lazy evaluation are in a way contrary aims, since you want your parallel evaluation to start ASAP, even if there is no actual demand on its result.
I think this is not quite right... surely you don't want to run the function (even in parallel) if there is _no_ demand on its result. The compiler will know at compile time whether the result is required in a lot of cases; the only ones that cause problems are where program flow depends on IO actions. In that case you implement speculative execution (like a modern CPU does): if you don't know whether a function result will be required and you have a free execution unit, you run the function. If at a later time it becomes apparent that the results of some running functions will not be required, you kill them, throw away the data, and free the execution unit to do something more useful.
Keean.

Hi Keean, Keean Schupke wrote:
Jost Berthold wrote:
In order to force the *complete* evaluation of your result, you could use Evaluation Strategies. Strategies are a concept introduced for increasing parallelism in Glasgow parallel Haskell. Parallelism and lazy evaluation are in a way contrary aims, since you want your parallel evaluation to start ASAP, even if there is no actual demand on its result.
I think this is not quite right... surely you don't want to run the function (even in parallel) if there is _no_ demand on its result.
In the given context, I just thought I could point the readers to a nice solution for a concrete problem (not related to parallelism at all) with the cited paper.

Just to explain a bit further what I wanted to summarize in one sentence in the first mail (the thoughts behind the word "actual" above): You are right: of course, a sub-result which is not needed at all ("no demand") should not be evaluated. However, there are cases where a programmer wants to point at some sub-result to say "Hey, you will need this in a minute" / "Please compute this sub-result on a different machine [if any is idle]" (semi-explicit or explicit parallelism), which is the state of the art of today's parallel Haskell extensions.

Simplistic classical example: naive parallel map, one process per element. If the main function needs the list, it will create demand for *one element after the other*, which leads to purely sequential execution.

let neededList = map f input          -- computation, looks sequential
                 `using` parList rnf  -- strategy: completely evaluate elements in parallel
in  mungeForResult neededList

Without the strategy, the list would surely be processed element by element, since this is the way you access lists in Haskell. Strategies are a way to encode "evaluate this in parallel, to this degree" without interfering with the computation you specify. The outcome of the cited paper is exactly this point.
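For readers without the parallel libraries at hand, here is a self-contained sketch of that example in the shape of the old Strategies API. The `par` below is a sequential stand-in defined only so the sketch compiles on its own (the real par, from Control.Parallel, sparks its first argument for evaluation on another processor), and the worker function and input are invented for illustration:

```haskell
-- Stand-in for the real par: the real one sparks `a` for parallel
-- evaluation; this sequential version just returns b.
par :: a -> b -> b
par _ b = b

-- The old Strategies API shape: a strategy evaluates a value to some
-- degree and returns unit.
type Strategy a = a -> ()

-- Reduce to weak head normal form (the list analogue of rnf would
-- additionally recurse into the elements).
rwhnf :: Strategy a
rwhnf x = x `seq` ()

-- Apply a strategy to every list element, sparking each in parallel.
parList :: Strategy a -> Strategy [a]
parList _     []     = ()
parList strat (x:xs) = strat x `par` parList strat xs

-- Attach a strategy to a computation without changing its value.
using :: a -> Strategy a -> a
using x strat = strat x `seq` x

main :: IO ()
main = do
  let neededList = map (* 2) [1 .. 10 :: Int]  -- looks sequential
                     `using` parList rwhnf     -- elements sparked in parallel
  print (sum neededList)  -- 110
```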
The compiler will know at compile time whether the result is required in a lot of cases; the only ones that cause problems are where program flow depends on IO actions. In that case you implement speculative execution (like a modern CPU does): if you don't know whether a function result will be required and you have a free execution unit, you run the function. If at a later time it becomes apparent that the results of some running functions will not be required, you kill them, throw away the data, and free the execution unit to do something more useful.
Yes: the compiler could do a strictness analysis and hopefully (being a safe analysis) tell whether "neededList" is needed by "mungeForResult". In the case of algebraic data structures (like lists), things get a bit more complex (different degrees of strictness); special data-parallel language concepts weave such an automatism into the data structures they aim at. But apparently, the programmer should know very well if this is the case, and she may explicitly opt for speculative evaluation, or not. Explicit or "implemented" (which means, in a way, automatic): garbage collection in a parallel system is able to detect unneeded results and will stop the computation in this case.
Cheers
Jost

Jost Berthold wrote:
execution unit to do something more useful.
Yes: the compiler could do a strictness analysis and hopefully (being a safe analysis) tell whether "neededList" is needed by "mungeForResult". In the case of algebraic data structures (like lists), things get a bit more complex (different degrees of strictness); special data-parallel language concepts weave such an automatism into the data structures they aim at. But apparently, the programmer should know very well if this is the case, and she may explicitly opt for speculative evaluation, or not. Explicit or "implemented" (which means, in a way, automatic): garbage collection in a parallel system is able to detect unneeded results and will stop the computation in this case.
I wait for the day the compiler does it all for us... One of the reasons for adopting Haskell is the way functional languages parallel hardware implementations (in VHDL, recursion = registers and parameters = wires). With an imperative language all the side effects get in the way (side effects = memory access over a bus).

It seems to me the compiler should sort out all the possible parallelisations and static dependencies, and the RTS should deal with dynamic dependencies and speculative execution... It would be nice if the compiler would also calculate a cost metric for functions, so that the RTS could make intelligent architecture-dependent decisions on whether to run a dependency sequentially on the current CPU, or in parallel on another. But of course to me the beauty is that not one line of source code should need to be modified...

Keean.