
Hi there,

Does anyone know how to trace a GPH program to see the data processed on specific processors? I'm using gum-4.06, PVM3 and RedHat Linux 6.2. When I execute the following QuickSort program, I want to see what happens on the different CPUs, i.e. how the data are partitioned between processors. Are there any debugging tools to trace the data?

Thanks in advance.
Changming Ma

PS: QuickSort

module Main(main) where

import System (getArgs)
import Parallel

forceList :: [a] -> ()
forceList []     = ()
forceList (x:xs) = x `seq` forceList xs

quicksortF :: [Int] -> [Int]
quicksortF []     = []
quicksortF [x]    = [x]
quicksortF (x:xs) =
    (forceList losort) `par` (forceList hisort) `par` (losort ++ (x:hisort))
  where
    losort = quicksortF [y | y <- xs, y <  x]
    hisort = quicksortF [y | y <- xs, y >= x]

args_to_IntList :: [String] -> [Int]
args_to_IntList a =
    if length a < 1
      then error "Parallel Quick Sort: not enough args\n"
      else map read a

main = getArgs >>= \a ->
       let l = args_to_IntList a
       in  putStr ("get " ++ show (quicksortF l) ++ "\n")

[Forwarded to GpH list]
Hi,
About tracing and visualisation: You can produce per-PE activity
profiles that show what's going on on the different PEs (produced with the gr2pe tool).
Usually they show the load, as the number of threads in the
runnable queue on that PE. At the moment we don't have a more detailed
mechanism for showing the data distribution or the heap fragmentation.
There are indirect ways, such as producing a summary of the number
of GAs produced in the execution, but that info is a bit cryptic
and aimed more at implementors than at GPH programmers. For a
summary of the available visualisation tools, see the Gentle Intro to GPH:
http://www.cee.hw.ac.uk/~dsg/gph/docs/Gentle-GPH/gph-gentle-intro.html
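One indirect, language-level way to see how the data gets split (though not
which PE eventually evaluates each spark) is to instrument the splitting step
with trace. A minimal sketch, assuming trace is available as IOExts.trace in
GHC 4.x (Debug.Trace.trace in later GHCs), applied to the QuickSort below:

module Main (main) where

import System (getArgs)
import Parallel
import IOExts (trace)   -- assumption: trace lives in IOExts for GHC 4.x;
                        -- in later GHCs it is Debug.Trace.trace

forceList :: [a] -> ()
forceList []     = ()
forceList (x:xs) = x `seq` forceList xs

-- Instrumented quicksort: before sparking the two sub-sorts, report how
-- many elements fall below and above the pivot.  This only shows the
-- logical partitioning; the runtime still decides which PE runs each spark.
quicksortT :: [Int] -> [Int]
quicksortT []     = []
quicksortT [x]    = [x]
quicksortT (x:xs) =
    trace msg
      ((forceList losort) `par` (forceList hisort) `par` (losort ++ (x:hisort)))
  where
    lo     = [y | y <- xs, y <  x]
    hi     = [y | y <- xs, y >= x]
    msg    = "pivot " ++ show x ++ ": "
             ++ show (length lo) ++ " below, " ++ show (length hi) ++ " above"
    losort = quicksortT lo
    hisort = quicksortT hi

main = getArgs >>= \a -> putStr (show (quicksortT (map read a)) ++ "\n")

Note that building the message forces the lengths of both sublists in the
parent thread, so this changes the granularity slightly; it is only meant as
a debugging aid.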
About controlling distribution: The idea in GpH is that only the partitioning
of the program into threads is explicit, while the mapping to PEs and the
scheduling of threads on PEs are implicit. Therefore, you don't have
constructs for direct placement. I have toyed with some constructs that
give more control over the data distribution, but none of them is
mature enough to make it into the main release. You can do some
data clustering on the language level, however, and our HLPP
paper discusses how:
http://www.cee.hw.ac.uk/~dsg/gph/papers/abstracts/hlpp01.html
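To give a flavour of what such language-level clustering looks like, here is a
minimal sketch (an illustration only, not code from the paper), using nothing
beyond par and seq from the Parallel module: instead of one spark per element,
the input is split into chunks and one spark is created per chunk, so each
unit of work carries a block of data with it.

module Main (main) where

import Parallel

-- Split a list into chunks of (at most) n elements.
chunk :: Int -> [a] -> [[a]]
chunk _ [] = []
chunk n xs = ys : chunk n zs
  where (ys, zs) = splitAt n xs

-- Force the spine of a list, evaluating each element to WHNF.
forceList :: [a] -> ()
forceList []     = ()
forceList (x:xs) = x `seq` forceList xs

-- Map f over xs, creating one spark per chunk of n elements instead of
-- one spark per element, so each potential thread owns a block of data.
-- Which PE eventually evaluates a spark is still decided by the runtime.
parMapChunked :: Int -> (a -> b) -> [a] -> [b]
parMapChunked n f xs = concat (map doChunk (chunk n xs))
  where
    doChunk c = let c' = map f c
                in  forceList c' `par` c'

main :: IO ()
main = print (parMapChunked 100 (* 2) [1 .. 1000 :: Int])

Tuning the chunk size trades the number of sparks against the amount of data
each spark drags along, which is about as much influence over distribution as
you get without explicit placement.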
Alternatively, use GdH, which provides an explicit construct for
placing a computation on a particular machine:
http://www.cee.hw.ac.uk/~dsg/gdh/
Hope that helps!
--
HW
On Sun, 16 Jun 2002 11:25:31 -0700 (PDT), Ma Changming wrote:
Hi there,
Does anyone know how to trace a GPH program to see the data processed on specific processors?
I'm using gum-4.06, PVM3 and RedHat Linux 6.2. When I execute the following QuickSort program, I want to see what happens on the different CPUs, i.e. how the data are partitioned between processors. Are there any debugging tools to trace the data?
Thanks in advance.
Changming Ma
PS: QuickSort
module Main(main) where
import System (getArgs)
import Parallel

forceList :: [a] -> ()
forceList []     = ()
forceList (x:xs) = x `seq` forceList xs

quicksortF :: [Int] -> [Int]
quicksortF []     = []
quicksortF [x]    = [x]
quicksortF (x:xs) =
    (forceList losort) `par` (forceList hisort) `par` (losort ++ (x:hisort))
  where
    losort = quicksortF [y | y <- xs, y <  x]
    hisort = quicksortF [y | y <- xs, y >= x]

args_to_IntList :: [String] -> [Int]
args_to_IntList a =
    if length a < 1
      then error "Parallel Quick Sort: not enough args\n"
      else map read a

main = getArgs >>= \a ->
       let l = args_to_IntList a
       in  putStr ("get " ++ show (quicksortF l) ++ "\n")
--
participants (2)
- Hans-Wolfgang Loidl
- Ma Changming