
From GHC documentation: "Once profiling has thrown the spotlight on the guilty time-consumer(s), it may be better to re-think your program than to try all the tweaks listed below." So, how should I rethink my program? Which way to take?

szefirov:
From GHC documentation: "Once profiling has thrown the spotlight on the guilty time-consumer(s), it may be better to re-think your program than to try all the tweaks listed below."
So, how should I rethink my program? Which way to take?
Do you have some particular code that is underperforming? Performance tips are documented here: http://haskell.org/haskellwiki/Performance Happy coding! :) -- Don
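To illustrate the kind of tip found on that wiki page, here is a minimal sketch of the classic lazy-accumulator space leak and its standard fix; `sumLazy` and `sumStrict` are illustrative names, not code from the thread.

```haskell
import Data.List (foldl')

-- Lazy foldl builds a chain of unevaluated thunks ((0+1)+2)+3)...
-- proportional to the input length, which can exhaust the heap.
sumLazy :: [Integer] -> Integer
sumLazy = foldl (+) 0

-- foldl' forces the accumulator at each step,
-- so the sum runs in constant space.
sumStrict :: [Integer] -> Integer
sumStrict = foldl' (+) 0

main :: IO ()
main = print (sumStrict [1 .. 1000000])
```

Both functions compute the same result; the difference only shows up in memory behaviour on large inputs.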

Donald Bruce Stewart wrote:
szefirov:
From GHC documentation: "Once profiling has thrown the spotlight on the guilty time-consumer(s), it may be better to re-think your program than to try all the tweaks listed below."
So, how should I rethink my program? Which way to take?
Do you have some particular code that is underperforming?
I have plenty of it. ;) I'm yet to decide what to blame.
Performance tips are documented here:
Thank you. I loaded it the next second I received your answer. ;)
I profiled my program and found that residency looks pretty fixed, but program memory usage grows and eventually I get a heap overflow (on Windows) or heavy pagefile thrashing (on Linux).
When I turn on +RTS -c to use heap compaction I immediately get the following:
-----------------------------
xxxx.exe: internal error: scavenge_mark_stack: unimplemented/strange closure type 30 @ 03678268
Please report this as a bug to glasgow-haskell-bugs@haskell.org, or http://www.sourceforge.net/projects/ghc/
-----------------------------
This has already been reported as a bug, but it isn't fixed yet. The bug is right here: http://cvs.haskell.org/trac/ghc/ticket/954
It appears with 6.4.1 too. So I am trying as hard as I can to reduce the amount of garbage produced. Not much luck so far.

szefirov:
I profiled my program and found that residency looks pretty fixed, but program memory usage grows and eventually I get a heap overflow (on Windows) or heavy pagefile thrashing (on Linux).
When I turn on +RTS -c to use heap compaction I immediately get the following:
-----------------------------
xxxx.exe: internal error: scavenge_mark_stack: unimplemented/strange closure type 30 @ 03678268
Please report this as a bug to glasgow-haskell-bugs@haskell.org, or http://www.sourceforge.net/projects/ghc/
-----------------------------
This has already been reported as a bug, but it isn't fixed yet. The bug is right here: http://cvs.haskell.org/trac/ghc/ticket/954
It does appear with 6.4.1 too.
Can you reproduce this bug using ghc 6.6? If so, please submit a test case so this can be reproduced and fixed. Either annotate the existing bug (if you think it's the same one), or create a new bug report: http://hackage.haskell.org/trac/ghc/newticket?type=bug
So I am trying as hard as I can to reduce the amount of garbage produced. Not much luck so far.
Ok. I doubt anyone can help here though, unless you make the code available :) Is it possible to put the code online? Is it available via darcs? What does your profiling output look like? (The .prof file). What code is doing the most allocation? -- Don
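One way to answer "what code is doing the most allocation" is to mark suspect expressions with cost-centre annotations, so the .prof report attributes time and allocation to them by name. A minimal sketch (the function names here are made up for illustration; the compile flags are the GHC 6.x spellings):

```haskell
-- Compile with: ghc -prof -auto-all Main.hs
-- Run with:     ./Main +RTS -p
-- The run writes Main.prof, listing time and allocation per cost centre.
-- SCC pragmas are ignored when compiling without -prof.

buildList :: Int -> [Int]
buildList n = {-# SCC "buildList" #-} [x * x | x <- [1 .. n]]

consume :: [Int] -> Int
consume xs = {-# SCC "consume" #-} sum xs

main :: IO ()
main = print (consume (buildList 1000))
```

With `-auto-all`, GHC also adds a cost centre to every top-level binding automatically, so explicit SCC pragmas are only needed to single out sub-expressions.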

Hello szefirov, Thursday, December 14, 2006, 5:24:11 PM, you wrote:
When I turn on +RTS -c to use heap compaction I immediately get the following: ----------------------------- xxxx.exe: internal error: scavenge_mark_stack: unimplemented/strange
this bug was fixed on Nov 15, so you should just download an up-to-date GHC
snapshot:
Wed Nov 15 05:50:20 PST 2006 Ian Lynagh

Hello szefirov, Thursday, December 14, 2006, 4:18:37 PM, you wrote:
From GHC documentation: "Once profiling has thrown the spotlight on the guilty time-consumer(s), it may be better to re-think your program than to try all the tweaks listed below."
So, how should I rethink my program? Which way to take?
i think they mean the usual change-your-algorithm stuff -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
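As an example of the "change your algorithm" advice, here is a common rewrite: `Data.List.nub` compares each element against every earlier one, which is quadratic, while tracking seen elements in a `Data.Set` brings it down to O(n log n). The function name `uniqueOrd` is illustrative, not from the thread.

```haskell
import qualified Data.Set as Set

-- Remove duplicates while preserving first-occurrence order,
-- like nub does, but using a Set of seen elements: O(n log n)
-- instead of nub's O(n^2). Requires Ord rather than just Eq.
uniqueOrd :: Ord a => [a] -> [a]
uniqueOrd = go Set.empty
  where
    go _ [] = []
    go seen (x:xs)
      | x `Set.member` seen = go seen xs
      | otherwise           = x : go (Set.insert x seen) xs

main :: IO ()
main = print (uniqueOrd [3, 1, 3, 2, 1])
```

The extra `Ord` constraint is the price of the speedup; when only `Eq` is available, the quadratic `nub` is the fallback.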
participants (3)
-
Bulat Ziganshin
-
dons@cse.unsw.edu.au
-
szefirov@ot.ru