profiling for stack usage

Hi again, I'm currently in a situation where my program easily runs out of stack. Depending on the input, stack usage often exceeds 10Mb.

1. Is there a way to profile stack usage, so that I can identify the culprit and deal with the problem at the root? Normal (time) profiling tells me how many times a function is called, but it would be interesting to know how many times it was called recursively, or the size of its stack frames. Is that information available? Heap profiling -- well, it doesn't *sound* as if it would incorporate the stack; does it anyway?

2. Is there a way to compile programs to use more than the default stack? I can of course pass +RTS -K10M -RTS on the command line, but I would rather change the default, instead of kicking myself for forgetting it all the time. And is there any reason (except excessive resource consumption and postponed failure from infinite loops) not to run with huge stacks?

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

I'm currently in a situation where my program runs easily out of stack. Depending on the input, stack usage often exceeds 10Mb.
1. Is there a way to profile stack usage, so that I can identify the culprit and deal with the problem at the root? Normal (time) profiling tells me how many times a function is called, but it would be interesting to know how many times it was recursively called, or the size of its stack frames. Is that information available?
Heap profiling -- well, it doesn't *sound* as if it would incorporate the stack; does it anyway?
See: http://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#RTS-O...
in particular, the -xt flag.
2. Is there a way to compile programs to use more than the default stack? I can of course pass +RTS -K10M -RTS on the command line, but I would rather like to change the default, instead of kicking myself for forgetting it all the time. And is there any reason (except excessive resource consumption and postponed failure from infinite loops) not to run with huge stacks?
See: http://www.haskell.org/ghc/docs/latest/html/users_guide/runtime-control.html...

Cheers,
Simon "the only reason I write docs is so I can say RTFM" Marlow

"Simon Marlow" writes:
1. Is there a way to profile stack usage, so that I can identify the culprit and deal with the problem at the root?
http://www.haskell.org/ghc/docs/latest/html/users_guide/prof-heap.html#RTS-O...
in particular, the -xt flag.
Hmm, sorry for being so dense, but I'm having a tough time with this. It seems profiling by itself tends to blow the stack -- is that correct/normal behavior? (I'm also using -O2, if that matters.)

Is there any way to know what kind of data resides on the stack, or which function generated it? The TSO tells me its size, but isn't really helpful beyond that (or am I missing something?)

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants

I'm currently in a situation where my program runs easily out of stack. Depending on the input, stack usage often exceeds 10Mb.
I have better than 75% success locating the source of these bugs with the following command:

grep '+1' *.hs *.lhs

Reason: lazy arithmetic can easily cause you to build thunks that look like this:

1+1+1+1+ .... + 1 + 0

which take O(n) heap to store and O(n) stack to evaluate (with constant factors of around 10-20).

If you've already tried this approach, the tools SimonM mentions are well worth using.

--
Alastair Reid
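For concreteness, here is a minimal sketch of the thunk build-up described above; the function names are just for illustration, and the usual fix is the strict left fold from Data.List:

```haskell
import Data.List (foldl')

-- Lazy left fold: the accumulator is never forced, so it grows into the
-- unevaluated chain 1+1+1+ ... +1+0, which takes O(n) heap to store and
-- O(n) stack to finally evaluate.
lazySum :: [Int] -> Int
lazySum = foldl (+) 0

-- foldl' forces the accumulator at each step, so the additions are
-- performed as the list is consumed and the fold runs in constant space.
strictSum :: [Int] -> Int
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 1000000])
```

Compiled without optimisation, lazySum over a long enough list will hit exactly the stack limit that started this thread; foldl' (or an explicit seq on the accumulator) avoids it.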
participants (3)
- Alastair Reid
- ketil@ii.uib.no
- Simon Marlow