failure implementing :next command in ghci

Hi,

So I wanted to give implementing the :next GHCi debugger command a shot. It looked easy, and I could use it. Moreover, it would give me an easy way to implement a dynamic stack in GHCi (using a similar approach to the one used for :trace) ... well, if I still feel like it, since I got a bit discouraged about it. The problem is I failed miserably. I still think it is easy to do; I just do not know how to create correct arguments for rts_breakpoint_io_action, and I have given up figuring it out myself for now.

The proposed meaning for :next

Let's call the dynamic stack size at a breakpoint (at which we issue :next) breakStackSize, and its selected expression breakSpan. Then :next would single-step until either of these is true:
1) the current dynamic stack size is smaller than breakStackSize
2) the current dynamic stack size is equal to breakStackSize and the current selected expression is not a subset of breakSpan

I hope the above makes good sense, but I do not really know, since maybe the RTS does some funny things with the stack sometimes. If you think the proposed behavior is garbage, let me know why, so that I do not waste more time on this :)

OK, let's get back to why I failed. I think anybody who knows the RTS well could probably tell me what's wrong in a few minutes. The patch representing my attempt is attached. It is done against the latest GHC (head branch). I want to add the stack size as the last argument of rts_breakpoint_io_action, so its signature would change from:

    Bool -> BreakInfo -> HValue -> IO ()

to:

    Bool -> BreakInfo -> HValue -> Int -> IO ()

Since the dynamic stack is contiguous, I can find out the stack size easily. I have not implemented this yet, nor have I implemented it at all for exceptions. The only thing I cared about for now is passing one more integer to rts_breakpoint_io_action. The argument contains only zero now, but that should be enough to see whether I can add one more argument.

I tested it by loading this source code into GHCi:

    f :: Int -> Int
    f x = x + 1

    a = f 1

... then I used ":break f" and ":force a" in GHCi to see whether I can pass the new argument correctly. This tests it, since I added printing of the last argument (the would-be stack size) in noBreakAction:

    noBreakAction :: Bool -> BreakInfo -> HValue -> Int -> IO ()
    noBreakAction False _ _ x = putStrLn $ "*** Ignoring breakpoint " ++ show x
    noBreakAction True  _ _ _ = return () -- exception: just continue

The noBreakAction implementation is just a test for now. Unfortunately, when I force the last argument, it crashes. I think it is because I do not create the closure for it correctly in the code for bci_BRK_FUN in rts/Interpreter.c. Can somebody tell me what is wrong there, or where to find more information about how to fill in the stack with rts_breakpoint_io_action arguments correctly?

Also, is there any information somewhere about where to use allocate and where to use allocateLocal? I was surprised a bit that interpretBCO uses allocate a lot but never allocateLocal, which is supposed to be quicker for a single thread. I skimmed all of the GHC commentary and read the pages that looked related carefully, but either it is not there or I missed it :-(
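To make the proposed stop condition concrete, here is a small self-contained Haskell sketch; the Span type and all the names are made up for illustration and are not GHCi's real BreakInfo/SrcSpan plumbing:

    -- Sketch of the proposed :next stop condition (illustrative only;
    -- none of these types are GHCi's real ones).
    data Span = Span { spanStart :: (Int, Int)  -- (line, column) where it starts
                     , spanEnd   :: (Int, Int)  -- (line, column) where it ends
                     } deriving (Eq, Show)

    -- 'inner' is a subset of 'outer' if it starts no earlier and ends no later.
    subSpanOf :: Span -> Span -> Bool
    subSpanOf inner outer =
      spanStart outer <= spanStart inner && spanEnd inner <= spanEnd outer

    -- Keep single-stepping until this returns True.
    shouldStop :: Int -> Span -> Int -> Span -> Bool
    shouldStop breakStackSize breakSpan curStackSize curSpan =
         curStackSize < breakStackSize                    -- condition 1)
      || (curStackSize == breakStackSize
          && not (curSpan `subSpanOf` breakSpan))         -- condition 2)

Thanks,
Peter.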

Hi,

Pepe Iborra pointed out that my patch is not in the right format for the GNU patch command. Sorry for the inconvenience (I used "darcs what -u" instead of "darcs diff -u"). Here it is attached in the correct format.

Thanks,
Peter.

Hi,

Maybe the code adding one Int argument to rts_breakpoint_io_action is correct in general, since when Pepe Iborra applied the patch to his GHC trunk, the test did not crash on his machine. Regardless, on my machine the test does not work even with the stock GHC 6.10.2 sources (that is, even without my patch applied). Here is how I ran the test on my 32-bit Arch Linux:

* downloaded ghc-6.10.2-src.tar.bz2 from http://haskell.org/ghc/download_ghc_6_10_2.html#sources (I did not download the extralibs tarball)
* unpacked ghc-6.10.2-src.tar.bz2 and did this in the ghc-6.10.2 directory:

    ./boot
    ./configure
    make

* then I ran this test:

    status:0 peter@metod [892] ~/haskell/ghc-6.10.2 % cat a.hs
    f :: Int -> Int
    f x = x + 1

    a = f 1
    status:0 peter@metod [893] ~/haskell/ghc-6.10.2 % ghc/stage2-inplace/ghc --interactive a.hs
    GHCi, version 6.10.2: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer ... linking ... done.
    Loading package base ... linking ... done.
    [1 of 1] Compiling Main             ( a.hs, interpreted )
    Ok, modules loaded: Main.
    *Main> :break f
    Breakpoint 0 activated at a.hs:2:0-10
    *Main> :force a
    zsh: segmentation fault  ghc/stage2-inplace/ghc --interactive a.hs
    status:139 peter@metod [894] ~/haskell/ghc-6.10.2 %

The test works fine when I do it with the custom GHC I have installed (6.10.1 with a few of my patches). The behavior is the same on my laptop, which has only a stock, up-to-date Arch Linux with a stock GHC 6.10.1 installed (so I do not think it can be because of my few patches to GHC 6.10.1). I did a clean build and the test there too. Well, to be precise, it works worse on my laptop, since when I run ghci 6.10.1 (as distributed by Arch Linux) I get a crash too:

    status:0 peter@dwarf [852] ~/haskell/ghc-6.10.2 % ghci a.hs
    GHCi, version 6.10.1: http://www.haskell.org/ghc/  :? for help
    Loading package ghc-prim ... linking ... done.
    Loading package integer ... linking ... done.
    Loading package base ... linking ... done.
    [1 of 1] Compiling Main             ( a.hs, interpreted )
    Ok, modules loaded: Main.
    *Main> :break f
    Breakpoint 0 activated at a.hs:2:0-10
    *Main> :force a%
    status:139 peter@dwarf [853] ~/haskell/ghc-6.10.2 %

The question is: is the test supposed to work with GHC 6.10.2 without installing it? I hope I am making some stupid mistake and that Arch Linux is not broken. What platform (distribution and its version) does GHC HQ use for GHC development (on which the test I presented works)?

Thanks,
Peter.

Peter Hercek wrote:
> Hi,
>
> So I wanted to give implementing the :next GHCi debugger command a shot. It looked easy, and I could use it. Moreover, it would give me an easy way to implement a dynamic stack in GHCi (using a similar approach to the one used for :trace) ... well, if I still feel like it, since I got a bit discouraged about it. The problem is I failed miserably. I still think it is easy to do; I just do not know how to create correct arguments for rts_breakpoint_io_action, and I have given up figuring it out myself for now.
>
> The proposed meaning for :next
>
> Let's call the dynamic stack size at a breakpoint (at which we issue :next) breakStackSize, and its selected expression breakSpan. Then :next would single-step until either of these is true:
> 1) the current dynamic stack size is smaller than breakStackSize
> 2) the current dynamic stack size is equal to breakStackSize and the current selected expression is not a subset of breakSpan
So what happens if the stack shrinks and then grows again between two breakpoints? Presumably :next wouldn't notice. I think you'd be better off implementing this with a special stack frame, so that you can guarantee to notice when the current "context" has been exited. Still, I'm not completely sure that :next really makes sense... but I haven't thought about it in any great detail.
> I hope the above makes good sense, but I do not really know, since maybe the RTS does some funny things with the stack sometimes. If you think the proposed behavior is garbage, let me know why, so that I do not waste more time on this :)
Yes, the RTS does do "funny things" with the stack sometimes. The stack can shrink as a result of adjacent update frames being coalesced ("stack squeezing"). Using a stack frame instead of a "low water mark" would be immune to this.
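To see how a size-based check can be fooled, here is a toy Haskell model of squeezing; the Frame type is made up, and the real RTS coalesces STG update frames in C:

    -- Toy model: adjacent update frames collapse into one, so the stack
    -- can shrink with no breakpoint firing in between.
    data Frame = UpdateFrame | OtherFrame deriving (Eq, Show)

    squeeze :: [Frame] -> [Frame]
    squeeze (UpdateFrame : UpdateFrame : rest) = squeeze (UpdateFrame : rest)
    squeeze (f : rest)                         = f : squeeze rest
    squeeze []                                 = []

    -- squeeze [UpdateFrame, UpdateFrame, OtherFrame]
    --   == [UpdateFrame, OtherFrame]
    -- The stack got shorter, which defeats a "low water mark" comparison.

Cheers,
Simon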

Simon Marlow wrote:
> Peter Hercek wrote:
>> The proposed meaning for :next
>> Let's call the dynamic stack size at a breakpoint (at which we issue :next) breakStackSize, and its selected expression breakSpan. Then :next would single-step until either of these is true:
>> 1) the current dynamic stack size is smaller than breakStackSize
>> 2) the current dynamic stack size is equal to breakStackSize and the current selected expression is not a subset of breakSpan
> So what happens if the stack shrinks and then grows again between two breakpoints? Presumably :next wouldn't notice.
Yes, if there is no breakpoint in between, I would not notice. I did not expect this could happen, though. I thought that to add a frame to the stack, this must be done within an expression (which is going to be forced), and the expression should be a breakpoint location. If something is so negligible that it does not have a breakpoint location associated with it, then even the things it calls should be negligible. Where is the error in this?
> I think you'd be better off implementing this with a special stack frame, so that you can guarantee to notice when the current "context" has been exited.
This would be robust, but I do not have the knowledge to do it yet. If I understand you correctly, this means that before executing a BCO which we are going to break at, we must prepare for a possible :next. So regardless of whether :next is going to be issued by the user or not, we would add a frame which represents a return to a function which:
a) if the user issued :next, enables all breakpoints so that we stop at the next breakpoint
b) if the user did not issue :next, does not do anything (just returns)

We could decide not to insert the frame when we are only tracing. But if I wanted to track a simulated dynamic stack, I would need to insert this stack frame at the start of each breakpoint (when dynamic stack tracing is on). That does not sound so good. (A toy model of the frame's return action follows below.)
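Roughly, I imagine the frame's return action behaving like this toy Haskell model; every name here is made up and is not real GHCi or RTS API:

    -- Toy model of the special frame's return action (all names are
    -- made up; the real frame would be pushed by the RTS interpreter).
    import Control.Monad (when)
    import Data.IORef (IORef, readIORef)

    -- Stand-in for GHCi's real "enable all breakpoints" machinery.
    enableAllBreakpoints :: IO ()
    enableAllBreakpoints = putStrLn "(would re-enable all breakpoints here)"

    -- Called when control returns through the special frame.
    onFrameReturn :: IORef Bool  -- set to True when the user issues :next
                  -> IO ()
    onFrameReturn nextRequested = do
      issued <- readIORef nextRequested
      when issued enableAllBreakpoints  -- case a): stop at the next breakpoint
      -- case b): no :next was issued; just return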
>> I hope the above makes good sense, but I do not really know, since maybe the RTS does some funny things with the stack sometimes. If you think the proposed behavior is garbage, let me know why, so that I do not waste more time on this :)
> Yes, the RTS does do "funny things" with the stack sometimes. The stack can shrink as a result of adjacent update frames being coalesced ("stack squeezing").
OK, so it looks like the options are either switching off the squeezing (if shrinking and growing the stack between breakpoints/ticks is not such an issue) or inserting a stack frame. Does the "stack squeezing" happen during garbage collection?

Thanks,
Peter.

2009/4/20 Peter Hercek:
> Simon Marlow wrote:
>> Peter Hercek wrote:
>>> The proposed meaning for :next
>>> Let's call the dynamic stack size at a breakpoint (at which we issue :next) breakStackSize, and its selected expression breakSpan. Then :next would single-step until either of these is true:
>>> 1) the current dynamic stack size is smaller than breakStackSize
>>> 2) the current dynamic stack size is equal to breakStackSize and the current selected expression is not a subset of breakSpan
>> So what happens if the stack shrinks and then grows again between two breakpoints? Presumably :next wouldn't notice.
> Yes, if there is no breakpoint in between, I would not notice. I did not expect this could happen, though. I thought that to add a frame to the stack, this must be done within an expression (which is going to be forced), and the expression should be a breakpoint location. If something is so negligible that it does not have a breakpoint location associated with it, then even the things it calls should be negligible. Where is the error in this?
You can't assume that there are breakpoints everywhere. Compiled code doesn't have breakpoints, for example. Even in interpreted code, there is compiler-generated code that doesn't have breakpoints in it, and after transformations breakpoints may have moved around.
>> I think you'd be better off implementing this with a special stack frame, so that you can guarantee to notice when the current "context" has been exited.
> This would be robust, but I do not have the knowledge to do it yet. If I understand you correctly, this means that before executing a BCO which we are going to break at, we must prepare for a possible :next. So regardless of whether :next is going to be issued by the user or not, we would add a frame which represents a return to a function which:
> a) if the user issued :next, enables all breakpoints so that we stop at the next breakpoint
> b) if the user did not issue :next, does not do anything (just returns)
Yes, exactly. Although we have to worry about stack growth: we don't want the addition of a new stack frame to change constant stack usage into linear stack usage, so perhaps we would have to avoid pushing these frames directly on top of each other.
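For instance, something along these lines; this is a toy model with a made-up Frame type, whereas the real check would have to be done on STG stack frames inside the RTS:

    -- Toy model: only push a :next sentinel when one is not already on
    -- top, so repeated preparation cannot grow the stack linearly.
    data Frame = NextSentinel | OtherFrame deriving (Eq, Show)

    pushSentinel :: [Frame] -> [Frame]
    pushSentinel stack@(NextSentinel : _) = stack              -- already guarded
    pushSentinel stack                    = NextSentinel : stack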
> We could decide not to insert the frame when we are only tracing. But if I wanted to track a simulated dynamic stack, I would need to insert this stack frame at the start of each breakpoint (when dynamic stack tracing is on). That does not sound so good.
>>> I hope the above makes good sense, but I do not really know, since maybe the RTS does some funny things with the stack sometimes. If you think the proposed behavior is garbage, let me know why, so that I do not waste more time on this :)
>> Yes, the RTS does do "funny things" with the stack sometimes. The stack can shrink as a result of adjacent update frames being coalesced ("stack squeezing").
> OK, so it looks like the options are either switching off the squeezing (if shrinking and growing the stack between breakpoints/ticks is not such an issue) or inserting a stack frame. Does the "stack squeezing" happen during garbage collection?
It can happen at any time, so yes.

Cheers,
Simon