
Hello folks, I'd like to know how GHC developers and users feel about the debugger. I'm using it to some extent on GHC 6.8.2 and find it useful. But I'm getting the impression that it's not too stable. I'm not filing any bug report yet, just want to know how it feels for others.

I used to make it panic; I think it was due to existential types.

Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings, where prevFunc absolutely has not yet been invoked. It's completely unrelated. In ':show bindings', prev has the wrong type, SomeType (it's an ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :)

So, what is your impression of the debugger?

-- Daniil Elovkov

Daniil Elovkov wrote:
I'd like to know how GHC developers and users feel about the debugger.
I'm using it to some extent on GHC 6.8.2 and find it useful. But I'm getting the impression that it's not too stable. I'm not filing any bug report yet, just want to know how it feels for others.
I used to make it panic; I think it was due to existential types.
Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings, where prevFunc absolutely has not yet been invoked. It's completely unrelated. In ':show bindings', prev has the wrong type, SomeType (it's an ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :)
So, what is your impression of the debugger?
Please file bug reports! Cheers, Simon

Daniil Elovkov wrote:
I'd like to know how GHC developers and users feel about the debugger.
Sometimes it is better/quicker than "printf debugging" :)
Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings, where prevFunc absolutely has not yet been invoked. It's completely unrelated. In ':show bindings', prev has the wrong type, SomeType (it's an ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :)
It is supposed to show only free variables in the selected expression. I'm sure I had cases where I was able to access variables which were not free in the selected expression, but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step towards getting all the variables in scope to be visible, but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again, should I file a bug report? I mean: is it really a bug? Peter.

Peter Hercek wrote:
Daniil Elovkov wrote:
I'd like to know how GHC developers and users feel about the debugger.
Sometimes it is better/quicker than "printf debugging" :)
Now I see it mess up the list of bindings in a funny way. For example, in a previous trace session I had a variable, say, prev. It was bound during pattern matching in a function, say, prevFunc. Now I'm having another trace session, actually stepping from the very beginning. A couple of steps after the beginning, prev suddenly appears in the bindings, where prevFunc absolutely has not yet been invoked. It's completely unrelated. In ':show bindings', prev has the wrong type, SomeType (it's an ADT). Its real type (when in scope) is [t]. I ask 'length prev' and get 0 :)
It is supposed to show only free variables in the selected expression. I'm sure I had cases where I was able to access variables which were not free in the selected expression, but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step towards getting all the variables in scope to be visible, but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again, should I file a bug report? I mean: is it really a bug?
At the least it's suspicious, and could be a bug. Please do report it, as long as you can describe exactly how to reproduce the problem. Cheers, Simon

It is supposed to show only free variables in the selected expression. I'm sure I had cases where I was able to access variables which were not free in the selected expression, but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step towards getting all the variables in scope to be visible, but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again, should I file a bug report? I mean: is it really a bug?
Perhaps someone could help me to understand how the debugger is supposed to be used, as I tend to have this problem, too:

- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)

- with only the current expression and some of its free variables accessible, I seem to be unable to use the debugger effectively (it would be less of a problem if the current expression could be displayed in partially reduced source form, instead of partially defined value form)

Currently, I only use the debugger rarely, and almost always have to switch to trace/etc to pin down what it is hinting at. What is the intended usage pattern that makes the debugger more effective/convenient than trace/etc, and usable without resorting to the latter?

Claus

Claus Reinke wrote:
It is supposed to show only free variables in the selected expression. I'm sure I had cases where I was able to access variables which were not free in the selected expression, but which would have been in scope if used in the selected expression. The values available seemed correct (contrary to your case). I thought it was a step towards getting all the variables in scope to be visible, but later I learned it is not feasible and my lucky experience was probably a bug. If I encounter it again, should I file a bug report? I mean: is it really a bug?
Perhaps someone could help me to understand how the debugger is supposed to be used, as I tend to have this problem, too:
- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)
I don't understand what you mean here - surely in order to "reconstruct what the current expression is" you only need to know the values of the free variables of that expression? Also I don't understand what you mean by the "next enclosing scope". Could you give an example?
- with only the current expression and some of its free variables accessible, I seem to be unable to use the debugger effectively (it would be less of a problem if the current expression could be displayed in partially reduced source form, instead of partially defined value form)
I don't know what "partially defined value form" is. The debugger shows the source code as... source code! I think I know what you mean though. You want the debugger to substitute the values of free variables in the source code, do some reductions, and show you the result. Unfortunately this is quite hard to implement in GHCi (but I agree it might be useful), because GHCi is not a source-level interpreter.
Currently, I only use the debugger rarely, and almost always have to switch to trace/etc to pin down what it is hinting at. What is the intended usage pattern that makes the debugger more effective/convenient than trace/etc, and usable without resorting to the latter?
Set a breakpoint on an expression that has the variable you're interested in free, and display its value. If your variable isn't free in the expression you want to set a breakpoint on, then you can add a dummy reference to it. Cheers, Simon
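Simon's "dummy reference" suggestion can be made concrete with a small sketch (function and names invented for illustration): mentioning the variable via `seq` makes it free in the branch of interest without changing the result, so the debugger will offer it for inspection.

```haskell
-- Hypothetical illustration of the "dummy reference" trick.
-- 'y' is not free in the first branch, so a breakpoint there would not
-- show it; 'y `seq` ...' adds a reference without changing the result.
f :: Int -> Int -> Int
f x y
  | x > 0     = y `seq` x * 2   -- 'y' is now inspectable at a breakpoint here
  | otherwise = y
```

Setting `:break` on the guarded right-hand side should now list both x and y among the bindings.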

- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)
I don't understand what you mean here - surely in order to "reconstruct what the current expression is" you only need to know the values of the free variables of that expression? Also I don't understand what you mean by the "next enclosing scope". Could you give an example?
If <expr> is the current expression:

  f a b = <expr>               -- 'a' and 'b' should be inspectable, independent of <expr>

  g x y = <expr> where a = ..  -- 'a' should be inspectable (ideally, 'x'/'y', too)
I don't know what "partially defined value form" is. The debugger shows the source code as... source code!
:list shows source code (without substitution)
:print shows partially defined values (partially defined value form, with thunks)
:force tries to evaluate, replacing uninspectable thunks with values

There is no way to see source with substituted variables, or unevaluated expressions. So there's an information gap between source and values. It is in this gap that many of the interesting/buggy reductions happen (at least in my code;-).
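To illustrate the gap for readers following along, a GHCi session might look roughly like this (the output format is reproduced from memory, so treat it as an approximation):

```
*Main> let xs = map (*2) [1,2,3] :: [Int]
*Main> :print xs
xs = (_t1::[Int])          -- whole value is a thunk: no source, no value
*Main> head xs `seq` ()
()
*Main> :print xs
xs = 2 : (_t2::[Int])      -- partially defined value form
*Main> :force xs
xs = [2,4,6]
```

None of these views shows the unevaluated expression `map (*2) [1,2,3]` itself - that is the information gap described above.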
I think I know what you mean though. You want the debugger to substitute the values of free variables in the source code, do some reductions, and show you the result. Unfortunately this is quite hard to implement in GHCi (but I agree it might be useful), because GHCi is not a source-level interpreter.
Yes, that is what I've been missing ever since I moved from my previous functional language to Haskell. But I agree that it would be hard to add as an afterthought. I have high hopes that the GHCi debugger + scope might go most of the way in terms of practical needs. I really think the GHCi debugger is a great step in the right direction, but without being able to inspect the context of the current expression (about which we can only inspect its value, evaluation status, and free variables) it is stuck short of fulfilling its potential. Unless I'm using it wrongly, which is why I was asking.
Currently, I only use the debugger rarely, and almost always have to switch to trace/etc to pin down what it is hinting at. What is the intended usage pattern that makes the debugger more effective/convenient than trace/etc, and usable without resorting to the latter?
Set a breakpoint on an expression that has the variable you're interested in free, and display its value. If your variable isn't free in the expression you want to set a breakpoint on, then you can add a dummy reference to it.
Adding dummy references means modifying the source, at which point
trace variants do nearly as well. The remaining value of the debugger is
that I can decide at debug time whether or not I want to see the traced
value. Without modifying the source code, there may be no way to step
forward to any expression that has the interesting variables free.
In small examples, it is usually easy to work around the debugger limitations,
so I'll have to remember to make a note next time I encounter the issues
in real life debugging. However, even in small examples, one can see hints
of the issues. Consider this code and session:
f x y z | x

Claus Reinke wrote:
f x y z | x
---------------------
$ /cygdrive/d/fptools/ghc/ghc/stage2-inplace/ghc.exe --interactive Debug.hs -ignore-dot-ghci
GHCi, version 6.11.20081122: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer ... linking ... done.
Loading package base ... linking ... done.
Loading package ffi-1.0 ... linking ... done.
[1 of 1] Compiling Main             ( Debug.hs, interpreted )
Ok, modules loaded: Main.
*Main> :break f
Breakpoint 0 activated at Debug.hs:(1,0)-(2,24)
*Main> f 3 2 1
Stopped at Debug.hs:(1,0)-(2,24)
_result :: a = _
[Debug.hs:(1,0)-(2,24)] *Main> :list
vv
1  f x y z | x
:step
Stopped at Debug.hs:1:10-12
_result :: a = _
[Debug.hs:1:10-12] *Main> :list
1  f x y z | x
Looks like a bug to me. At this location x and y should be observable and caught in the trace history.
It actually looks very similar to bug I reported here:
http://hackage.haskell.org/trac/ghc/ticket/2740
Notice that if you write your function like this (as I mostly do):
f x y z =
if x

Hello

I think, apart from some other notes, the concern here, as started by Peter when he joined the thread, can be concisely summarised like this: it would be good if the set of bound variables were equal to the set of variables syntactically in scope.

Apparently, Simon has already explained that that would bear a lot of overhead, in the thread "could ghci debugger search for free variables better?" at the end of October / beginning of November. The reason for that overhead, as I recall the nature of execution in GHC, should be that closures only have pointers to their free variables. Indeed, it's only those that are needed for _computing_ the closure. And when we enter a closure we 'forget' all the outer scope, which is nevertheless syntactically in scope.

I understand the reasons, but I think it's a very serious drawback. I'd love to have a Haskell debugger which is no worse than traditional imperative debuggers, but this seems like a more or less fundamental obstacle.

Two workarounds have been suggested:

1. Add references to the identifier of interest in the code, so that it becomes a free variable in the expression where we will stop. This is hardly compatible with the 'no worse' part.

2. Go back in history to the expression where the identifier of interest is free and thus bound. For example, in the same October/November thread Peter suggests :tracelocal, which would only record the history of evaluation inside this function. Then, when walking back in history looking for the identifier of interest, one would not get lost in the many, many evaluations happening lazily outside of this function, and they would not overflow the history slots.

A refinement of :tracelocal could be :tracedirect (or something) that would save the history not anywhere within the given function but only within parents, so to say. For example,

f x y = let v = large_exp_v
            w = large_exp_w
        in (w x) + (v y)

If we inspect (v y) then (w x) + (v y) is the parent. In this parent w becomes bound.
We don't have to record the history for large_exp_v and large_exp_w, and we still get all variables which are syntactically in scope bound.

Also, I think about having a visual debugger (say, in Eclipse) that would add value to the ghci debugger. In particular, it could maintain the illusion that all vars in scope are really available, by going back and forth in history. It could do quite a number of other things as well.

Anyway, even with :tracelocal and :tracedirect the history size limit can still be quite a limitation (yes, tautology). What will happen if we don't limit its size? Will no memory ever be released?

Claus Reinke wrote:
- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)

Daniil Elovkov wrote:
A refinement of :tracelocal could be :tracedirect (or something) that would save the history not anywhere within the given function but only within parents, so to say. For example,
This looks like what I thought of as searching for values in the dynamic stack (explained in my response to Pepe Iborra in this thread). I just did not ask for it with a new ticket since:

* I think it is already requested by some other ticket
* if you compile with -Wall then :tracelocal should have the same information, and only rarely does a name collision happen, so an automatic tracelocal trace search should return what you are looking for too, and when needed it reruns more ... that is, if the function is short enough to fit in the tracelocal history queue

The ticket actually has two almost independent parts:

* Adding the tracelocal trace.
* Adding the automatic search for symbols in the trace. The trace used in the search could also be some other kind of trace (e.g. the dynamic stack). This would not be that useful, though, since the names at higher levels in the stack are typically different. So to make it good it would require matching formal arguments to expressions at a higher level and evaluating them - not as easy to do as the simple tracelocal search, which is just based on stupid string comparison.

Peter.

Peter Hercek wrote:
Daniil Elovkov wrote:
A refinement of :tracelocal could be :tracedirect (or something) that would save the history not anywhere within the given function but only within parents, so to say. For example,
This looks like what I thought of as searching for values in dynamic stack (explained in my response to Pepe Iborra in this thread).
Yes, at first when I saw your message I also thought "Hey, Peter has already suggested it!" :) But now I see that we're talking about slightly different things. Consider

fun x y = let f1 = ... (f2 x) ...  -- f1 calls f2
              f2 = exp_f2
          in ...

Now, if we're at exp_f2 then the 'dynamic stack' in your terminology includes (f2 x), since it's the caller, and all of f1 as well. On the other hand, the :tracedirect that I suggested would not include f1, as it's not a direct ancestor. And for the purpose of binding variables which are syntactically in scope, it is indeed not needed; :tracedirect would be sufficient. Also, I think that :tracedirect can be easily implemented based only on a simple syntactic inclusion relation.
I just did not ask for it with a new ticket since:

* I think it is already requested by some other ticket
* if you compile with -Wall then :tracelocal should have the same information, and only rarely does a name collision happen, so an automatic tracelocal trace search should return what you are looking for too, and when needed it reruns more ... that is, if the function is short enough to fit in the tracelocal history queue
The ticket actually has two almost independent parts:

* Adding the tracelocal trace.
* Adding the automatic search for symbols in the trace. The trace used in the search could also be some other kind of trace (e.g. the dynamic stack). This would not be that useful, though, since the names at higher levels in the stack are typically different. So to make it good it would require matching formal arguments to expressions at a higher level and evaluating them - not as easy to do as the simple tracelocal search, which is just based on stupid string comparison.

On Mon, Nov 24, 2008 at 5:28 PM, Daniil Elovkov
Peter Hercek wrote:
Daniil Elovkov wrote:
A refinement of :tracelocal could be :tracedirect (or something) that would save the history not anywhere within the given function but only within parents, so to say. For example,
This looks like what I thought of as searching for values in dynamic stack (explained in my response to Pepe Iborra in this thread).
Yes, first when I saw that your message I also thought "Hey, Peter has already suggested it!" :)
But now I see that we're talking about slightly different things. Consider
fun x y = let f1 = ... (f2 x) ...  -- f1 calls f2
              f2 = exp_f2
          in ...
Now, if we're at exp_f2 then 'dynamic stack' in your terminology includes (f2 x) since it's the caller and all f1 as well.
On the other hand, :tracedirect that I suggested would not include f1, as it's not a direct ancestor. And for the purpose of binding variables which are syntactically in scope, it is indeed not needed. :tracedirect would be sufficient.
Also, I think that :tracedirect can be easily implemented based only on simple syntactic inclusion relation.
I am not convinced this would not imply the same problems as simply storing all the variables in scope at a breakpoint. Consider this slightly more contorted example:

fun x y = let f1 = ... (f2 x) ...  -- f1 calls f2
              f2 x = x * 2
          in case x of
               1 -> f2 0
               _ -> f2 (f1 y)

g x = let z = (some complex computation)
      in z `div` x

main = print (g (fun 1 2))

This is a classical example of why laziness gets in the way of debugging. Now, when (f2 0) finally gets evaluated and throws a divByZero error, x and y are nowhere to be found. Since we do not have a real dynamic stack, it is difficult to say where their values should be stored. The only place I can think of is at the breakpoint itself, but then, as Simon said, this poses a serious efficiency problem.
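A runnable variant of this example may make the point easier to reproduce (the elided bodies are filled in with arbitrary stand-ins; only the shape matters): by the time the division is forced, fun's x and y are long gone.

```haskell
-- Stand-ins for the elided pieces of the example above.
fun :: Int -> Int -> Int
fun x y = let f1   = f2 x + y   -- f1 calls f2
              f2 n = n * 2
          in case x of
               1 -> f2 0        -- this thunk escapes 'fun'...
               _ -> f2 f1

g :: Int -> Int
g x = let z = sum [1 .. 10]     -- "some complex computation"
      in z `div` x              -- ...and the error surfaces here

-- print (g (fun 1 2)) throws "divide by zero", with no trace of x and y.
```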

fun x y = let f1 = ... (f2 x) ...  -- f1 calls f2
              f2 x = x * 2
          in case x of
               1 -> f2 0
               _ -> f2 (f1 y)
g x = let z = (some complex computation) in z `div` x
main = print (g (fun 1 2))
This is a classical example of why laziness gets in the way of debugging. Now, when (f2 0) gets finally evaluated and throws a divByZero error, x and y are nowhere to be found. Since we do not have a real dynamic stack, it is difficult to say where their values should be stored. The only place I can think of is at the breakpoint itself, but then as Simon said this poses a serious efficiency problem.
Isn't that a case of premature optimization? I will happily complain about performance issues later, after the debugger turns out to be merely too slow!-) Currently, the issue is one of it not working as well as it could, which seems rather more important to me (think of it as infinitely too slow for my needs:-).

Once it comes to performance issues (btw, is there a debugger home page on the wiki, where issues/FAQs like "why can't I have scope contexts" are documented?), an obvious compromise would be to state explicitly where to preserve scope information - something like:

:break fun{x,y}/{f1,f2}/f2{x}

would set a breakpoint on f2, associating with it information about the static scope context including only the named names (items between // define the path towards the name we want to break at, level by level; additional names can be added in {} at each level), without affecting other parts of the program.

Claus

Claus Reinke wrote:
fun x y = let f1 = ... (f2 x) ...  -- f1 calls f2
              f2 x = x * 2
          in case x of
               1 -> f2 0
               _ -> f2 (f1 y)
g x = let z = (some complex computation) in z `div` x
main = print (g (fun 1 2))
This is a classical example of why laziness gets in the way of debugging. Now, when (f2 0) gets finally evaluated and throws a divByZero error, x and y are nowhere to be found. Since we do not have a real dynamic stack, it is difficult to say where their values should be stored. The only place I can think of is at the breakpoint itself, but then as Simon said this poses a serious efficiency problem.
Isn't that a case of premature optimization? I will happily complain about performance issues later, after the debugger turns out to be merely too slow!-)
No, it's a real problem. If we retained all the variables in scope at every breakpoint, GHCi would grow a whole bunch of space leaks. It's pretty important that adding debugging shouldn't change the space behaviour of the program. Of course, constant factors are fine, but we're talking asymptotic changes here.

Now perhaps it would be possible to arrange that the extra variables are only retained if they are needed by a future breakpoint, but that's tricky (conditional stubbing of variables), and :trace effectively enables all breakpoints, so you get the space leaks back.

A similar argument applies to keeping the dynamic stack. The problem with the dynamic stack is that it doesn't look much like you expect, due to tail-calls. However, I think it would be good to let the user browse the dynamic stack (somehow - I haven't thought through how hard this would be). But what I'd really like is to give the user access to the *lexical* stack, by re-using the functionality that we already have for tracking the lexical stack in the profiler. See http://hackage.haskell.org/trac/ghc/wiki/ExplicitCallStack
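To see why retaining scopes is an asymptotic change rather than a constant factor, consider a sketch (invented for illustration): 'big' is dead as soon as 'n' is computed, so it is normally collected early; a breakpoint inside the loop that captured the full lexical scope would keep it alive for all k iterations, turning O(1) residency into O(k).

```haskell
-- Illustration only: 'big' is used once and then becomes garbage.
-- A breakpoint inside 'go' that retained everything in scope would
-- hold on to 'big' for the whole loop.
sumOfLengths :: Int -> Int
sumOfLengths k =
  let big = [1 .. k]     -- large structure, needed only once
      n   = length big
      go acc 0 = acc
      go acc i = go (acc + n) (i - 1)
  in go 0 k
```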
(btw, is there a debugger home page on the wiki, where issues/FAQs like "why can't I have scope contexts" are documented?)
No, please by all means start one. Cheers, Simon

Simon Marlow wrote:
A similar argument applies to keeping the dynamic stack. The problem with the dynamic stack is that it doesn't look much like you expect, due to tail-calls.
Do you think people expect tail-calls to add a stack frame to the dynamic stack, or is there something more complicated? I would expect a tail-call to overwrite the last stack frame on the dynamic stack - just like imperative loops, which is what they correspond to. The dynamic stack should correspond closely to the stack which overflows when we get the "stack overflow exception". That is what I would expect. If somebody wants the history of tail-calls, he can check the trace information, which should not be a problem, especially if some filtering (like tracelocal) is possible.

Peter.
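As a concrete illustration of the distinction (example mine, not from the thread): the accumulator version below only makes tail calls, so a frame-overwriting dynamic stack would stay flat, while the naive version keeps a pending '1 +' per element - it is the latter shape that eventually produces the "stack overflow exception" on long lists.

```haskell
-- Tail-recursive: the recursive call is the last thing 'go' does,
-- so each call can reuse the current stack frame.
lenTail :: [a] -> Int
lenTail = go 0
  where
    go n []     = n
    go n (_:xs) = let n' = n + 1 in n' `seq` go n' xs

-- Not tail-recursive: '1 +' is still pending after the call returns,
-- so every element costs a stack frame.
lenPlain :: [a] -> Int
lenPlain []     = 0
lenPlain (_:xs) = 1 + lenPlain xs
```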

Peter Hercek wrote:
Simon Marlow wrote:
A similar argument applies to keeping the dynamic stack. The problem with the dynamic stack is that it doesn't look much like you expect, due to tail-calls.
Do you think people expect the tail-calls to add a stack frame to the dynamic stack or is there something more complicated?
Right, I think they expect exactly that, and it'll confuse people that some stack frames are "missing". Often it's not clear which calls are tail-calls and which are not. Mind you, I think the fact that it's a dynamic call stack rather than a lexical call stack is likely to confuse the same set of users even more.
I would expect a tail-call to overwrite the last stack frame on the dynamic stack - just like imperative loops, which is what they correspond to. Dynamic stack should correspond closely to the stack which overflows when we get the "stack overflow exception". That is what I would expect.
Fair enough - that at least is a data point suggesting that providing the dynamic call stack with no special provision for tail-calls would be useful to some people. Cheers, Simon

Simon Marlow wrote:
Peter Hercek wrote:
Simon Marlow wrote:
A similar argument applies to keeping the dynamic stack. The problem with the dynamic stack is that it doesn't look much like you expect, due to tail-calls.
Do you think people expect the tail-calls to add a stack frame to the dynamic stack or is there something more complicated?
Right, I think they expect exactly that, and it'll confuse people that some stack frames are "missing". Often it's not clear which calls are tail-calls and which are not. Mind you, I think the fact that it's a dynamic call stack rather than a lexical call stack is likely to confuse the same set of users even more.
That is a good point - I might not see at first look whether it is a tail call or not. Which reminds me: if it is implemented the way I expected, then stack frames which are tail calls should be marked that way, so that it is possible to see at first look whether a given stack frame is a tail-call or not.

If it will be a lexical call stack, I'm curious how the pruning will be done so that we do not miss stack frames when we return from some code which corresponds to an imperative loop. Maybe a top limit on the number of stored lexical frames in one imperative (call-recursive) frame? From my point of view this could work well enough if it can print something like "and here some lexical frames were pruned and we are going one dynamic frame higher".

My reason for wanting to see it with tail-calls collapsed into one stack frame is that I need the debugger to figure out why something does not work, so I should see something close to the execution model where the bugs actually present themselves. I believe that collapsing tail-calls is not such a big deal if there is a way to filter the trace history (like the tracelocal idea or something similar), or maybe having a really long trace history.

Hmmm, maybe it would even be possible to recover the last part of the lexical stack from the dynamic stack and the trace history. I discussed a bit with Pepe Iborra how to build the dynamic (lazy) stack from a trace on the fly. Something like: whenever we reduce an expression, we would prune the corresponding node in the trace. Such a pruned trace should correspond to the dynamic stack. (If I do not miss something, which I probably do.) And moreover, if we record the expressions (their source code ranges) we just pruned, and the results they reduced to, then we can show them with some command like :showexpressionresults. This would provide access to unnamed values which could have been passed to lower level calls as inputs.
And that is part of the problem we discussed in this thread. Anyway, thank you, Claus Reinke and Pepe Iborra, for all the great help with ghci ... I'm still learning how to script the ghci debugger better. I hope I can make it better than "printf debugging" with the scripts :-) Peter.

No, it's a real problem. If we retained all the variables in scope at every breakpoint, GHCi would grow a whole bunch of space leaks. It's pretty important that adding debugging shouldn't change the space behaviour of the program. Of course, constant factors are fine, but we're talking asymptotic changes here.
Now perhaps it would be possible to arrange that the extra variables are only retained if they are needed by a future breakpoint, but that's tricky (conditional stubbing of variables), and :trace effectively enables all breakpoints so you get the space leaks back.
Then how about my suggestion for selectively adding lexical scope to
breakpoints? I'd like to be able to say
:break

Claus Reinke wrote:
No, it's a real problem. If we retained all the variables in scope at every breakpoint, GHCi would grow a whole bunch of space leaks. It's pretty important that adding debugging shouldn't change the space behaviour of the program. Of course, constant factors are fine, but we're talking asymptotic changes here.
Now perhaps it would be possible to arrange that the extra variables are only retained if they are needed by a future breakpoint, but that's tricky (conditional stubbing of variables), and :trace effectively enables all breakpoints so you get the space leaks back.
Then how about my suggestion for selectively adding lexical scope to breakpoints? I'd like to be able to say
:break
{names} and have GHCi make the necessary changes to keep {names} available for inspection when it hits that breakpoint.
The only easy way to do that is to recompile the module that contains the breakpoint. To do it without recompiling is about as hard as doing what I suggested above, because it involves a similar mechanism (being able to selectively retain the values of free variables). Cheers, Simon

Simon Marlow wrote:
Claus Reinke wrote:
Then how about my suggestion for selectively adding lexical scope to breakpoints? I'd like to be able to say
:break
{names} and have GHCi make the necessary changes to keep {names} available for inspection when it hits that breakpoint.
The only easy way to do that is to recompile the module that contains the breakpoint. To do it without recompiling is about as hard as doing what I suggested above, because it involves a similar mechanism (being able to selectively retain the values of free variables).
sure, but GHCi recompiling the module maybe takes less than a second, whereas going and modifying the source-code in an editor takes orders of magnitude more time! Is there something fundamentally wrong with recompiling an interpreted module? -Isaac

Isaac Dupree wrote:
Simon Marlow wrote:
Claus Reinke wrote:
Then how about my suggestion for selectively adding lexical scope to breakpoints? I'd like to be able to say :break {names} and have GHCi make the necessary changes to keep {names} available for inspection when it hits that breakpoint.
The only easy way to do that is to recompile the module that contains the breakpoint. To do it without recompiling is about as hard as doing what I suggested above, because it involves a similar mechanism (being able to selectively retain the values of free variables).
sure, but GHCi recompiling the module maybe takes less than a second, whereas going and modifying the source-code in an editor takes orders of magnitude more time! Is there something fundamentally wrong with recompiling an interpreted module?
One problem with recompiling is that you lose any execution context - normally you can set new breakpoints during execution, but recompiling would force you to start the debugging session again. Also any bindings made on the command-line will be lost. However, I agree, recompiling is still better than nothing. Cheers, Simon

Claus Reinke wrote:
Consider this code and session:
f x y z | x
...
Things to note:
- when reaching the breakpoint "in" 'f', one isn't actually in 'f' yet - nothing about 'f' can be inspected
- at no point in the session was 'x' inspectable, even though it is likely to contain information needed to understand 'f', especially when we are deep in a recursion of a function that can be called from many places; this information doesn't happen to be needed in the current branch, but debugging the current expression always happens in a context, and accessing information about this context is what the GHCi debugger doesn't seem to support well
In this particular example, the second item is most likely a bug (the free variables of the guard were never offered for inspection).
Indeed it was a bug, the same as #2740, and I've just fixed it. Thanks for boiling it down to a nice small example. Cheers, Simon
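[Editor's note: a self-contained sketch of the shape of example under discussion, with made-up names, since Claus's original code is elided above. The variable x is free only in the guard, which is exactly the situation where its binding used to be missing at the breakpoint (#2740).]

```haskell
-- Hypothetical reconstruction: `x` appears only in the guard,
-- while `y` and `z` appear only in the branches. Before the fix,
-- breaking on a branch would not offer `x` for inspection.
f :: Int -> Int -> Int -> String
f x y z
  | x > 0     = "y = " ++ show y
  | otherwise = "z = " ++ show z
```

Loading this into GHCi and setting :break on either branch is enough to check whether the guard's free variable shows up in :show bindings.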

Simon Marlow wrote:
Claus Reinke wrote:
Perhaps someone could help me to understand how the debugger is supposed to be used, as I tend to have this problem, too:
- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)
I don't understand what you mean here - surely in order to "reconstruct what the current expression is" you only need to know the values of the free variables of that expression? Also I don't understand what you mean by the "next enclosing scope". Could you give an example?
Maybe what Claus means is that he would like to see the dynamic stack and be able to traverse it and at each location in the dynamic stack he could investigate the free variables in the expression (corresponding to the dynamic stack slot). I actually considered this as a feature request but I decided that I would like to have this implemented sooner: http://hackage.haskell.org/trac/ghc/ticket/2737
Currently, I only use the debugger rarely, and almost always have to switch to trace/etc to pin down what it is hinting at. What is the intended usage pattern that makes the debugger more effective/ convenient than trace/etc, and usable without resorting to the latter?
Set a breakpoint on an expression that has the variable you're interested in free, and display its value. If your variable isn't free in the expression you want to set a breakpoint on, then you can add a dummy reference to it.
Actually I use it a bit differently, mostly because I know (or I'm guessing) where the bug is located, and set a breakpoint at a place which is hit just after the wrong decision happens. The "just after the wrong decision" requirement is there so that I do not need too complicated expressions in the conditional breakpoint. Then I count how many times the breakpoint is hit, or find some other expression which is true just before the hit I'm interested in (the hit with the bug). I modify the script of the breakpoint so that it stops just before the hit I'm interested in, and restart. After the restart the debugger stops at the modified breakpoint and I continue with either :steplocal or :trace. This is so that I have the values of previous expressions in the trace history. Then (if I used :trace) I check the values in the trace history to find out why I got to the wrong place.
The procedure is quite complicated, but I do not know of a quicker way to position the debugger at the right place with the right variable values caught in the trace history. If I did not know the approximate location of the bug, then hpc could help. For simpler things, "printf debugging" is just enough. Peter.
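[Editor's note: the "dummy reference" trick Simon mentions can be sketched like this, with made-up names. Threading the otherwise-unused variable through seq makes it free in the breakpoint expression without changing the result.]

```haskell
-- `prev` is not otherwise free in the first branch, so we force a
-- reference with `seq`; a breakpoint set on that line then retains
-- `prev` and it shows up in :show bindings.
step :: [Int] -> Int -> Int
step prev n
  | n > 0     = prev `seq` n * 2
  | otherwise = sum prev
```

Note that as Simon says earlier in the thread, unconditionally retaining every in-scope variable would change the program's space behaviour, which is why the reference has to be added by hand (or, per the suggestion above, by recompiling with the extra names).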

On Mon, Nov 24, 2008 at 2:03 PM, Peter Hercek wrote:
Simon Marlow wrote:
Claus Reinke wrote:
Perhaps someone could help me to understand how the debugger is supposed to be used, as I tend to have this problem, too:
- when I'm at a break point, I'd really like to see the current scope or, if that is too expensive, the next enclosing scope, in full (not only would that tell me what instantiation of my code I'm in, it would also seem necessary if I want to reconstruct what the current expression is)
I don't understand what you mean here - surely in order to "reconstruct what the current expression is" you only need to know the values of the free variables of that expression? Also I don't understand what you mean by the "next enclosing scope". Could you give an example?
Maybe what Claus means is that he would like to see the dynamic stack and be able to traverse it and at each location in the dynamic stack he could investigate the free variables in the expression (corresponding to the dynamic stack slot). I actually considered this as a feature request but I decided that I would like to have this implemented sooner: http://hackage.haskell.org/trac/ghc/ticket/2737
As long as you start with :trace, you can see the dynamic stack with :history, and traverse it with :back. At any point in the stack the free variables are available, or so I believe. What is the missing feature you would like to request in this case?

Pepe Iborra wrote:
On Mon, Nov 24, 2008 at 2:03 PM, Peter Hercek wrote:
Maybe what Claus means is that he would like to see the dynamic stack and be able to traverse it and at each location in the dynamic stack he could investigate the free variables in the expression (corresponding to the dynamic stack slot). I actually considered this as a feature request but I decided that I would like to have this implemented sooner: http://hackage.haskell.org/trac/ghc/ticket/2737
As long as you start with :trace, you can see the dynamic stack with :history, and traverse it with :back. At any point in the stack the free variables are available, or so I believe. What is the missing feature you would like to request in this case?
Hmmm, I believe that the dynamic stack is not the same as the :trace history. The point is that the trace history shows all the evaluated expressions as they occur in time, but the dynamic stack shows only the expressions whose evaluation has not finished yet. Example:

  1  let fn a =
  2    let f x = x + 1 in
  3    case f a of
  4      1 -> "one"
  5      _ -> ""

When the selected expression is "one" then the trace will contain something like this (just doing it from the top of my head):

  line 1-5  fn a = ...
  line 3-5  case f a of ...
  line 3    f a
  line 2    let f x = x + 1 in
  line 2    x + 1
  (possibly something from outside which forces x and consequently a)
  line 4    "one"

But the dynamic stack would contain:

  line 1-5  fn a = ...
  line 3-5  case f a of ...
  line 4    "one"

The difference is that the dynamic stack contains only the items whose computation is not finished yet. The stuff which was already reduced to a final value is not there any more. This way you could trace the dynamic stack back to see what arguments your function was called with, since the arguments must be free variables in the expression which called your function of interest. Of course the same information is in the trace too ... that is, if your trace history is long enough and you are willing to search it manually. That is the reason for ticket 2737. I do not want to search it manually! Maybe trace is the dynamic stack and I did not realize what trace contains till now. That would be kind of a shame :-D Peter.
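[Editor's note: Peter's sketch, filled out as a compilable definition for anyone who wants to reproduce the :trace / :history behaviour themselves; names as in his example.]

```haskell
-- Runnable version of the fn/f example above: load it in GHCi,
-- :break on the "one" branch, call `fn 0` under :trace, and compare
-- what :history shows against the dynamic stack Peter describes.
fn :: Int -> String
fn a =
  let f x = x + 1 in
  case f a of
    1 -> "one"
    _ -> ""
```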
participants (6)
- Claus Reinke
- Daniil Elovkov
- Isaac Dupree
- Pepe Iborra
- Peter Hercek
- Simon Marlow