
#13360: Add a flag to enable inferring HasCallStack constraints
-------------------------------------+-------------------------------------
        Reporter:  gridaphobe        |                Owner:  (none)
            Type:  feature request   |               Status:  new
        Priority:  normal            |            Milestone:  8.4.1
       Component:  Compiler          |              Version:  8.0.1
      Resolution:                    |             Keywords:
Operating System:  Unknown/Multiple  |         Architecture:
                                     |   Unknown/Multiple
 Type of failure:  None/Unknown      |            Test Case:
      Blocked By:                    |             Blocking:
 Related Tickets:                    |  Differential Rev(s):
       Wiki Page:                    |
-------------------------------------+-------------------------------------

Comment (by gridaphobe):

 Thanks for digging into this! Now that we can see there is an impact on
 real-world code, I'd suggest stepping back to the simpler examples, like
 the `loop` benchmark I linked earlier, to determine what's causing the
 overhead.

 HasCallStack **should** be equivalent to an extra argument that builds up
 a list of SrcLocs (technically `(String, SrcLoc)` pairs). So my first
 experiment would be to write another benchmark that implements
 HasCallStack in user code. These two versions should perform the same --
 do they? If not, maybe GHC is doing something silly when it generates the
 HasCallStack code.

 Next I would try shrinking the contents of the user-level CallStack:
 instead of `(String, SrcLoc)`, make it a list of `()`. This would measure
 the cost of pushing a new item onto the stack, and whether that cost is
 proportional to the size of the item (I wouldn't think so, since the item
 should be a thunk, but it's worth checking).

--
Ticket URL: <http://ghc.haskell.org/trac/ghc/ticket/13360#comment:23>
GHC <http://www.haskell.org/ghc/>
The Glasgow Haskell Compiler
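
 To make the suggested experiment concrete, here is one possible sketch of
 a user-level analogue of HasCallStack: an explicitly threaded list of
 `(String, SrcLoc)` frames, as described above. The names (`MyCallStack`,
 `pushFrame`, `loop`) and the placeholder `SrcLoc` values are hypothetical,
 not taken from the ticket or the original `loop` benchmark -- this is
 only an illustration of the shape such a benchmark could take.

 {{{#!hs
 -- A user-level imitation of HasCallStack: the stack is an ordinary
 -- argument holding (function name, source location) pairs, so its cost
 -- can be compared against GHC's built-in HasCallStack machinery.
 import GHC.Stack (SrcLoc (..))

 -- Mirror of the representation described in the comment.
 type MyCallStack = [(String, SrcLoc)]

 -- Analogue of what GHC inserts at each call site of a
 -- HasCallStack-constrained function: cons a new frame.
 pushFrame :: String -> SrcLoc -> MyCallStack -> MyCallStack
 pushFrame name loc stack = (name, loc) : stack

 -- A loop that threads the stack explicitly; a HasCallStack version of
 -- the same loop should, in principle, perform identically.
 loop :: MyCallStack -> Int -> Int
 loop _     0 = 0
 loop stack n = 1 + loop (pushFrame "loop" hereLoc stack) (n - 1)
   where
     -- Placeholder location; GHC would fill in real call-site info.
     hereLoc = SrcLoc "main" "Main" "Main.hs" 1 1 1 1

 -- For the second experiment, swap the frame payload for (), i.e.
 -- type MyCallStack = [()], to isolate the cost of the cons itself.

 main :: IO ()
 main = print (loop [] 1000)
 }}}

 Comparing this against the HasCallStack version of the same loop under
 criterion (or plain timing) should show whether the overhead comes from
 the stack-passing itself or from the code GHC generates for the
 constraint.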