Hello, GHC hackers!
 
Reposting this message from haskell-café, because this mailing list seems closer to the subject.
 
For the last two days I have been debugging what looks like a space leak. My program is a rather long-running network service. When I first used the heap profiler, I was confused by its output: it showed my program running for only about 0.3-0.4 seconds, even though I had launched it and let it idle for about a minute.
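The sample timestamps I am talking about are the BEGIN_SAMPLE/END_SAMPLE lines in the generated .hp file; schematically it looked something like this (cost-centre bands elided, numbers illustrative):

    JOB "my-service +RTS -hc"
    SAMPLE_UNIT "seconds"
    VALUE_UNIT "bytes"
    BEGIN_SAMPLE 0.00
    END_SAMPLE 0.00
    BEGIN_SAMPLE 0.35
    END_SAMPLE 0.35

So the whole minute of wall-clock time was compressed into those 0.35 "seconds".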
 
That was confusing enough that I dug into the GHC sources to figure out the reason, and found that the sample times are taken from the mut_user_time function, which returns the time the process has spent in user-mode code (outside the kernel) minus the time spent in garbage collection.
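Here is a minimal sketch that shows the difference from inside a program (assuming GHC 8.2 or later, run with +RTS -T to enable RTS statistics; as far as I can tell, mutator_cpu_ns is essentially the quantity mut_user_time computes):

    import GHC.Stats (getRTSStats, getRTSStatsEnabled, RTSStats(..))
    import Control.Concurrent (threadDelay)

    main :: IO ()
    main = do
      enabled <- getRTSStatsEnabled
      if not enabled
        then putStrLn "Run me with +RTS -T"
        else do
          threadDelay 5000000   -- idle for five seconds, like my service
          s <- getRTSStats
          let toSec ns = fromIntegral ns / 1e9 :: Double
          putStrLn $ "wall-clock elapsed: " ++ show (toSec (elapsed_ns s))
          putStrLn $ "mutator CPU time:   " ++ show (toSec (mutator_cpu_ns s))
          putStrLn $ "GC CPU time:        " ++ show (toSec (gc_cpu_ns s))

On an idling process the mutator CPU time stays at a tiny fraction of the wall-clock time, which matches the compressed x axis I see.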
 
This introduces a great deal of unpredictable non-linearity into the x axis of the heap profile graph, which I personally find very counterintuitive: a minute during which the service mostly sleeps advances the axis by only a fraction of a second, while a CPU-bound minute advances it by nearly sixty.
 
So my question is: does anybody know why it is done this way? Wouldn't it be better if the x axis simply showed the time elapsed since the process started?
 
Kind regards,
  Anton.