Re: [GHC] #2140: cpuTimePrecision is wrong (was: cpuTimePrecision is wrong for me on Windows (XP))

#2140: cpuTimePrecision is wrong
-------------------------------------+-------------------------------------
        Reporter:  guest             |                Owner:
            Type:  bug               |               Status:  new
        Priority:  lowest            |            Milestone:
       Component:  Core Libraries    |              Version:  6.8.2
      Resolution:                    |             Keywords:
Operating System:  Unknown/Multiple  |         Architecture:
                                     |  Unknown/Multiple
 Type of failure:  None/Unknown      |            Test Case:
      Blocked By:                    |             Blocking:
 Related Tickets:                    |  Differential Rev(s):
       Wiki Page:                    |
-------------------------------------+-------------------------------------
Changes (by thomie):

 * cc: ekmett (added)
 * failure:  => None/Unknown
 * architecture:  x86_64 (amd64) => Unknown/Multiple
 * milestone:  8.0.1 =>
 * os:  Windows => Unknown/Multiple

Comment:

 The module `CPUTime` is part of the
 [https://www.haskell.org/onlinereport/cputime.html Haskell 98 report]
 (but not the Haskell 2010 report):

   Computation `getCPUTime` returns the number of picoseconds of CPU time
   used by the current program. The precision of this result is given by
   `cpuTimePrecision`. This is the smallest measurable difference in CPU
   time that the implementation can record, and is given as an integral
   number of picoseconds.

 == getCPUTime ==
getCPUTime always returns a multiple of 15625000000
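 That value follows from a 64 Hz scheduler tick. A quick sanity check in
 Haskell (the 64 Hz figure is an assumption taken from the stackoverflow
 answer discussed below, not something the program measures):

```haskell
-- Sanity check: a 64 Hz clock tick means 10^12 / 64 picoseconds per tick.
picosPerSecond :: Integer
picosPerSecond = 10 ^ (12 :: Int)

-- Assumed Windows timer interrupt frequency (see the explanation below).
windowsTickHz :: Integer
windowsTickHz = 64

main :: IO ()
main = print (picosPerSecond `div` windowsTickHz)  -- prints 15625000000
```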
 http://stackoverflow.com/a/29164665 explains this as follows: the clock
 is updated 64 times per second, at the clock tick interrupt. In other
 words, there are 1 / 64 = 0.015625 seconds between ticks, and 0.015625
 seconds = 15625000000 picoseconds.

 == cpuTimePrecision ==

 The current implementation of `cpuTimePrecision` looks at the value of
 `clockTicks = clk_tck()` (see #7519), converted to picoseconds.

 On my Linux system, `clk_tck` calls `sysconf(_SC_CLK_TCK)` and always
 returns `100` (see `libraries/base/cbits/sysconf.c`, and note that
 `CLK_TCK` is only defined when `#undef __USE_XOPEN2K`, see
 `/usr/include/time.h` and `/usr/include/x86_64-linux-gnu/bits/time.h`).
 On Windows, `clk_tck` seems to always return `CLK_TCK = 1000` (see
 `./opt/ghc-7.10.3/mingw/x86_64-w64-mingw32/include/time.h`).

 These values don't seem related in any way to the precision of
 `getCPUTime`. From `man sysconf`:

   clock ticks - _SC_CLK_TCK
          The number of clock ticks per second. The corresponding variable
          is obsolete. It was of course called CLK_TCK. (Note: the macro
          CLOCKS_PER_SEC does not give information: it must equal
          1000000.)

 According to this
 [http://stackoverflow.com/questions/19919881/sysconf-sc-clk-tck-what-does-it-return?rq=1#comment29641173_19919970
 stackoverflow comment]: it's the number of times the timer interrupts
 the CPU for scheduling and other tasks; 100 Hz is a common value, and a
 higher frequency means higher timer resolution and more overhead.

 == Solution? ==

 Does anyone happen to know where we can get real values for this for any
 platforms? I don't know, but on Windows, you can supposedly use
 [https://msdn.microsoft.com/en-us/library/windows/desktop/ms724394%28v=vs.85%29.aspx
 GetSystemTimeAdjustment]. Some more discussion here:
 http://lists.ntp.org/pipermail/questions/2011-September/030583.html.

 But maybe we should just give up, since `CPUTime` is not part of the
 Haskell report anymore. Keep the function `cpuTimePrecision` for
 backward compatibility, but change the docstring to say what it really
 does: return the number of picoseconds between clock ticks.
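 The mismatch can be made concrete with a few lines of Haskell. This is a
 hypothetical sketch of what `cpuTimePrecision` effectively computes from
 `clk_tck` (the function name is mine, not GHC's actual source), using the
 `clk_tck` values quoted above; neither result matches the 15625000000 ps
 granularity actually observed on Windows:

```haskell
-- Hypothetical model of the current cpuTimePrecision: picoseconds per
-- clk_tck tick, i.e. 10^12 `div` clk_tck.
precisionFromClkTck :: Integer -> Integer
precisionFromClkTck clkTck = (10 ^ (12 :: Int)) `div` clkTck

main :: IO ()
main = do
  print (precisionFromClkTck 100)   -- Linux _SC_CLK_TCK = 100:  10000000000
  print (precisionFromClkTck 1000)  -- Windows CLK_TCK = 1000:   1000000000
```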
 This issue was also discussed in this thread:
 https://mail.haskell.org/pipermail/libraries/2008-February/009305.html

--
Ticket URL: http://ghc.haskell.org/trac/ghc/ticket/2140#comment:17
GHC http://www.haskell.org/ghc/
The Glasgow Haskell Compiler