System.CPUTime and picoseconds

Hi. Just out of curiosity: why does the Haskell 98 System.CPUTime library module use picoseconds instead of, say, nanoseconds? At least on POSIX systems, picosecond precision is *never* specified. Thanks, Manlio Perillo
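For reference, the interface in question is tiny. A minimal program, using only the two things System.CPUTime actually exports (getCPUTime returns picoseconds as an Integer, and cpuTimePrecision reports the smallest step the implementation can distinguish, also in picoseconds):

    import System.CPUTime (cpuTimePrecision, getCPUTime)

    main :: IO ()
    main = do
      t <- getCPUTime
      putStrLn ("CPU time used so far: " ++ show t ++ " ps")
      putStrLn ("Reported precision:   " ++ show cpuTimePrecision ++ " ps")

On most systems cpuTimePrecision will typically be far coarser than 1 ps, which is what prompts the question.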

Manlio Perillo wrote:
Hi.
Just out of curiosity: why does the Haskell 98 System.CPUTime library module use picoseconds instead of, say, nanoseconds?
At least on POSIX systems, picosecond precision is *never* specified.
I have no idea. But at a guess, I would say that 1 ns is not such a small time interval anymore. CPU speeds are about 3 GHz, so roughly 0.33 ns per CPU clock cycle. Even the RAM clock in a laptop (e.g. Apple's 17" MacBook Pro) is 1066 MHz, so the interval there is just under 1 ns. Whoever picked picoseconds made it possible to talk about a single clock interval for hardware like this. -- Chris

It was suggested that it should be ns, and I complained that ns would
be obsolete in a while.
What I really wanted was a switch to Double (and just using seconds),
instead we got ps.
At least ps won't get obsolete in a while.
-- Lennart

Wouldn't a Double become less and less precise the longer the process is running?
So Integer sounds like the only datatype that could work here...
And why not do it like in Windows: make two functions, one that returns the number of CPU ticks, and another that returns the frequency (number of ticks per second)? This gives you an API that works for whatever clock speed...
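A rough sketch of that shape of API in Haskell, with hypothetical names (getTickCount and getTickFrequency are not part of any existing library; getCPUTime merely stands in as a placeholder tick source):

    import System.CPUTime (getCPUTime)

    -- Hypothetical tick/frequency pair, in the spirit of Windows'
    -- QueryPerformanceCounter / QueryPerformanceFrequency.
    getTickCount :: IO Integer          -- some opaque, increasing tick count
    getTickCount = getCPUTime           -- placeholder: reuse CPU time as "ticks"

    getTickFrequency :: IO Integer      -- ticks per second for this clock
    getTickFrequency = return (10 ^ 12) -- placeholder: picosecond "ticks"

    -- Callers divide by the reported frequency instead of relying on a
    -- unit baked into the type, so the API is unit independent.
    elapsedSeconds :: Integer -> Integer -> Integer -> Double
    elapsedSeconds freq t0 t1 = fromInteger (t1 - t0) / fromInteger freq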

On Sun, 2009-01-11 at 16:03 +0100, Peter Verswyvelen wrote:
Wouldn't a Double become less and less precise the longer the process is running?
So Integer sounds like the only datatype that could work here...
And why not do it like in Windows: make two functions, one that returns the number of CPU ticks, and another that returns the frequency (number of ticks per second)? This gives you an API that works for whatever clock speed...
Let's assume you were joking, but just in case you were not... Clock speed is variable on all modern CPUs. In low-power states some CPUs can turn off the clock completely. In some CPUs the clock speed is variable per core, and some can turn off one core without turning off all cores. Relating clock ticks to time is a minefield.
Duncan

Let's assume you were joking, but just in case you were not...
Why this hostile tone? I didn't mean to be offensive. Unlike most people in this group, I don't have the brain of an Einstein :) So sorry for my stupidity, I just tried to give some feedback...
Clock speed is variable on all modern CPUs. In low-power states some CPUs can turn off the clock completely. In some CPUs the clock speed is variable per core, and some can turn off one core without turning off all cores. Relating clock ticks to time is a minefield.
Yes, I know that. I just meant having something that returns a number of ticks (not real CPU ticks, mea culpa) together with the tick rate itself. That would make it unit-independent, no? Anyway, this is purely theoretical, as picoseconds would indeed do for all practical purposes, I guess.

On Sun, 2009-01-11 at 21:41 +0100, Peter Verswyvelen wrote:
Let's assume you were joking, but just in case you were not...
Why this hostile tone? I didn't mean to be offensive.
I'm sorry, I did not intend it to sound hostile. I misinterpreted the "..." at the end of your sentence. Some people use this to indicate they're making a joke suggestion.
Clock speed is variable on all modern CPUs. In low-power states some CPUs can turn off the clock completely. In some CPUs the clock speed is variable per core, and some can turn off one core without turning off all cores. Relating clock ticks to time is a minefield.
Yes, I know that. I just meant having something that returns a number of ticks (not real CPU ticks, mea culpa) together with the tick rate itself. That would make it unit-independent, no?
I guess so. Duncan

A double has 53 bits in the mantissa which means that for a running
time of about 24 hours you'd still have picoseconds. I doubt anyone
cares about picoseconds when the running time is a day.
That's why I think a Double is a good choice, it adapts to the time
scale involved.
-- Lennart

A Double and an Int64 are both 8 bytes; counting picoseconds, a Double lasts 2.5 hours and an Int64 lasts 106 days. Going to a 12-byte integer lets you count to 3.9 billion years (signed). Going to a 16-byte integer is over 10^38 years.
Lennart Augustsson wrote:
A double has 53 bits in the mantissa which means that for a running time of about 24 hours you'd still have picoseconds. I doubt anyone cares about picoseconds when the running time is a day.
The above is an unfounded claim about the rest of humanity.
That's why I think a Double is a good choice, it adapts to the time scale involved.
Let's compute:
tTooBig :: Double tTooBig = 2^53
main = do print (tTooBig == 1+ tTooBig)
The above prints True. How long does your computer have to be running before losing picosecond resolution?
    tHours :: Double
    tHours = tTooBig / (10 ^ 12) / 60 / 60
tHours is 2.501999792983609. My laptop battery lasts longer. Nanosecond precision is lost after about 104 days. -- Chris
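For comparison, the Int64 half of the figure above, as an illustrative one-liner (assumes only Data.Int from the standard libraries):

    import Data.Int (Int64)

    -- A signed 64-bit count of picoseconds overflows after roughly 106.75 days.
    int64Days :: Double
    int64Days = fromIntegral (maxBound :: Int64) / 1e12 / 86400

    main :: IO ()
    main = print int64Days   -- prints approximately 106.75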

On Sun, Jan 11, 2009 at 8:28 PM, ChrisK
A Double and an Int64 are both 8 bytes; counting picoseconds, a Double lasts 2.5 hours and an Int64 lasts 106 days. Going to a 12-byte integer lets you count to 3.9 billion years (signed). Going to a 16-byte integer is over 10^38 years.
Lennart Augustsson wrote:
A double has 53 bits in the mantissa which means that for a running time of about 24 hours you'd still have picoseconds. I doubt anyone cares about picoseconds when the running time is a day.
The above is an unfounded claim about the rest of humanity.
It's not really about humanity, but about physics. The best known clocks have a long-term error of about 1e-14. If anyone claims to have made a time measurement where the accuracy exceeds the precision of a Double, I will just assume that this person is a liar.

For counting discrete events, like clock cycles, I want something like Integer or Int64. For measuring physical quantities, like CPU time, I'll settle for Double, because we can't measure any better than this (this can of course become obsolete, but I'll accept that error).

-- Lennart

I've found the picosecond accuracy useful in working with 'rate equivalent' real-time systems: systems where the individual timings (their jitter) are not critical but the long-term rate should be accurate - the extra precision helps with keeping the error accumulation under control.

When you are selling something (like data bandwidth) and you are pacing the data stream on a per-packet basis, you definitely want any error to accumulate slowly - you are in the 10^10 events per day range here.

Neil

Aren't Doubles evil? Integer is a nice type, compliant with the Haskell philosophy. Doubles are not CDoubles, IEEE, infinite precision, or anything long-term meaningful. (Warning: non-expert opinion.)

Neil Davies wrote:
I've found the picosecond accuracy useful in working with 'rate equivalent' real-time systems: systems where the individual timings (their jitter) are not critical but the long-term rate should be accurate - the extra precision helps with keeping the error accumulation under control.
When you are selling something (like data bandwidth) and you are pacing the data stream on a per-packet basis, you definitely want any error to accumulate slowly - you are in the 10^10 events per day range here.
Neil
Now I am posting just because I like to look at the time scales. A rate of 10^10 per day is a period of 8.64 microseconds. If you want to slip only 1 period per year then you need a fractional accuracy of 2.74 * 10^-13. In one day this is a slip of 23.7 nanoseconds. So atomic-time radio synchronization is too inaccurate. I have seen GPS receivers that claim to keep the absolute time to within 100 nanoseconds.

Lennart is right that 1 picosecond accuracy is absurd compared to all the jitters and drifts in anything but an actual atomic clock in your room. But since CPUs tick faster than a nanosecond, CPUTime needs better than 1 nanosecond granularity. I agree with Lennart: I also want an Integral type; it keeps the granularity constant and avoids all the pitfalls of doing math with a Double. Out of simplicity I can see why the granularity was set to 1 picosecond, as it is slightly easier to specify than 100 picoseconds, 10 picoseconds, or 1/60 nanosecond (hmmm... arcnanosecond?).

Maybe Haskell should name the "1/60 nanosecond" unit something clever and create a new Time submodule using it for April 1st. [Base 60 is the real standard: http://en.wikipedia.org/wiki/Babylonian_mathematics has an 1800 B.C. tablet with sqrt 2 in base 60 as (1).(24)(51)(10).]

-- Chris
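The arithmetic behind those figures, written out as a small illustrative calculation (the names are mine, not from any library):

    periodSeconds :: Double
    periodSeconds = 86400 / 1e10                 -- 10^10 events/day: 8.64e-6 s

    fractionalAccuracy :: Double
    fractionalAccuracy = periodSeconds / (365.25 * 86400)   -- ~2.74e-13

    slipPerDay :: Double
    slipPerDay = fractionalAccuracy * 86400      -- ~2.37e-8 s, i.e. ~23.7 ns

    main :: IO ()
    main = mapM_ print [periodSeconds, fractionalAccuracy, slipPerDay]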

On Mon, 12 Jan 2009, ChrisK wrote:
Lennart is right that 1 picosecond accuracy is absurd compared to all the jitters and drifts in anything but an actual atomic clock in your room. But since CPUs tick faster than a nanosecond, CPUTime needs better than 1 nanosecond granularity. I agree with Lennart: I also want an Integral type; it keeps the granularity constant and avoids all the pitfalls of doing math with a Double. Out of simplicity I can see why the granularity was set to 1 picosecond, as it is slightly easier to specify than 100 picoseconds, 10 picoseconds, or 1/60 nanosecond (hmmm... arcnanosecond?).
The FreeBSD kernel uses a 64+64-bit fixed-point type to represent time, where the integer part is a normal Unix time_t. The fractional part is 64 bits wide in order to be able to represent multi-GHz frequencies precisely.
http://phk.freebsd.dk/pubs/timecounter.pdf
Tony.
--
f.anthony.n.finch
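A hypothetical Haskell rendering of that 64+64-bit representation, just to show the shape; the BinTime name and the carry handling are made up for illustration and are not an existing API:

    import Data.Int  (Int64)
    import Data.Word (Word64)

    data BinTime = BinTime
      { btSec  :: !Int64    -- whole seconds (Unix time_t style)
      , btFrac :: !Word64   -- fraction of a second, in units of 2^-64 s
      } deriving (Eq, Ord, Show)

    -- Add two bintimes, carrying overflow from the fractional part.
    addBinTime :: BinTime -> BinTime -> BinTime
    addBinTime (BinTime s1 f1) (BinTime s2 f2) =
      let f     = f1 + f2                  -- Word64 addition wraps modulo 2^64
          carry = if f < f1 then 1 else 0  -- a wrap means a full second elapsed
      in  BinTime (s1 + s2 + carry) f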

Tony Finch wrote:
The FreeBSD kernel uses a 64+64-bit fixed-point type to represent time, where the integer part is a normal Unix time_t. The fractional part is 64 bits wide in order to be able to represent multi-GHz frequencies precisely.
"Multi-GHz" being a euphemism for 18.45*10^9 GHz, over 18 billion GHz. I just read through that. The granularity is 2^-64 seconds, or 5.4*10^-20 seconds? That is 54 nano-pico-seconds. I can see needing better than a nanosecond, and going to milli-nanoseconds like Haskell, but jumping close to pico-nano-seconds? That skips right past micro-nano-seconds and nano-nano-seconds. That's about 20 million times more resolution than Haskell's picoseconds. My, that was fun to write.

It looks like an excellent performance hack for OS kernels. 64 bits make for simple register and cache access, the compiled code is small and quick, etc. As a portable API it is far too complicated to use, not least because probably only FreeBSD has that API.

Note that at 10^-20 seconds the general relativistic shift due to altitude will matter over less than the thickness of a closed laptop. Defining "now" that accurately has meaning localized to less than your computer's size. The warranty for the bottom of your screen will expire sooner than that of the top. Only stock traders and relativistic particles care about time intervals that short. "FreeBSD - designed for the interstellar craft of tomorrow."

Hmm... The W and Z bosons decay the fastest, with 10^-25 second lifetimes, the shortest known lifetimes that I can find. The fundamental Planck scale, the shortest amount of time in today's physics, is 5.4*10^-44 seconds. So with 80 more bits FreeBSD would be at the fundamental limit. Of course the conversion then depends on the values of h, c, and G. Now that would also be a good April Fool's joke proposal.

-- Chris

I just tried getCPUTime on Windows and it seems to tick really slowly. It changes every 15600100000 picoseconds, so about 15600 microseconds (roughly 64 ticks per second), which is indeed the interval at which Windows updates its "tick" count. So anyway, a lot of room to go before reaching picosecond resolution :) But is this intended behavior? How does it perform on Linux? Should it behave the same on all platforms?
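One way to observe that tick size empirically is to poll getCPUTime until the value changes; a minimal sketch using only the standard API (the busy loop is deliberate, since CPU time only advances while the process is actually running):

    import System.CPUTime (getCPUTime)

    -- Busy-wait until getCPUTime reports a new value and return the step.
    observedTick :: IO Integer
    observedTick = do
      t0 <- getCPUTime
      let spin = do
            t1 <- getCPUTime
            if t1 /= t0 then return (t1 - t0) else spin
      spin

    main :: IO ()
    main = do
      dt <- observedTick
      putStrLn ("getCPUTime advanced by " ++ show dt ++ " ps")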
participants (8)
- ChrisK
- Duncan Coutts
- Lennart Augustsson
- Manlio Perillo
- Mauricio
- Neil Davies
- Peter Verswyvelen
- Tony Finch