
From: Simon Marlow [mailto:simonmar@microsoft.com]
I like the idea of having a single notion of absolute time, which is independent of TAI or UTC time. You can do arithmetic on absolute time (add/subtract absolute units of time, find absolute time differences), and convert to/from TAI and UTC.
This is something I don't see any motivation for, from a user's point of view. AFAICT, there _is_ no notion of absolute time, just time w.r.t. a given calendar (if you view TAI and UTC as calendars). The representation of time as "units since epoch" is "just an implementation detail" :-)

Seriously, though, what use is being able to find the difference between two times like this? If I want the difference between two times, I want to specify in what units the difference should be, e.g. millisecs, seconds, days, years, etc. The difference between times tA and tB might be 86400 seconds, but they might still be on the same day (e.g. because of a leap second), so if I ask for (diffDays tA tB) I might get zero, one, or two, depending on the date.

Alistair.
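A minimal sketch of this point, using the Data.Time names from the later time package (not from any proposal in this thread): the answer you get genuinely depends on which units you ask for.

    -- Two instants one hour apart that nevertheless fall on different calendar days.
    import Data.Time.Clock    (UTCTime (..), diffUTCTime, secondsToDiffTime)
    import Data.Time.Calendar (fromGregorian, diffDays)

    main :: IO ()
    main = do
      let tA = UTCTime (fromGregorian 2005 1 26) (secondsToDiffTime (23 * 3600 + 1800)) -- 23:30
          tB = UTCTime (fromGregorian 2005 1 27) (secondsToDiffTime 1800)               -- 00:30
      print (diffUTCTime tB tA)                   -- 3600s elapsed...
      print (diffDays (utctDay tB) (utctDay tA))  -- ...but 1 calendar day apart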

Seriously, though, what use is being able to find the difference between two times like this? If I want the difference between two times, I want to specify in what units the difference should be, e.g. millisecs, seconds, days, years, etc. The difference between times tA and tB might be 86400 seconds, but they might still be on the same day (e.g. because of a leap second), so if I ask for (diffDays tA tB) I might get zero, one, or two, depending on the date.
Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight. This is why you want to be able to take the difference in absolute time. In that case, if you wanted an absolute number of days, you would use an average or standardised day length. (Days are not really a satisfactory unit; perhaps kilo-seconds?)

You also might want to get a real difference, in which case you convert the start and end times to localtime and then take the difference. Localtime is no use for measuring intervals, as some days are longer than others... (so to know exactly how long something took, you would have to know how many leap seconds fell within the time interval).

Keean.
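A minimal sketch of the interval-measurement half of this, assuming a monotonic clock source: GHC.Clock (added to base long after this thread) exposes a counter that is unaffected by leap seconds, NTP adjustments, or the date rolling over at midnight.

    import GHC.Clock (getMonotonicTime)  -- seconds since an arbitrary start, as a Double

    -- Time an IO action in elapsed seconds, independent of the calendar.
    timeAction :: IO a -> IO (a, Double)
    timeAction act = do
      start  <- getMonotonicTime
      result <- act
      end    <- getMonotonicTime
      pure (result, end - start)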

Keean Schupke
Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?

If you assume that gettimeofday() returns UTC and you convert it to TAI using a leap second table, more often than not you would actually introduce a jump near a leap second - compensating for the extra second in UTC which will not be observed in practice.

-- Marcin 'Qrczak' Kowalczyk

Marcin 'Qrczak' Kowalczyk wrote:
Keean Schupke writes:

Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?
The computer's timer should not attempt to adjust for leap seconds... Consider it a chronograph.

Keean.

Keean Schupke writes:
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second [...].
The computer's timer should not attempt to adjust for leap seconds...
I seem to recall reading somewhere that leap second inserts are supposed to be represented as: 23:59:59, 23:59:60, 00:00:00. Granted, that doesn't really help with calculating durations accurately. ;-)

Peter

Peter Simons
The computer's timer should not attempt to adjust for leap seconds...
I seem to recall reading somewhere that leap second inserts are supposed to be represented as:
23:59:59 23:59:60 00:00:00
Yes, but the Unix API doesn't use this. The OS communicates in terms of UTC seconds since the epoch (and microseconds / nanoseconds). Userspace translates between this and broken-down time (year, month etc.), and that translation assumes there are no leap seconds (every day has exactly 86400 seconds). Userspace also takes timezones into account; the kernel reports UTC only. I don't know how it looks on other systems.

On Linux it is possible to set timezones which assume that the system time is TAI instead of UTC, and translate it to UTC. In this case the local time can have tm_sec == 60. It breaks programs which rely on POSIX rules and perform the conversion themselves. I don't know how NTP clients handle this, or how many people actually use it in practice (it's not the default).

-- Marcin 'Qrczak' Kowalczyk
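A toy rendering of the POSIX rule described above, just to show why a seconds field of 60 can never come out of it; this is illustrative only, and a real conversion also handles the calendar and time zones.

    -- Split a POSIX "seconds since the epoch" count into (hour, minute, second),
    -- assuming every day has exactly 86400 seconds, as POSIX does.
    posixTimeOfDay :: Integer -> (Integer, Integer, Integer)
    posixTimeOfDay secondsSinceEpoch =
      let secsToday = secondsSinceEpoch `mod` 86400
          (h, rest) = secsToday `divMod` 3600
          (m, s)    = rest `divMod` 60
      in (h, m, s)   -- s is always in 0..59, so 23:59:60 can never be produced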

Keean Schupke
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?
The computer's timer should not attempt to adjust for leap seconds...
But it does. Eventually, or sometimes immediately (I don't know how fast NTP clients react, or whether they are programmed for faster adjustment around a leap second). All syscalls give us an approximation of UTC or a time derived from UTC.

-- Marcin 'Qrczak' Kowalczyk

Marcin 'Qrczak' Kowalczyk wrote:
But it does.
Eventually, or sometimes immediately (I don't know how fast NTP clients react, or whether they are programmed for faster adjustment around a leap second).
All syscalls give us an approximation of UTC or a time derived from UTC.
If you don't run NTP (or your machine is not attached to the network) the timer will count milliseconds without adjustment for leap seconds. When the machine is switched off, the hardware clock likewise does not account for leap seconds. When you switch on, the time is copied from the hardware clock. The system timer (which counts time since switch-on) will surely not be adjusted for leap seconds, as its purpose is to measure a time interval.

I guess I don't really know which timers NTP updates... but as some systems may not run NTP, it seems the system time cannot be assumed to have compensated for leap seconds. This seems worse than having things one way or the other...

Keean

Keean Schupke
If you don't run NTP (or your machine is not attached to the network) the timer will count milliseconds without adjustment for leap seconds. When the machine is switched off, the hardware clock likewise does not account for leap seconds. When you switch on, the time is copied from the hardware clock. The system timer (which counts time since switch-on) will surely not be adjusted for leap seconds, as its purpose is to measure a time interval.
It doesn't imply that it runs in TAI. I would guess it usually runs UTC using the previous leap second count (assuming it was accurate to a second at all), until someone sets it again.

Well, I think PC clocks are not very precise at all if they are not synchronized with external sources. The Linux NTP client has the feature of measuring and then adjusting the relative speed of the system clock for a reason.

If some people run their system clocks in TAI, how should a Haskell system detect whether it's TAI or UTC?

-- Marcin 'Qrczak' Kowalczyk

On 2005-01-26, Marcin 'Qrczak' Kowalczyk wrote:

Keean Schupke writes:

Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?
There are patches around that let various systems communicate using NTP and UTC, but keep time internally as TAI, and ignore POSIX. We can't preserve nice properties when people reset things manually. So I don't consider that an error. -- Aaron Denney -><-

Marcin 'Qrczak' Kowalczyk
Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?
So, the choice is either UTC, which corresponds to what most Unix (and other?) computers provide, but will generally be inaccurate in the presence of leap seconds. Or we can use TAI, which will in most cases have to be converted from UTC, and thus inherit all its inaccuracies in spite of its theoretically nice properties.

At least with TAI it seems like it would be possible to configure a system to do the right thing, so I would still vote for that (and as for Posix compatibility, well, it sucks in so many other ways anyway :-)

(GPS doesn't use leap seconds either, btw)

-kzm

I wrote:
(GPS doesn't use leap seconds either, btw)
And according to this URL: http://www.angrycoder.com/article.aspx?cid=5&y=2004&m=5&d=13 .NET's DateTime is derived from TAI (i.e. it doesn't include leap seconds).

-kzm

On Thu, Jan 27, 2005 at 09:00:33AM +0100, Ketil Malde wrote:
Marcin 'Qrczak' Kowalczyk writes:

Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks) or the clock is not adjusted at all, and at the next time it is set it will just be on average one second later (if it is being set manually)?
So, the choice is either UTC, which corresponds to what most Unix (and other?) computers provide, but will generally be inaccurate in the presence of leap seconds. Or we can use TAI, which will in most cases have to be converted from UTC, and thus inherit all its inaccuracies in spite of its theoretically nice properties.
At least with TAI it seems like it would be possible to configure a system to do the right thing, so I would still vote for that (and as for Posix compatibility, well, it sucks in so many other ways anyway :-)
It doesn't actually matter that much if we convert from UTC; presumably we have up-to-date (or mostly up-to-date) leap second tables for times in the past. This means we can perfectly translate between the UTC clock in your computer and TAI to get the current TAI time (and back and forth). There are only issues when using future UTC dates, but that is inherently problematic, and at least we have TAI as an option, which puts us ahead of a lot of other languages :)

John
-- John Meacham - ⑆repetae.net⑆john⑈
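A toy version of such a table-driven conversion, with a deliberately tiny leap-second table (only the two real entries nearest this thread; a real table has one entry per leap second since 1972). For instants the table covers, the conversion is exact; the remaining problem is future instants, because future leap seconds are not yet announced.

    -- (UTC seconds since 1970-01-01, value of TAI - UTC from that instant on)
    leapTable :: [(Integer, Integer)]
    leapTable =
      [ (915148800,  32)   -- 1999-01-01: TAI - UTC became 32 s
      , (1136073600, 33)   -- 2006-01-01: TAI - UTC became 33 s
      ]

    -- Convert a POSIX-style UTC second count to TAI seconds using the table.
    utcToTai :: Integer -> Integer
    utcToTai utc = utc + offset
      where
        offset = case [d | (from, d) <- leapTable, from <= utc] of
                   []      -> 32           -- before the table begins: not handled here
                   offsets -> last offsets -- most recent entry that has taken effect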

Bayley, Alistair writes:
AFAICT, there _is_ no notion of absolute time, just time w.r.t. a given calendar (if you view TAI and UTC as calendars).
Well, there _is_ an absolute notion of time; that is what TAI is. The reason why that seems to be of little use for the general public is that this absolute time scale just doesn't correspond to calendar time. There simply is no accurate mapping between TAI and the information 2043-04-01T00:00:00.

Technically, you shouldn't be dealing with anything except TAI, because that's the only reliable time system we have. If you do, however, you'll greatly disappoint people's expectations every now and then. ;-)

A nice introduction to the wonders of designing a time library is this short text: http://boost.org/doc/html/date_time/details.html#date_time.tradeoffs

I think that link hasn't been mentioned before, so I figured I should throw it into the mix.

Peter
participants (7)

- Aaron Denney
- Bayley, Alistair
- John Meacham
- Keean Schupke
- Ketil Malde
- Marcin 'Qrczak' Kowalczyk
- Peter Simons