
John Meacham writes:
We should define the ClockTime to be in terms of TAI to the best of the system's ability. At worst, we subtract 20 seconds from POSIX time; this would be infinitely better than not knowing whether ClockTimes are TAI or POSIX and whether we can safely subtract them or do anything interesting with them.
Ok, but I don't understand why subtracting 20 seconds from POSIX time would be any better - that doesn't fix the fact that the scale is missing leap seconds, does it? You still can't do arithmetic on it and get meaningful results.

I think I understand the problem. It is this:

Many systems run their clocks on POSIX time_t. We could reconstruct TAI time from time_t by adding leap seconds, except around the actual time of a leap second. Around the time of a leap second, the NTP daemon will be adjusting the system clock backwards, and we won't be able to tell what the correct time is.

Furthermore, we don't know whether the system is running on POSIX time_t, or "correct" time_t. libtai assumes a "correct" time_t (this in itself is evidence that we should too).

On a system with glibc, you can set your system time to "correct" time_t by setting TZ to something like "right/GMT" (try it), and you also have to configure ntpd to do the right thing (I've no idea how to do this). On a system configured like this, our ClockTime will automatically be TAI-based, and there's no need for leap-second tables. I suggest this is the way we should go - users who really need a correct ClockTime should configure their systems appropriately.

So, we could either:

(a) Try to reconstruct TAI time from the system time, and accept that it will be wrong around the time of a leap second. We'll need a way to tell the library whether the system time is POSIX or not. Perhaps:

    systemTimeIncludesLeapSeconds :: Bool -> IO ()

(b) Accept whatever the system tells us. On a correctly-configured system, you get a TAI-based ClockTime; otherwise you get a POSIX-based ClockTime.

(c) Or we can ditch the whole idea.

I vote for (b). (I'll leave the other points in John's message until we've sorted this out, but they won't be lost.)

Cheers,
Simon
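(For concreteness, a rough sketch of what option (a) might look like in Haskell follows. Only systemTimeIncludesLeapSeconds is from the proposal above; the ClockTime representation, the global flag, and the stubbed leap-second count are invented purely to show the shape of the idea, not a worked-out design.)

    import Data.IORef
    import System.IO.Unsafe (unsafePerformIO)

    -- Hypothetical ClockTime: picoseconds since some fixed epoch.
    newtype ClockTime = ClockTime Integer deriving (Eq, Ord, Show)

    -- Global flag recording whether the system clock already includes
    -- leap seconds ("correct"/TAI-style time_t) or is plain POSIX.
    {-# NOINLINE includesLeapSeconds #-}
    includesLeapSeconds :: IORef Bool
    includesLeapSeconds = unsafePerformIO (newIORef False)

    -- The setter from option (a): the program tells the library what
    -- kind of clock the system runs on.
    systemTimeIncludesLeapSeconds :: Bool -> IO ()
    systemTimeIncludesLeapSeconds = writeIORef includesLeapSeconds

    -- Placeholder: leap seconds inserted before the given POSIX second
    -- count.  A real implementation needs a table.
    leapSecondsBefore :: Integer -> Integer
    leapSecondsBefore _ = 22

    -- Build a TAI-flavoured ClockTime from a raw POSIX second count.
    clockTimeFromSystem :: Integer -> IO ClockTime
    clockTimeFromSystem posixSecs = do
      correct <- readIORef includesLeapSeconds
      let secs | correct   = posixSecs                           -- clock is already TAI-based
               | otherwise = posixSecs + leapSecondsBefore posixSecs
      return (ClockTime (secs * 1000000000000))                  -- seconds -> picoseconds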

On Fri, Aug 01, 2003 at 12:32:30PM +0100, Simon Marlow wrote:
John Meacham writes:
We should define the ClockTime to be in terms of TAI to the best of the system's ability. At worst, we subtract 20 seconds from POSIX time; this would be infinitely better than not knowing whether ClockTimes are TAI or POSIX and whether we can safely subtract them or do anything interesting with them.
Ok, but I don't understand why subtracting 20 seconds from POSIX time would be any better - that doesn't fix the fact that the scale is missing leap seconds, does it? You still can't do arithmetic on it and get meaningful results.
The point is not that subtracting 20 gives you the ability to do math, but that POSIX seconds are not equal to real seconds. A TAI second is equivalent to a standard second; a POSIX second is slightly longer than a standard second. The facts that the proposal specifies that ClockTimes are in picoseconds AND that a ClockTime may be a POSIX timestamp directly contradict each other. POSIX time_t's should not be thought of as an 'approximation' of real time or TAI time: they have a well defined meaning and can accurately represent specific times to within slightly less than a second. Treating them as counts of standard seconds, however, is clearly incorrect, as that is not what they are; they represent a number that can be trivially converted to an exact UTC time.
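(A tiny worked illustration of the point about POSIX ticks, using the leap second inserted at the end of 1998-12-31; the names are invented for the example and nothing here is proposed API.)

    -- POSIX time_t values straddling the leap second inserted at the end
    -- of 1998-12-31 (the 23:59:60 UTC second).
    posixBefore, posixAfter :: Integer
    posixBefore = 915148799   -- 1998-12-31 23:59:59 UTC
    posixAfter  = 915148800   -- 1999-01-01 00:00:00 UTC

    -- POSIX arithmetic says one second elapsed between the two instants...
    posixDelta :: Integer
    posixDelta = posixAfter - posixBefore   -- == 1

    -- ...but two real (SI) seconds actually elapsed, because 23:59:60 was
    -- inserted in between; a fixed picoseconds-per-tick reading of these
    -- numbers cannot be right.
    realDelta :: Integer
    realDelta = 2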
I think I understand the problem. It is this:
Many systems run their clocks on POSIX time_t. We could reconstruct TAI time from time_t by adding leap seconds, except around the actual time of a leap second. Around the time of a leap second, the NTP daemon will be adjusting the system clock backwards, and we won't be able to tell what the correct time is.
Furthermore, we don't know whether the system is running on POSIX time_t, or "correct" time_t.
Here is the important bit: the POSIX time_t IS the correct time_t. Changing the meaning of time_t would break many things: filesystem timestamps would be wrong, and any network protocols which require monotonically increasing time would mess up. Systems which have a different notion of time_t are NOT POSIX. What if we port to some other operating system which has its own notion of time, such as DOS, where each increment of its counter is equivalent to slightly over 2 seconds? Do we let ClockTime be defined in terms of that? Do we convert it to POSIX time? To TAI time? What about the definition of ClockTime being in terms of standard picoseconds? There is no way to reconcile these.
libtai assumes a "correct" time_t. (this in itself is evidence that we should too).
djb's page is somewhat misleading: he describes the way 'it should have been done', which is great, but only works if every computer everywhere is changed at once. POSIX has since come up with a better solution which maintains backwards compatibility: POSIX timestamps now represent exact UTC times. Since UTC is well defined, converting a timestamp to some other format is easy and possible. This has the advantage that people who want to work with TAI or whatnot have a well defined way to translate to/from POSIX times, and all old POSIX timestamps remain valid and accurate. The tradeoff was that POSIX ticks are no longer equal to seconds; if you think about it, there was no way to keep both the meaning of old timestamps and a uniform second. POSIX chose the lesser of two evils and defined it precisely, in such a way that we can get better actual values to work with, like TAI.

TAI is clearly the correct choice for a computer interface. The only reason POSIX didn't choose it for time_t is that there was legacy code and systems to deal with. This is the ONLY reason. Since we are defining a new interface for a new language, there is no reason for us to feel bound by the same arbitrary constraint. TAI is no more tricky or difficult to implement than POSIX time; both systems need to be informed of leap seconds. With a POSIX system, this information comes as explicit adjustments to the system clock.
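(As an illustration of how a POSIX timestamp converts to another scale once leap-second information is available, here is a small Haskell sketch. The table is abbreviated, the type names are invented for the example, and counting TAI seconds from the 1970 origin is a simplification rather than a statement about any real API.)

    -- Seconds since 1970-01-01 00:00:00 UTC with leap seconds ignored (POSIX).
    type PosixSeconds = Integer
    -- Seconds actually elapsed on the TAI scale, counted from the same 1970
    -- origin for simplicity (real TAI labelling differs by a fixed offset).
    type TaiSeconds   = Integer

    -- (POSIX timestamp at which a new TAI-UTC offset took effect, that offset),
    -- most recent first.  Abbreviated: a real table runs back to 1972 and is
    -- extended whenever a new leap second is announced.
    leapTable :: [(PosixSeconds, Integer)]
    leapTable =
      [ (915148800, 32)   -- 1999-01-01: TAI-UTC = 32 s
      , (867715200, 31)   -- 1997-07-01: TAI-UTC = 31 s
      , (820454400, 30)   -- 1996-01-01: TAI-UTC = 30 s
      ]

    -- Convert by adding the TAI-UTC offset in force at the given instant.
    posixToTai :: PosixSeconds -> TaiSeconds
    posixToTai t = t + offset
      where
        offset = case [d | (boundary, d) <- leapTable, t >= boundary] of
                   (d:_) -> d
                   []    -> 10   -- TAI-UTC was 10 s where the full table begins (1972)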
On a system with glibc, you can set your system time to "correct" time_t, by setting TZ to something like "right/GMT" (try it), and you also have to configure ntpd to do the right thing (I've no idea how to do this). On a system configured like this, our ClockTime will automatically be TAI-based, there's no need for leap-second tables. I suggest this is the way we should go - users who really need a correct ClockTime should configure their systems appropriately.

This is really the wrong thing to do: it would break anything that depends on POSIX time_t's. It is a misfeature of glibc from the days when it was unclear how the actual definition of time_t would turn out, but now it is well defined.

(a) Try to reconstruct TAI time from the system time, and accept that it will be wrong around the time of a leap second. We'll need a way to tell the library whether the system time is POSIX or not. Perhaps:

    systemTimeIncludesLeapSeconds :: Bool -> IO ()

Hmm? No need for all this complication. If the system is POSIX, then we assume a POSIX time_t; for 'trouble' systems (non-POSIX, but internally well behaved) we can always fall back to calling gmtime(3) and snarfing the output. Actually, a single call to gmtime on a value of 0 will tell you whether the system is POSIX or TAI, since their epochs differ.
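(A sketch of that gmtime(0) probe in Haskell, via the FFI. gmtime and asctime are the standard C library functions; everything else is invented for the example, and whether the printed result actually distinguishes a POSIX clock from a "correct" one depends on the platform, as discussed above.)

    {-# LANGUAGE ForeignFunctionInterface #-}
    import Foreign
    import Foreign.C

    -- struct tm, treated as an opaque blob: we only hand it straight back
    -- to asctime for formatting.
    data CTm

    foreign import ccall unsafe "time.h gmtime"
      c_gmtime :: Ptr CTime -> IO (Ptr CTm)

    foreign import ccall unsafe "time.h asctime"
      c_asctime :: Ptr CTm -> IO CString

    -- Render time_t == 0 as a calendar time.  On a plain POSIX system this
    -- yields "Thu Jan  1 00:00:00 1970"; a system whose time_t has a
    -- different epoch would show something else.
    epochProbe :: IO String
    epochProbe =
      with (0 :: CTime) $ \p -> do
        tm  <- c_gmtime p
        str <- c_asctime tm
        peekCString str

    main :: IO ()
    main = epochProbe >>= putStr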
(b) Accept whatever the system tells us. On a correctly-configured system, you get a TAI-based ClockTime, otherwise you get a POSIX-based ClockTime.
A correctly configured system will return a POSIX time_t if it is POSIX. We can always assume the output of gmtime(3) is correct and convert to a TAI time. If we accept whatever the system gives us, and can't assume anything about its meaning, then what is the point of having a ClockTime anyway? The user can do nothing interesting with it other than convert it to a CalendarTime - and, let us not forget, we have to be able to convert ClockTimes to CalendarTimes anyway, at which point we will need to know what ClockTimes are anyway.

So, to sum up:

- POSIX ticks (and possibly any system's internal clock) are not equal to standard seconds; this directly contradicts our choice that ClockTimes should have a unit of picoseconds.

- The only reason the situation is confused in C land is backwards compatibility with old timestamps. There is no need to inherit that confusion.

- Declaring that ClockTimes may be either POSIX or TAI is dangerous. They are similar but represent very different things: POSIX time_t's are timestamps, meaning they represent a specific moment in time and are only well defined for the half-space later than the epoch; TAI is a representation of time offsets (and hence can represent timestamps, measured from a known reference point, the epoch). If we want to be able to represent time differences, then we need something of the latter variety.

- And, well, it is bad to have something that almost always works a certain way but is not actually defined to. Every time you subtract two ClockTimes you have to worry about that one person out there whose system your program will break on. If we define ClockTimes as TAI, you no longer have to worry about that person, because his system is buggy and your code is correct since it follows the standard :)

John

--
John Meacham - California Institute of Technology, Alum. - john@foo.net