
On 27 January 2005 17:06, Daan Leijen wrote:
Marcin 'Qrczak' Kowalczyk wrote:
You can't have gettimeofday() returning UTC and libtai returning TAI at the same time, because they return the same thing. This is the implementation from libtai:
[snip]
DJB (the author of libtai) disagrees with POSIX about what gettimeofday should return, and assumes that it actually returns what he wishes it returned.
Wow, that is terrible! Well, we cannot fix libraries. If libtai is that broken, we might just as well do it ourselves:
That's pretty much the conclusion I came to when I looked at libtai for implementing my library.
If we assume that we can convert the current UTC time to TAI, we can calculate the TAI time at the start of the program and use time_t to keep track of the delta from that point -- here we take advantage of the time_t "bug" that it is not adjusted for leap seconds!
You can't assume that time_t is not being adjusted for leap seconds: the host might be running NTP, for example. The best thing to do seems to be to assume that time_t is a count of seconds since the epoch minus leap seconds, and calculate TAI from that. It might be wrong by up to a second around a leap second on a host running NTP, and slightly more wrong on a host not running NTP, but the latter probably don't care too much about second-accuracy anyway.
Cheers, Simon
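A minimal Haskell sketch of the rule Simon describes (treat the POSIX clock as seconds since the epoch not counting leap seconds, and add the current TAI-UTC offset), using the modern time package purely for illustration. The TAITime type, the getCurrentTAI name, and the hard-coded offset are inventions of this sketch; a real library would consult a maintained leap-second table.

import Data.Time.Clock.POSIX (getPOSIXTime)

-- Hypothetical: the POSIX clock reading plus the TAI-UTC offset,
-- i.e. an approximation of TAI expressed as seconds since the epoch.
newtype TAITime = TAITime Rational
  deriving (Eq, Ord, Show)

-- TAI - UTC in seconds; 32 was the published value in January 2005.
taiMinusUTC :: Rational
taiMinusUTC = 32

getCurrentTAI :: IO TAITime
getCurrentTAI = do
  posix <- getPOSIXTime   -- seconds since the epoch, leap seconds excluded
  return (TAITime (toRational posix + taiMinusUTC))

As Simon notes, this is off by up to a second around a leap second, and by however much the host clock itself is off.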

Simon Marlow wrote:
You can't assume that time_t is not being adjusted for leap seconds: the host might be running NTP, for example. The best thing to do seems to be to assume that time_t is a count of seconds since the epoch minus leap seconds, and calculate TAI from that. It might be wrong by up to a second around a leap second on a host running NTP, and slightly more wrong on a host not running NTP, but the latter probably don't care too much about second-accuracy anyway.
Why not directly use a system timer... When the library initialises we get the time, and then allocate a dedicated system timer, which we use to count time intervals...
Keean.
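A sketch of Keean's idea, assuming a monotonic timer can stand in for the "dedicated system timer" (GHC.Clock.getMonotonicTime is a modern convenience; the TimeBase name and fields are invented here):

import Data.Time.Clock (UTCTime, addUTCTime, getCurrentTime)
import GHC.Clock (getMonotonicTime)

-- Wall-clock and monotonic readings taken once, when the library initialises.
data TimeBase = TimeBase
  { baseWallClock :: UTCTime
  , baseMonotonic :: Double   -- monotonic seconds at initialisation
  }

initTimeBase :: IO TimeBase
initTimeBase = TimeBase <$> getCurrentTime <*> getMonotonicTime

-- Current time = initial wall-clock reading + monotonic seconds elapsed,
-- so later adjustments to the system clock are ignored.
currentTime :: TimeBase -> IO UTCTime
currentTime tb = do
  now <- getMonotonicTime
  return (addUTCTime (realToFrac (now - baseMonotonic tb)) (baseWallClock tb))

Keith's objection below still applies: the initial getCurrentTime reading is only as meaningful as the system clock it came from.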

Keean Schupke writes:
Why not directly use a system timer... When the library initialises we get the time, and then allocate a dedicated system timer, which we use to count time intervals...
Where does the library "get the time" from? You've just swept the problem from one part of the code to another.
--KW 8-)
--
Keith Wansbrough

On 2005-01-27, Simon Marlow wrote:
On 27 January 2005 17:06, Daan Leijen wrote:
Marcin 'Qrczak' Kowalczyk wrote:
You can't have gettimeofday() returning UTC and libtai returning TAI at the same time, because they return the same thing. This is the implementation from libtai:
[snip]
DJB (the author of libtai) disagrees with POSIX about what gettimeofday should return, and assumes that it actually returns what he wishes it returned.
More accurately, he expects admins to set it to TAI, and not to run programs that set it to anything other than TAI.
Wow, that is terrible! Well, we cannot fix libraries. If libtai is that broken, we might just as well do it ourselves:
That's pretty much the conclusion I came to when I looked at libtai for implementing my library.
The thing is that POSIX or not, storing UTC in a time_t is clearly broken. There are people working on systems that store TAI (or TAI + a fixed offset) in time_t, with special NTP daemons or patches to the standard ones. I would really like the ability to have Haskell programs do exactly the right thing in such an environment. This can't be autodetected, short of a network probe of a time server, and even that would just be a heuristic. The only way to fix this is to let people explicitly tell the library that they're on such a system. It'd be convenient if there were, say, environment variables defined for this purpose. (Or another convenient system. A file would be fine too, but I think environment variable would involve the least overhead.)
You can't assume that time_t is not being adjusted for leap seconds: the host might be running NTP, for example. The best thing to do seems to be to assume that time_t is a count of seconds since the epoch minus leap seconds, and calculate TAI from that. It might be wrong by up to a second around a leap second on a host running NTP, and slightly more wrong on a host not running NTP, but the latter probably don't care too much about second-accuracy anyway.
Certainly the best default at this time.
-- Aaron Denney -><-
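As a rough illustration of the "explicitly tell the library" idea above, a check of a hypothetical CLOCK_IS_TAI environment variable; the variable name and the ClockKind type are invented for this sketch:

import System.Environment (lookupEnv)

data ClockKind = ClockUTC | ClockTAI
  deriving (Eq, Show)

-- Default to the POSIX interpretation (UTC) unless the user says
-- the host keeps its clock on TAI.
systemClockKind :: IO ClockKind
systemClockKind = do
  val <- lookupEnv "CLOCK_IS_TAI"
  return $ case val of
    Just v | v `elem` ["1", "yes", "true"] -> ClockTAI
    _                                      -> ClockUTC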

Aaron Denney writes:
DJB (the author of libtai) disagrees with POSIX about what gettimeofday should return, and assumes that it actually returns what he wishes it returned.
More accurately, he expects admins to set it to TAI, and not to run programs that set it to anything other than TAI.
Does he propose a way for programs to discover whether the system clock is TAI or UTC? Does he provide an NTP client which works when it's TAI? It would still not solve the problem of programs which assume that it's UTC, as POSIX says. In particular it breaks gmtime() in the C library, it breaks implementations of Java (no matter whether they use gmtime() or reimplement it themselves), etc.
It'd be convenient if there were, say, environment variables defined for this purpose. (Or another convenient system. A file would be fine too, but I think environment variable would involve the least overhead.)
I agree. A correct long-term solution should allow both conventions to coexist, without changing the behavior of an existing standardized API. I would rather have a system which slows down for a second when leap seconds are announced than a system where different programs have different ideas about the current time all the time.

For example: there would be new syscalls for TAI time; a new kernel would treat the hardware clock as TAI and compute UTC from it when a process asks for gettimeofday (which is more accurate than the other direction); glibc would nevertheless translate in the other direction when run on an old kernel, so that programs would still work; there would be a central place to keep the leap second table, with a special program to notify the kernel about updates (which would propagate to running processes); and NTP clients would be enhanced to use the new API when available.

Note that it is realistic for a process to run for over half a year, so restarting each process between the announcement of a leap second and the moment it happens is not enough, and notifying each running program separately would be bad too - it would be a hack, not a solution.

I wonder why DJB and other people didn't propose such a solution, and instead they complain how POSIX is broken and propose something which breaks programs which obey POSIX rules.

Anyway, all this effort will be wasted if there are no more leap seconds and the system is changed, e.g. by adjusting by an hour every couple of centuries instead of a second every couple of months...

-- Marcin Kowalczyk <qrczak@knm.org.pl> http://qrnik.knm.org.pl/~qrczak/
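A small self-contained illustration of why the TAI-to-UTC direction is the well-defined one, built around the leap second inserted at the end of 2005. The tick numbering and function names are made up for this example, and a real implementation would drive the conversion from the full published leap-second table. Every uniform tick gets exactly one UTC label, including 23:59:60, while the POSIX value cannot name that second:

-- Tick 0 is 2005-12-31 23:59:55 UTC; ticks advance by one SI second,
-- as TAI does.  The inserted leap second 23:59:60 is tick 5.
labelOfTick :: Int -> String
labelOfTick n
  | n < 5     = "2005-12-31 23:59:" ++ show (55 + n)
  | n == 5    = "2005-12-31 23:59:60"              -- the leap second itself
  | otherwise = "2006-01-01 00:00:" ++ pad (n - 6)
  where pad k = (if k < 10 then "0" else "") ++ show k

-- The POSIX value, by contrast, has no name for the leap second: here
-- it repeats 1136073599, one common kernel behaviour (an NTP-slewed
-- clock would smear instead).
posixOfTick :: Int -> Integer
posixOfTick n
  | n <= 4    = 1136073595 + fromIntegral n
  | n == 5    = 1136073599
  | otherwise = 1136073594 + fromIntegral n

main :: IO ()
main = mapM_ putStrLn
  [ show (posixOfTick n) ++ "  " ++ labelOfTick n | n <- [0 .. 8] ]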

On 2005-01-27, Marcin 'Qrczak' Kowalczyk wrote:
Aaron Denney writes:
DJB (the author of libtai) disagrees with POSIX about what gettimeofday should return, and assumes that it actually returns what he wishes it returned.
More accurately, he expects admins to set it to TAI, and not to run programs that set it to anything other than TAI.
Does he propose a way for programs to discover whether the system clock is TAI or UTC?
No, he doesn't.
Does he provide an NTP client which works when it's TAI?
Yes. And yes, other programs will then see offset time values, etc.
I wonder why DJB and other people didn't propose such a solution, and instead they complain how POSIX is broken and propose something which breaks programs which obey POSIX rules.
Because POSIX rules are counterintuitive, and usually not correctly implemented. There's a somewhat reasonable folk-belief that the committee just didn't know quite enough about time and expected UTC to have behaviour more like TAI. Getting rid of what they see as intolerably broken behaviour is more important than interoperability. (I agree with most of the rest of what you said.)
-- Aaron Denney -><-

In article <87llaeobbu.fsf@qrnik.zagroda>, Marcin 'Qrczak' Kowalczyk wrote:
For example: there would be new syscalls for TAI time; a new kernel would treat the hardware clock as TAI and compute UTC from it when a process asks for gettimeofday (which is more accurate than the other direction); glibc would nevertheless translate in the other direction when run on an old kernel, so that programs would still work; there would be a central place to keep the leap second table, with a special program to notify the kernel about updates (which would propagate to running processes); and NTP clients would be enhanced to use the new API when available.
This turns out to be not at all hard to do in Linux with a kernel module, since kernel modules can intercept system calls and provide new ones very easily. Incidentally, I came across an amusing solution that makes gettimeofday always report the correct UTC time, from http://lkml.org/lkml/1998/9/9/50:
Actually, I think Ulrich was present when I proposed a similar solution: gettimeofday() will not return during 23:59:60. If a process calls gettimeofday() during a leap second, then the call will sleep until 0:00:00 when it can return the correct result.
This horrified the real-time people. It is, however, strictly speaking, completely correct.
-- Ashley Yakeley, Seattle WA
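A toy Haskell rendering of that trick. The isLeapSecondInProgress predicate is a placeholder; knowing when a leap second is in progress is exactly the hard part discussed above.

import Control.Concurrent (threadDelay)
import Data.Time.Clock (UTCTime, getCurrentTime)

-- Placeholder: True while the clock is inside an inserted 23:59:60.
isLeapSecondInProgress :: IO Bool
isLeapSecondInProgress = return False

-- "gettimeofday() will not return during 23:59:60": poll until the
-- leap second has passed, then return the now-unambiguous time.
getCurrentTimeBlocking :: IO UTCTime
getCurrentTimeBlocking = do
  leaping <- isLeapSecondInProgress
  if leaping
    then threadDelay 10000 >> getCurrentTimeBlocking   -- wait 10 ms and retry
    else getCurrentTime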
Participants (6):
- Aaron Denney
- Ashley Yakeley
- Keean Schupke
- Keith Wansbrough
- Marcin 'Qrczak' Kowalczyk
- Simon Marlow