
On Thu, Jan 27, 2005 at 09:00:33AM +0100, Ketil Malde wrote:
Marcin 'Qrczak' Kowalczyk
writes: Yes, timing between points in time should be independent of leap seconds, so if a program takes 5 seconds to run it should always take 5 seconds, even if it runs across midnight.
How would you implement it, given that gettimeofday() and other Unix calls which return the current time either gradually slow down near a leap second (if NTP is used to synchronize clocks), or the clock is not adjusted at all and the next time it is set it will simply be, on average, one second later (if it is set manually)?
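The underlying problem is that POSIX timestamps pretend every UTC day has exactly 86400 seconds, so an inserted leap second simply disappears from the count. A minimal Python sketch (using the real leap second inserted at the end of 1998) shows the effect:

```python
import calendar

# POSIX time assumes every UTC day is exactly 86400 s long, so the
# leap second 1998-12-31T23:59:60Z is invisible: it cannot even be
# represented, and the timestamps on either side of it differ by 1.
before = calendar.timegm((1998, 12, 31, 23, 59, 59))  # last ordinary second of 1998
after = calendar.timegm((1999, 1, 1, 0, 0, 0))        # first second of 1999

print(after - before)  # prints 1, although 2 real (SI) seconds elapsed
```

This is exactly why an interval measured by subtracting two such timestamps can be off by a second when it spans a leap second.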
So the choice is either UTC, which corresponds to what most Unix (and other?) computers provide but will generally be inaccurate in the presence of leap seconds; or TAI, which in most cases will have to be converted from UTC and thus inherits all of UTC's inaccuracies in spite of its theoretically nice properties.
At least with TAI it seems like it would be possible to configure a system to do the right thing, so I would still vote for that (and as for POSIX compatibility, well, it sucks in so many other ways anyway :-)
It doesn't actually matter that much if we convert from UTC; presumably we have up-to-date (or mostly up-to-date) leap-second tables for times in the past. This means we can translate perfectly between the UTC clock in your computer and TAI to get the current TAI time, and back and forth. There are only issues when using future UTC dates, but that is inherently problematic, and at least we have TAI as an option, which puts us ahead of a lot of other languages :)

        John

--
John Meacham - ⑆repetae.net⑆john⑈
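The table-driven conversion described above is straightforward to sketch. The following is a minimal, hypothetical illustration, not any particular library's API; the table is abbreviated to three entries (the offsets and effective dates are from the published TAI-UTC history, with 32 s being the offset in force as of early 2005), and a real implementation would load the full, current list:

```python
# Hypothetical UTC -> TAI conversion via a leap-second table.
# Each entry: (POSIX timestamp at which the offset takes effect, TAI-UTC in s).
# Abbreviated for illustration; a real table must be kept up to date
# from IERS announcements.
LEAP_TABLE = [
    (63072000, 10),    # 1972-01-01: TAI-UTC = 10 s
    (867715200, 31),   # 1997-07-01: TAI-UTC = 31 s
    (915148800, 32),   # 1999-01-01: TAI-UTC = 32 s
]

def utc_to_tai(utc_posix):
    """Map a POSIX (UTC) timestamp to TAI seconds since the epoch.

    Exact for past times covered by the table; for future dates the
    result is only a guess, since upcoming leap seconds are unknown.
    """
    offset = 0
    for start, tai_minus_utc in LEAP_TABLE:
        if utc_posix >= start:
            offset = tai_minus_utc
        else:
            break
    return utc_posix + offset
```

For example, `utc_to_tai(915148800)` (1999-01-01T00:00:00Z) yields the UTC timestamp plus 32. The inverse direction works the same way with the table interpreted on the TAI scale, which is why round-tripping past times is lossless while future dates remain inherently uncertain.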