Seth Kurtzberg <seth@cql.com> writes:
Also, what happens when you are using NTP? NTP might just correct
it, but it would screw up the calculations NTP uses, and it could
start oscillating.
The NTP client (at least on Linux) adjusts the time by making a jump
only if the time is very inaccurate. If the time has only drifted a bit,
it temporarily adjusts the speed of the system clock instead.
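
For what it's worth, on Linux you can see how the clock is currently
being adjusted with adjtimex(2). A minimal sketch of a read-only query
(whether the offset is reported in microseconds or nanoseconds depends
on the STA_NANO status bit):

#include <stdio.h>
#include <sys/timex.h>

int main(void)
{
    struct timex tx = { .modes = 0 };   /* modes = 0: only read, set nothing */
    int state = adjtimex(&tx);
    if (state == -1) {
        perror("adjtimex");
        return 1;
    }
    /* tx.offset is the remaining correction currently being slewed in;
       tx.freq is the frequency adjustment (ppm, shifted left by 16). */
    printf("clock state: %d\n", state);
    printf("remaining offset: %ld\n", (long)tx.offset);
    printf("frequency adjustment: %ld\n", (long)tx.freq);
    return 0;
}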
I'm also saying that I don't like the idea of using the same
resolution w.r.t. the system clock, because it suggests that the
time is known to a greater precision than is actually the case.
I don't think this would be a practical problem.
To me all this shows that the system clock needs to be handled as
a special case, not just converted into the high-resolution
representation.
Using a different representation of time just because somebody might
not be aware that the resolution of the system clock is not as good as
the representation suggests? No, this is an unnecessary complication.
I'm not, I hope, being pedantic here and talking about an irrelevant
issue. But if you look in any first-year engineering textbook (for an
engineering discipline other than CS), you will find that displaying
a value with a precision greater than the known precision of the
inputs is a cardinal sin. It's the wrong answer. The roundoff error
behavior is clearly going to be different.
The timeout of select(), poll() and epoll() is accurate only to the
timer interrupt (10 ms on older kernels and 1 ms on newer ones), yet
gettimeofday() is more precise (it returns microseconds, quite accurately;
the call itself takes 2 µs on my system). So even if gettimeofday() is
accurate, sleeping for a given amount of time may be much less accurate.
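
To make the difference concrete, here is a small C sketch that asks
poll() for a 1 ms sleep and measures with gettimeofday() how long it
actually slept; the overshoot you see depends on the kernel's timer
interrupt frequency:

#include <poll.h>
#include <stdio.h>
#include <sys/time.h>

static double elapsed_us(struct timeval a, struct timeval b)
{
    return (b.tv_sec - a.tv_sec) * 1e6 + (b.tv_usec - a.tv_usec);
}

int main(void)
{
    for (int i = 0; i < 5; i++) {
        struct timeval before, after;
        gettimeofday(&before, NULL);
        poll(NULL, 0, 1);               /* request a 1 ms sleep */
        gettimeofday(&after, NULL);
        printf("requested 1000 us, slept %.0f us\n",
               elapsed_us(before, after));
    }
    return 0;
}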
It definitely would be less accurate. But I don't see why that is
relevant to the discussion. A sleep generally causes a context switch,
so the granularity is much coarser. Whether poll causes a context
switch is implementation-dependent, and also depends on the specified
timeout; a smart implementation might use a spin lock for a very low
poll timeout value. At the end of the day, there is a minimum granularity
for any machine, and this minimum granularity is always going to be
coarser than some clock, most frequently the system clock (or, if you
will, the virtual system clock, since divider circuits slow the clock
signal presented to different components, e.g., the bus vs. the memory
vs. the processor).
BTW, this means that the most accurate way to make a delay is to use
select/poll/epoll to sleep for somewhat less than the given time (i.e.
shorter by the typical largest interval by which the system makes the
delay longer than requested), and then to call gettimeofday in a loop
until the given time arrives. The implementation of my language does
that under the hood.
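
A rough C sketch of that approach (not the actual implementation; the
2 ms slack is just an assumed stand-in for the typical amount by which
the system oversleeps):

#include <poll.h>
#include <sys/time.h>

static long long now_us(void)
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

static void delay_us(long long us)
{
    const long long slack_us = 2000;         /* assumed oversleep margin */
    long long target = now_us() + us;

    long long coarse = us - slack_us;
    if (coarse > 0)
        poll(NULL, 0, (int)(coarse / 1000)); /* coarse sleep, ms resolution */

    while (now_us() < target)                /* precise busy-wait */
        ;
}

The busy-wait at the end burns CPU for up to the slack interval, which
is the price paid for the extra accuracy.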
Agreed, but again, the question isn't which is the best way to do it;
the question is what the granularity of that best way is.