
What underlying numerical type should be used to store TAI and UTC types, and at what resolution?

  newtype Clocktime = Clocktime ??? deriving (Eq, Ord, ...)

Here are some suggestions. Not all of them are good ones:

* Integer, where 1 = 1 microsecond, nanosecond, picosecond
  But which? Current ClockTime is ps. POSIX "struct timeval" is micro-s, "struct timespec" is ns. TAI64NA uses attoseconds (10^-18 s). http://cr.yp.to/libtai/tai64.html
  + fast(?)
  + power of ten matches common practice
  - whatever resolution might not be enough

* Integer, where 1 = 1 Planck time (c. 5*10^-44 s)
  + amusing
  + unlikely to ever need more accuracy
  - number of Planck times in an SI second not precisely known
  - not a power of ten
  - about 2*10^31 in a picosecond, or about 100 bits

* Fixed-size integer type, where 1 = 1 microsecond or etc.
  2^64 mus = 580,000 years
  2^64 ns = 580 years
  2^128 ps = 10^19 years
  + fast(?)
  - probably need Int128

* Rational, where 1 = 1 second
  + 1 = 1 second easy to use
  + all the resolution you need
  + no error when dividing, guarantee (a / b) * b == a
  - might be slower?

* Fixed-point type, where 1 = 1 second
  This could be Integer at some power of ten.
  + 1 = 1 second easy to use
  - we'd have to create it
  - whatever resolution might not be enough

* Floating-point type, where 1 = 1 second
  + 1 = 1 second easy to use
  + fair amount of resolution
  + can use sqrt and trig functions easily
  + fancy NaN values, signed zeros and infinities
  - resolution variable, bad use of floating point

* Type parameter, where 1 = 1 second
  + flexible
  - extra complication
  - can't use directly with Integer except for 1-second resolution

-- Ashley Yakeley, Seattle WA
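Two of the less obvious options above can be made concrete as Haskell declarations. This is only an illustrative sketch; the names (ClockTimePico, ClockTimeIn, and so on) are invented for the example and are not proposed library names:

  -- Fixed resolution: 1 = 1 picosecond, as in the first suggestion.
  newtype ClockTimePico = ClockTimePico Integer
    deriving (Eq, Ord, Show)

  picosPerSecond :: Integer
  picosPerSecond = 10 ^ (12 :: Int)

  -- Type-parameter variant: the unit is 1 second, but the numeric type
  -- (and therefore the resolution) is chosen by the instantiation.
  newtype ClockTimeIn a = ClockTimeIn a
    deriving (Eq, Ord, Show)

  type ClockTimeExact    = ClockTimeIn Rational  -- exact, arbitrary resolution
  type ClockTimeFloating = ClockTimeIn Double    -- approximate, 1 = 1 second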

Ashley Yakeley writes:
What underlying numerical type should be used to store TAI and UTC types, and at what resolution?
It would be nice if the interface was such that it didn't matter to the user. Couldn't it be an opaque type? And does the resolution have to be standardized? Standardizing on picoseconds is probably safe, but I think it would be better to be able to query about the underlying clock resolution, instead of getting a long sequence of non-significant digits. -kzm -- If I haven't seen further, it is by standing in the footprints of giants

Ketil Malde wrote:
It would be nice if the interface was such that it didn't matter to the user. Couldn't it be an opaque type? And does the resolution have to be standardized? Standardizing on picoseconds is probably safe, but I think it would be better to be able to query about the underlying clock resolution, instead of getting a long sequence of non-significant digits.
Bear in mind that the time type might be used for time calculations unrelated to the system clock. Whether or not we hide the constructor, I think its internal resolution and general behaviour ought to be known and platform-independent. -- Ashley Yakeley, Seattle WA

Ashley Yakeley wrote:
Ketil Malde wrote:
It would be nice if the interface was such that it didn't matter to the user. Couldn't it be an opaque type? And does the resolution have to be standardized? Standardizing on picoseconds is probably safe, but I think it would be better to be able to query about the underlying clock resolution, instead of getting a long sequence of non-significant digits.
Bear in mind that the time type might be used for time calculations unrelated to the system clock. Whether or not we hide the constructor, I think its internal resolution and general behaviour ought to be known and platform-independent.
I understand what Ashley is saying, but that type of calculation is very different from a calculation related to the system clock. The choices that seem to be on the table here are flawed. To force all time calculations to be no better than the system clock resolution is not sensible, for the reasons that Ashley states. However, it is also not acceptable for a function to return an incorrect value for calculations involving the system clock.

If, say, I make two calls to read the current time, and both return the same value, what does that mean? Clearly it doesn't mean that no time has passed. It means that the clock hasn't ticked, but it provides no information about exactly what that means. Does it mean that the elapsed time is less than a second? A millisecond? A microsecond? Without knowledge about the resolution of the system clock, there is no way to find out.

I think that this particular pair of birds can't be killed with one stone. The system clock resolution is platform dependent, and this fact must be dealt with in a way that doesn't produce incorrect answers. Time values whose source is the system clock are different not only in resolution but also in kind. As presented, the choice is between having a crippled library that is a slave to the system clock resolution, or having a useful computational capability for time but returning incorrect answers w.r.t. the system clock. Clearly these are two different things.

The core of the time calculation can be shared by these two different types of time, but at the user level it needs to be clear whether a value is derived from the system clock, or is not. I don't see any way around the need for a different interface for each. The alternatives are unacceptable.

In article <41FE033F.1080507@cql.com>, Seth Kurtzberg wrote:
If, say, I make two calls to read the current time, and both return the same value, what does that mean?
That's an interesting question. For "Unix time" that's not supposed to happen: successive calls must return strictly later times unless the clock has been set back by more than a certain amount. This only really becomes an issue for leap-seconds, when one is trying to set the clock back one second. In such a case, successive clock calls increment one tick until time has caught up. It might be helpful to have the platform-dependent clock resolution available as a value.
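As an aside, that "never return the same value twice" behaviour can also be approximated in library code, above whatever the OS provides. A minimal sketch, written (anachronistically) against the interface that eventually became Data.Time; the wrapper and its one-nanosecond tick are assumptions for illustration, not an existing API:

  import Control.Concurrent.MVar (newMVar, modifyMVar)
  import Data.Time.Clock (UTCTime, getCurrentTime, addUTCTime)

  -- A clock action that never returns the same value twice and never goes
  -- backwards: it remembers the last value handed out, and if the system
  -- clock has not visibly advanced (or has been set back) it nudges the
  -- result forward by one tick.
  newMonotonicClock :: IO (IO UTCTime)
  newMonotonicClock = do
    ref <- newMVar Nothing
    let tick = 1e-9  -- assumed tick: one nanosecond
    return $ modifyMVar ref $ \mlast -> do
      now <- getCurrentTime
      let t = case mlast of
                Just prev | now <= prev -> addUTCTime tick prev
                _                       -> now
      return (Just t, t)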
Clearly these are two different things.
Well, the system clock is just one of many sources for time values. The user might be dealing with times from some other clock, or measured times from a scientific experiment, or appointment times from a datebook, or phenomenon times from astronomical calculations. How should these all be represented?
The core of the time calculation can be shared by these two different types of time, but at the user level it needs to be clear whether a value is derived from the system clock, or is not. I don't see any way around the need for a different interface for each. The alternatives are unacceptable.
Wouldn't the user already know whether a value is derived from the system clock or not, from the program they write? -- Ashley Yakeley, Seattle WA

Ashley Yakeley wrote:
In article <41FE033F.1080507@cql.com>, Seth Kurtzberg wrote:
If, say, I make two calls to read the current time, and both return the same value, what does that mean?
That's an interesting question. For "Unix time" that's not supposed to happen: successive calls must return strictly later times unless the clock has been set back by more than a certain amount. This only really becomes an issue for leap-seconds, when one is trying to set the clock back one second. In such a case, successive clock calls increment one tick until time has caught up.
It might be helpful to have the platform-dependent clock resolution available as a value.
Doesn't automatically forcing a system clock tick screw up the time? Also, what happens when you are using NTP? NTP might just correct it, but it would screw up the calculations NTP uses and it could start oscillating.
Clearly these are two different things.
Well, the system clock is just one of many sources for time values. The user might be dealing with times from some other clock, or measured times from a scientific experiment, or appointment times from a datebook, or phenomenon times from astronomical calculations. How should these all be represented?
The way you suggested. I'm not saying that there shouldn't be a computation library with better resolution. That's necessary for two reasons: one, the type of applications you just mentioned, and, two, because you _do_ want a clock library independent of the system clock. I'm saying that you _also_ need to handle the system clock case. I'm also saying that I don't like the idea of using the same resolution w.r.t. the system clock, because it suggests that the time is known to a greater precision than is actually the case.

I don't think we disagree, in general; it's more a question of whether or not system clock related computations should match the precision of the system clock. 123.45000 implies that the value is known to be accurate to five decimal places (just picking an arbitrary number of digits beyond the decimal point, because I don't recall the actual precision of the high resolution library). Truncating at the end is also not "correct," because the final result in general might be different if you compute with five digits and truncate, rather than computing with two digits throughout. (Again, whatever the number is; I pulled 2 digits out of the air, just to use a number.)

To me all this shows that the system clock needs to be handled as a special case, not just converted into the high resolution representation.
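A tiny worked example of that rounding point, with invented numbers and a helper defined only for the illustration:

  -- Round to n decimal places (helper for this example only).
  roundTo :: Int -> Double -> Double
  roundTo n x = fromIntegral (round (x * f)) / f
    where f = 10 ^ n

  -- Three intervals, each known to two decimal places as 0.01:
  steps :: [Double]
  steps = [0.014, 0.014, 0.014]

  roundedThroughout, roundedAtEnd :: Double
  roundedThroughout = foldl (\acc x -> roundTo 2 (acc + roundTo 2 x)) 0 steps
    -- 0.01 + 0.01 + 0.01 = 0.03
  roundedAtEnd      = roundTo 2 (sum steps)
    -- 0.042 rounds to 0.04

  -- The two disagree, which is the point: rounding at each step and
  -- rounding only at the end give different final results.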
The core of the time calculation can be shared by these two different types of time, but at the user level it needs to be clear whether a value is derived from the system clock, or is not. I don't see any way around the need for a different interface for each. The alternatives are unacceptable.
Wouldn't the user already know whether a value is derived from the system clock or not, from the program they write?
I see you haven't met some of the programmers who work for me. :) Seriously, yes, they would know, but there are portability concerns. Which, of course, is what you have been saying; I just have a slightly different take on it.

In article <41FE23F1.6080908@cql.com>, Seth Kurtzberg wrote:
Doesn't automatically forcing a system clock tick screw up the time?
Well, one nanosecond is three cycles of a 3GHz processor. For the time being it seems unlikely to be a problem.
Also, what happens when you are using NTP? NTP might just correct it, but it would screw up the calculations NTP uses and it could start oscillating.
NTP knows all about this, and it's only an issue with leap seconds. See http://www.eecis.udel.edu/~mills/leap.html
I don't think we disagree, in general; it's more a question of whether or not system clock related computations should match the precision of the system clock. 123.45000 implies that the value is known to be accurate to five decimal places (just picking an arbitrary number of digits beyond the decimal point, because I don't recall the actual precision of the high resolution library). Truncating at the end is also not "correct," because the final result in general might be different if you compute with five digits and truncate, rather than computing with two digits throughout. (Again, whatever the number is; I pulled 2 digits out of the air, just to use a number.)
To me all this shows that the system clock needs to be handled as a special case, not just converted into the high resolution representation
The system clock gives measurements with a particular accuracy attached. This isn't a particularly special case; other time applications may involve measurements or calculations to some particular accuracy. -- Ashley Yakeley, Seattle WA

Seth Kurtzberg writes:
Also, what happens when you are using NTP? NTP might just correct it, but it would screw up the calculations NTP uses and it could start oscillating.
The NTP client (at least on Linux) adjusts the time by making a jump only if the time is very inaccurate. If the time only drifted a bit, it temporarily adjusts the speed of the system clock instead.
I'm also saying that I don't like the idea of using the same resolution w.r.t. the system clock, because it suggests that the time is known to a greater precision than is actually the case.
I don't think this would be a practical problem.
To me all this shows that the system clock needs to be handled as a special case, not just converted into the high resolution representation
Using a different representation of time just because somebody might not be aware that the resolution of the system clock is not as good as the representation suggests? No, this is an unnecessary complication.

Anyway, what does the "resolution of the system clock" mean? On Linux the timeout of select(), poll() and epoll() is accurate only to the timer interrupt (10ms on older kernels and 1ms on newer ones), yet the gettimeofday is more precise (it returns microseconds, quite accurately; the call itself takes 2us on my system). So even if gettimeofday is accurate, sleeping for some time might be much less accurate.

BTW, this means that the most accurate way to make a delay is to use select/poll/epoll to sleep for some time below the given time (i.e. shorter by the typical largest interval by which the system makes the delay longer than requested), then to call gettimeofday in a loop until the given time arrives. The implementation of my language does that under the hood.

-- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/
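In Haskell terms the delay technique described above might look roughly like the sketch below. It is only an illustration: it uses threadDelay in place of select/poll/epoll, the interface that later became Data.Time.Clock in place of gettimeofday, and an assumed 1 ms safety margin rather than a measured one.

  import Control.Concurrent (threadDelay)
  import Data.Time.Clock (UTCTime, getCurrentTime, diffUTCTime)

  -- Sleep until the target time: a coarse sleep that deliberately stops
  -- short of the target, followed by a polling loop on the clock for the
  -- final stretch.
  sleepUntil :: UTCTime -> IO ()
  sleepUntil target = do
    now <- getCurrentTime
    let remaining = realToFrac (diffUTCTime target now) :: Double  -- seconds
        margin    = 0.001  -- assumed: how much too long the coarse sleep may run
    if remaining > margin
      then do threadDelay (floor ((remaining - margin) * 1000000))  -- microseconds
              sleepUntil target
      else busyWait
    where
      busyWait = do
        now <- getCurrentTime
        if now < target then busyWait else return ()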

Marcin 'Qrczak' Kowalczyk wrote:
Seth Kurtzberg writes:
Also, what happens when you are using NTP? NTP might just correct it, but it would screw up the calculations NTP uses and it could start oscillating.
The NTP client (at least on Linux) adjusts the time by making a jump only if the time is very inaccurate. If the time only drifted a bit, it temporarily adjusts the speed of the system clock instead.
I'm also saying that I don't like the idea of using the same resolution w.r.t. the system clock, because it suggests that the time is known to a greater precision than is actually the case.
I don't think this would be a practical problem.
To me all this shows that the system clock needs to be handled as a special case, not just converted into the high resolution representation
Using a different representation of time just because somebody might not be aware that the resolution of the system clock is not as good as the representation suggests? No, this is an unnecessary complication.
I'm not, I hope, being pedantic here and talking about an irrelevant issue. But if you look in any first year engineering textbook (for an engineering discipline other than CS), you will find that the display of a value with a precision greater than the known precision of the inputs is a cardinal sin. It's the wrong answer. The roundoff error behavior is clearly going to be different. I can certainly work around the problem at the application code level, but that's a clear violation of the encapsulation principle, because I'm relying on the implementation of the calculations and not just the interface. If everyone disagrees with me, I'll shut up, but I truly believe that this is a serious error.
Anyway, what does the "resolution of the system clock" mean? On Linux
It means, in general, a clock tick. poll, select, etc. cannot provide a timeout of greater precision than the system clock, and in general, since execution of an assembly language instruction takes multiple clock ticks, poll and its family actually can't even reach the precision of a single clock tick.

It's important to distinguish between the fact that a method allows you to use a value, and the fact that, in a given environment, all values lower than some minimum (something of the order of 10 clock ticks, which is optimistic) are simply treated as zero. Not in the sense that zero means blocking, in the sense that the interval from the perspective of poll() is actually zero. The next time a context switch occurs (or an interrupt, if poll is implemented using interrupts), the timeout will be exceeded. Checking the documentation of poll, there is even a warning that you cannot rely on any given implementation to provide the granularity that poll allows you to specify.

You have a synchronous machine here (down at the processor level) and an instruction cannot be executed before the previous instruction has completed. (There are pipeline processors for which this isn't precisely true, but it is true that there is _some_ amount of time that is the maximum achievable granularity for any machine.)
the timeout of select(), poll() and epoll() is accurate only to the timer interrupt (10ms on older kernels and 1ms on newer ones), yet the gettimeofday is more precise (it returns microseconds, quite accurately; the call itself takes 2us on my system). So even if gettimeofday is accurate, sleeping for some time might be much less accurate.
It definitely would be less accurate. But I don't see why that is relevant to the discussion. Sleep generally causes a context switch, so the granularity is much higher. Whether poll causes a context switch is implementation dependent, and also dependent on the specified timeout. A smart machine might use a spin lock for a very low poll timeout value.

At the end of the day, there is a minimum granularity for any machine, and this minimum granularity is always going to be higher than a clock, most frequently the system clock (or, if you will, the virtual system clock, since divider circuits slow the clock value presented to different components, e.g., the bus vs. the memory vs. the processor).

As I said, I'll drop it if everyone disagrees with me, but I think it is worth some thought, and it should be based on the actual behavior of a machine, not on the fact that a method may allow you to specify a lower timeout value. That makes sense, because you wouldn't want to code poll in such a way that it can't take advantage of higher clock speeds. The speed is not infinite, and thus the granularity is not unlimited.
BTW, this means that the most accurate way to make a delay is to use select/poll/epoll to sleep for some time below the given time (i.e. shorter by the typical largest interval by which the system makes the delay longer than requested), then to call gettimeofday in a loop until the given time arrives. The implementation of my language does that under the hood.
Agreed, but again, the question isn't which is the best way to do it, the question is what is the granularity of the best way to do it.

In article <41FE95DF.50000@cql.com>, Seth Kurtzberg wrote:
I'm not, I hope, being pedantic here and talking about an irrelevant issue. But if you look in any first year engineering textbook (for an engineering discipline other than CS), you will find that the display of a value with a precision greater than the known precision of the inputs is a cardinal sin. It's the wrong answer. The roundoff error behavior is clearly going to be different.
Well, how do you feel about using Rationals everywhere? That way there's never any question of some irrelevant specified unused accuracy.

  newtype ClockTime = ClockTime Rational deriving (Eq, etc.)

I quite like the idea of using Rational for ClockTime, but I worry that it might be slow. But it does allow you to do things such as dividing times into n pieces and adding them all up again without error.

-- Ashley Yakeley, Seattle WA
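For example, with a Rational-backed ClockTime the divide-and-recombine round trip is exact (a small sketch; the helper names are invented here):

  import Data.Ratio ((%))

  newtype ClockTime = ClockTime Rational
    deriving (Eq, Ord, Show)

  -- Split a duration into n equal pieces.
  divide :: Integer -> ClockTime -> [ClockTime]
  divide n (ClockTime t) =
    replicate (fromInteger n) (ClockTime (t / fromInteger n))

  -- Add them back together.
  total :: [ClockTime] -> ClockTime
  total ts = ClockTime (sum [t | ClockTime t <- ts])

  -- With Rational this round-trips exactly:
  --   total (divide 3 (ClockTime (1 % 10))) == ClockTime (1 % 10)
  -- The analogous computation on Double, (0.1 / 3) * 3, need not give
  -- back exactly 0.1.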

Ashley Yakeley writes:
Well, how do you feel about using Rationals everywhere?
The numerator and denominator can too easily become huge, e.g. if one is computing absolute times of an event repeating in uneven intervals, without retrieving a rounded value from the system clock each time. He won't easily notice that the numbers grow out of control. -- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

In article <87acqor67i.fsf@qrnik.zagroda>, Marcin 'Qrczak' Kowalczyk wrote:
The numerator and denominator can too easily become huge, e.g. if one is computing absolute times of an event repeating in uneven intervals, without retrieving a rounded value from the system clock each time. He won't easily notice that the numbers grow out of control.
If I read you correctly, your complaint is that certain calculations would be too accurate, thus leading to large representation. -- Ashley Yakeley, Seattle WA

Ashley Yakeley writes:
The numerator and denominator can too easily become huge, e.g. if one is computing absolute times of an event repeating in uneven intervals, without retrieving a rounded value from the system clock each time. He won't easily notice that the numbers grow out of control.
If I read you correctly, your complaint is that certain calculations would be too accurate, thus leading to large representation.
Sort of. The accuracy is of course illusory, the extra information is meaningless. It only wastes memory and computation time.

If a time applies to an event which happens in a computer system, or if a time span is a measured difference between two events, or if a target time or time span is used to make a delay in a thread - there is some smallest resolution below which the bits are noise or zeroes when taken out of the system, and ignored when put into the system. The resolution depends on the method we use for measurement or delay, and on the system load. On a PC it's usually somewhere between 1us and 10ms (and maybe Linux actually has some true nanosecond-precision clocks when asked to use real-time timers, I don't know). I guess that there exists or will exist hardware capable of running Haskell which has better precision.

It is useful to use a better precision for internal computations than is available on each individual measurement, because it reduces the effect of accumulated round-off errors. It's not useful though to have infinitely better precision: it makes no sense if computing the delay on ratios of bignums takes longer than the delay itself...

-- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/
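The growth Marcin describes is easy to see (a small sketch; the intervals are arbitrary values chosen so their denominators share no factors):

  import Data.Ratio (denominator, (%))

  -- Sum of "uneven" intervals whose denominators are distinct primes.
  unevenIntervals :: [Rational]
  unevenIntervals = [1 % p | p <- [3, 7, 11, 13, 10007, 10009]]

  accumulated :: Rational
  accumulated = sum unevenIntervals

  -- denominator accumulated is the full product 3*7*11*13*10007*10009.
  -- Keep adding such intervals and the numerator and denominator grow
  -- without bound unless the value is rounded somewhere along the way.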

Marcin 'Qrczak' Kowalczyk writes:
Ashley Yakeley writes:
If I read you correctly, your complaint is that certain calculations would be too accurate, thus leading to large representation.
The accuracy is of course illusory, the extra information is meaningless.
Not necessarily. You might very well be using Haskell (and its new time library) to perform calculations with data that comes from a source which _does_ have this kind of accuracy. Peter

On 2005-01-31, Seth Kurtzberg wrote:
I'm not, I hope, being pedantic here and talking about an irrelevant issue. But if you look in any first year engineering textbook (for an engineering discipline other than CS), you will find that the display of a value with a precision greater than the known precision of the inputs is a cardinal sin. It's the wrong answer. The roundoff error behavior is clearly going to be different.
I'm afraid you are. I think we're all aware of the problem of ascribing too much accuracy to a result quoted to high precision. But truncating the precision to the known level of accuracy is merely a rule-of-thumb that usually works well enough. If you truly need to know error bounds, there's no escape but to manually propagate that error information through. Setting a maximum precision because some sources for that information aren't that accurate is throwing away the baby with the bathwater, and if you have imprecise sources from outside the system, you have the same problem in managing their precision, if it's less than what we demanded for the ClockTime precision.
I can certainly work around the problem at the application code level, but that's a clear violation of the encapsulation principle, because I'm relying on the implementation of the calculations and not just the interface.
In principle this is true. In practice this doesn't matter as the time library will not be adding up many many small increments or similar on its own -- these will be passed through the user level. Even if it did, truncating the precision to which something is represented wouldn't fix the misbehaviour, it would just hide it by making it consistently wrong.
If everyone disagrees with me, I'll shut up, but I truly believe that this is a serious error.
Ignoring it would be a serious error, but this solution doesn't work well. The best we can do really is expose the clock resolution, either in a separate call, or by bundling it up with every call to get the current time. But we don't really have a good way to get that. sysconf(_SC_CLK_TCK) might be a reasonable heuristic, but Marcin gave an example where it isn't. If we were to use the POSIX timer CLOCK_REALTIME with clock_gettime(), then clock_getres() would give us the precision, but not, of course, the accuracy.

On the topic of resolution, I'd recommend nanosecond precision, just because the POSIX timers use that as the base (a struct combining time_t seconds and long nanoseconds).

-- Aaron Denney -><-
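For reference, here is a rough sketch of reaching clock_getres() from Haskell through the FFI. It is full of assumptions (a Linux-like system where clockid_t is a C int, CLOCK_REALTIME is 0, and struct timespec is two native long fields with no padding) and is not portable as written; it only illustrates the "expose the resolution" idea.

  {-# LANGUAGE ForeignFunctionInterface #-}

  import Foreign (Ptr, allocaBytes, peekByteOff)
  import Foreign.C.Types (CInt (..), CLong)

  -- int clock_getres(clockid_t clk_id, struct timespec *res);
  foreign import ccall unsafe "time.h clock_getres"
    c_clock_getres :: CInt -> Ptr () -> IO CInt

  clockRealtime :: CInt
  clockRealtime = 0   -- assumed value of CLOCK_REALTIME on Linux

  -- Timer resolution as (seconds, nanoseconds), or Nothing on failure.
  -- The 16-byte buffer and the 0/8 offsets assume a 64-bit struct timespec.
  getClockResolution :: IO (Maybe (Integer, Integer))
  getClockResolution =
    allocaBytes 16 $ \p -> do
      r <- c_clock_getres clockRealtime p
      if r /= 0
        then return Nothing
        else do
          sec  <- peekByteOff p 0 :: IO CLong
          nsec <- peekByteOff p 8 :: IO CLong
          return (Just (toInteger sec, toInteger nsec))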

Aaron Denney wrote:
[...] If we were to use the POSIX timer CLOCK_REALTIME with clock_gettime(), then clock_getres() would give us the precision, but not, of course, the accuracy.
I agree with everything you wrote, and I think it's worth making a big deal of the fact that precision and accuracy are different things, because it's easy to confuse them in discussions like this. Every time-related system call specifies a precision, but I don't think any of them say anything about accuracy. Furthermore, on nearly every system the differential accuracy of a system clock will be orders of magnitude better than its absolute accuracy, so it's ambiguous to talk about accuracy without specifying which one you mean.

Yet another distinct concept is that of a time interval like "1 February 2005 UTC", which has infinite precision and accuracy even though it's fuzzy in a different way. It doesn't make sense to represent the day with picosecond precision because then you're effectively talking about a particular picosecond in that day, which is a totally different concept.

Possibly we could get away with talking always about intervals of time (rather than points), but allowing intervals of different sizes. E.g. (trying to retain static typing here):

  newtype AbsPicoseconds = AbsPicoseconds Integer
  newtype RelPicoseconds = RelPicoseconds Integer
  newtype AbsSeconds = AbsSeconds Integer
  newtype RelSeconds = RelSeconds Integer
  newtype AbsDays = AbsDays Integer
  newtype RelDays = RelDays Integer

Conversions from AbsPicoseconds to AbsSeconds to AbsDays may make sense, but the reverse conversions don't. And no conversions on Rel times make sense. These data types don't carry accuracy information around with them, because I think that's hopeless.

-- Ben
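The one-way conversions Ben mentions could be sketched like this (assuming 10^12 picoseconds per second and a uniform 86400-second day, i.e. ignoring leap seconds, and rounding toward negative infinity so that an instant maps to the containing interval):

  newtype AbsPicoseconds = AbsPicoseconds Integer deriving (Eq, Ord, Show)
  newtype AbsSeconds     = AbsSeconds     Integer deriving (Eq, Ord, Show)
  newtype AbsDays        = AbsDays        Integer deriving (Eq, Ord, Show)

  -- The picosecond interval N lies inside the second interval N `div` 10^12;
  -- 'div' rounds toward negative infinity, so times before the epoch work too.
  absPicosecondsToSeconds :: AbsPicoseconds -> AbsSeconds
  absPicosecondsToSeconds (AbsPicoseconds p) =
    AbsSeconds (p `div` (10 ^ (12 :: Int)))

  -- Assumes the epoch falls on a day boundary and every day has 86400 seconds.
  absSecondsToDays :: AbsSeconds -> AbsDays
  absSecondsToDays (AbsSeconds s) = AbsDays (s `div` 86400)

  -- The reverse functions are deliberately missing: an AbsDays value does not
  -- say which second of that day is meant.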

Aaron Denney wrote:
I'm afraid you are. I think we're all aware of the problem of ascribing too much accuracy to a result quoted to high precision. But truncating the precision to the known level of accuracy is merely a rule-of-thumb that usually works well enough. If you truly need to know error bounds, there's no escape but to manually propagate that error information through. Setting a maximum precision because some sources for that information aren't that accurate is throwing away the baby with the bathwater,
Yes, but I didn't suggest any such thing.
and if you have imprecise sources from outside the system, you have the same problem in managing their precision, if it's less than what we demanded for the ClockTime precision.
Yes, but I'm suggesting that the precision be controllable by the user (if this is possible without mucking up the default case). Clearly you can't expect the time library to know the precision of every value it uses. In your example (outside sources), only the library's user knows the precision. So let the user control it.

Seth Kurtzberg writes:
But if you look in any first year engineering textbook (for an engineering discipline other than CS), you will find that the display of a value with a precision greater than the known precision of the inputs is a cardinal sin.
The time expressed as an absolute number of ticks is primarily used for computation, not for display. For display you usually choose the format explicitly; it rarely includes more precision than seconds, and if it does, you are aware of how many digits you want to show.
I can certainly work around the problem at the application code level, but that's a clear violation of the encapsulation principle, because I'm relying on the implementation of the calculations and not just the interface.
When I program timers, usually I don't even care what the exact resolution of the timer is, as long as it's good enough for users not to notice that delays are inaccurate by a few milliseconds. The OS can preempt my process anyway, the GC may kick in, etc. If someone makes a real-time version of Haskell which runs on a real-time OS, it's still not necessary to change the resolution of the interface - it's enough for the program to know what accuracy it should expect.
Anyway, what does the "resolution of the system clock" mean? On Linux
It means, in general, a clock tick. poll, select, etc. cannot provide a timeout of greater precision than the system clock, and in general, since execution of an assembly language instruction takes multiple clock ticks, poll and its family actually can't even reach the precision of a single clock tick.
Ah, so you mean the processor clock, not the timer interrupt. What has the processor clock to do with getting the current time and setting up delays?

Anyway, I don't propose picoseconds nor attoseconds. Some numbers from my PC:

- my processor's tick is 0.8ns
- the gettimeofday interface has a resolution of 1us
- the clock_gettime interface uses 1ns, but the actual time is always a multiple of 1us
- a gettimeofday call takes 2us to complete
- the select interface uses 1us, but the actual delay is accurate to 1ms
- poll allows sleeping for delays accurate to 1ms, but it must be at least 1ms-2ms (two timer interrupts)
- epoll allows sleeping for delays accurate to 1ms
- if the same compiled program is run on an older kernel, select/poll precision is 10 times worse

So I have two proposals for the resolution of Haskell's representation of absolute time:

1. Use nanoseconds.
2. Use an implementation-dependent unit (it will probably be nanoseconds or microseconds with current implementations, but the interface will not have to be changed if more accurate delays become practical in 10 years).
It's important to distinguish between the fact that a method allows you to use a value, and the fact that, in a given environment, all values lower than some minimum (something of the order of 10 clock ticks, which is optimistic) are simply treated as zero. Not in the sense that zero means blocking, in the sense that the interval from the perspective of poll() is actually zero. The next time a context switch occurs (or an interrupt, if poll is implemented using interrupts), the timeout will be exceeded. Checking the documentation of poll, there is even a warning that you cannot rely on any given implementation to provide the granularity that poll allows you to specify.
Note that the behavior of poll and epoll on Linux differs, even though they use the same interface for expressing the delay (a number of milliseconds as a C int). poll rounds the time up to timer interrupts (usually 1ms or 10ms), and sleeps for the resulting time or up to one tick *longer*. epoll rounds the time up to timer interrupts, and sleeps for the resulting time or up to one tick *shorter* (or sometimes longer if the process is preempted). The behavior of poll is consistent with POSIX, which says that the specified time is the minimum delay. The behavior of epoll allows sleeping until the next timer interrupt by specifying 1ms (poll always sleeps at least one full timer interrupt - I mean when it returns because the delay has expired). I've heard that by accident select is like epoll, not like poll (except that the interface specifies microseconds; it's not more accurate though), but I haven't checked.

So the compiler of my language measures (at ./configure time) the time poll/epoll will usually sleep when asked to sleep for 1ms just after a timer interrupt. This is used to calculate the delay to ask poll/epoll for. The remaining time is slept in a loop which calls gettimeofday, unless another thread wants to run. This gives a practical accuracy of about 20us here. But this becomes 1ms when other threads or processes interfere.

This means that the "resolution of a delay" is not a well-defined concept. It depends on too many variables, for example on the time it takes for a gettimeofday call to return and on the activity of threads and processes.

-- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/

In article <87fz0gttd2.fsf@qrnik.zagroda>, Marcin 'Qrczak' Kowalczyk wrote:
So I have two proposals for the resolution of Haskell's representation of absolute time:
1. Use nanoseconds.
2. Use an implementation-dependent unit (it will probably be nanoseconds or microseconds with current implementations, but the interface will not have to be changed if more accurate delays become practical in 10 years).
Of course the clock doesn't actually return absolute time per se if it's set to UTC. It looks like we may need three types here:

* one for TAI
* one for POSIX time, which is a broken encoding of UTC
* one for a correct encoding of UTC

Can we assume that the system clock will be set to UTC? This is apparently in the POSIX standard. What about on Windows platforms? If so, the system clock should return POSIX time.

Do we want the same resolution in all three? Should it be fixed or platform-dependent? Bear in mind that there are applications for this that may not involve the system clock at all.

-- Ashley Yakeley, Seattle WA

Ashley Yakeley wrote: [snip]
Can we assume that the system clock will be set to UTC?
No, you can't assume it. But I believe you can assume that the calls to obtain the time are smart enough to convert it to UTC if necessary.
This is apparently in the POSIX standard. What about on Windows platforms? If so, the system clock should return POSIX time.
I would want to verify that Windows behaves correctly, rather than assume it. Microsoft is not a big POSIX fan.
Do we want the same resolution in all three? Should it be fixed or platform-dependent? Bear in mind that there are applications for this that may not involve the system clock at all.
Since clearly the library cannot _always_ know the precision, one can make a good case that the responsibility for determining the precision lies with the library user, not the library itself. And/or, you can isolate the platform-dependent aspects and allow the user to set them, or have the library try to determine the correct values and allow the user to override them. Which implies that the user can get the values in the first place.

Ashley Yakeley wrote:
It looks like we may need three types here:
* one for TAI
* one for POSIX time, which is a broken encoding of UTC
* one for a correct encoding of UTC
This is the sort of thing I mean:

  newtype TAITime = TAITime Integer deriving (...)
  newtype POSIXTime = POSIXTime Integer deriving (...)
  newtype DiffTime = DiffTime Integer deriving (...)

  type JulianDay = Integer

  data UTCTime = UTCTime JulianDay DiffTime

  getCurrentTime :: IO POSIXTime
  posixToUTCTime :: POSIXTime -> UTCTime

-- Ashley Yakeley, Seattle WA
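A sketch of what posixToUTCTime might look like, reusing the declarations just given and under two assumptions that are not stated above: the Integer inside POSIXTime and DiffTime counts whole seconds, and JulianDay means Modified Julian Day (MJD 40587 is 1970-01-01, the POSIX epoch):

  unixEpochMJD :: JulianDay
  unixEpochMJD = 40587

  posixToUTCTime :: POSIXTime -> UTCTime
  posixToUTCTime (POSIXTime s) = UTCTime (unixEpochMJD + d) (DiffTime t)
    where
      -- POSIX time pretends every day has exactly 86400 seconds; that is
      -- precisely why it is a "broken encoding of UTC", and why no
      -- leap-second table is needed (or possible) in this direction.
      (d, t) = s `divMod` 86400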

Ashley Yakeley writes:
Here are some suggestions. Not all of them are good ones:
Let me add more:

* Integer, where the unit is implementation-dependent, available as ticksPerSecond (probably a nanosecond on common hardware, but without ruling out other resolutions in specialized circumstances or in the future).

* A new type, internally implemented like the above, but which behaves as a fractional number of seconds.

Both variants are applicable to both absolute times and time durations.

-- __("< Marcin Kowalczyk \__/ qrczak@knm.org.pl ^^ http://qrnik.knm.org.pl/~qrczak/
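The first variant might be sketched as follows; the concrete value of ticksPerSecond is only an example of what one implementation might choose, and the conversion helpers are invented names:

  -- Implementation-dependent unit: client code goes through ticksPerSecond
  -- instead of assuming a particular resolution.
  ticksPerSecond :: Integer
  ticksPerSecond = 1000000000   -- example: nanoseconds

  newtype ClockTime = ClockTime Integer deriving (Eq, Ord, Show)
  newtype TimeDiff  = TimeDiff  Integer deriving (Eq, Ord, Show)

  secondsToDiff :: Rational -> TimeDiff
  secondsToDiff s = TimeDiff (round (s * fromInteger ticksPerSecond))

  diffToSeconds :: TimeDiff -> Rational
  diffToSeconds (TimeDiff t) = fromInteger t / fromInteger ticksPerSecond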

It might be nice to have an overloaded TimeDiff:

  class TimeDiff a where
    diff :: TimeStamp -> TimeStamp -> a

  instance TimeDiff Double where ...
  instance TimeDiff Integer where ...

with the difference being seconds. So if I'm working on a fine scale, I could use the Double instance (and remember to check the timer resolution), while on a greater scale, I could get an unlimited and accurate Integer number of seconds.

(It would also be nice if two different invocations of getTimeStamp would always return different values, but that is perhaps difficult to implement/guarantee?)

-kzm
-- If I haven't seen further, it is by standing in the footprints of giants
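Filled in, assuming a TimeStamp that carries an exact count of implementation ticks and a ticksPerSecond constant (both hypothetical, in the spirit of the earlier proposal), the two instances might look like:

  newtype TimeStamp = TimeStamp Integer deriving (Eq, Ord, Show)

  ticksPerSecond :: Integer
  ticksPerSecond = 1000000000   -- example value only

  class TimeDiff a where
    diff :: TimeStamp -> TimeStamp -> a   -- difference in seconds

  instance TimeDiff Double where
    diff (TimeStamp a) (TimeStamp b) =
      fromInteger (a - b) / fromInteger ticksPerSecond

  instance TimeDiff Integer where
    diff (TimeStamp a) (TimeStamp b) =
      (a - b) `div` ticksPerSecond   -- whole seconds, rounded down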
participants (7):
- Aaron Denney
- Ashley Yakeley
- Ben Rudiak-Gould
- Ketil Malde
- Marcin 'Qrczak' Kowalczyk
- Peter Simons
- Seth Kurtzberg