RE: Concerning Time.TimeDiff

Well, my main problem with representing time as a pair is that a point in time isn't uniquely defined (e.g. should it be 3s + 5e-11ps, or 2s + 5e-11ps), and that, in most suggested formats, values overlap (e.g. 3s or 2s + 1e12 ps). So you need to normalize -- possibly after each operation. Are you sure this is more efficient than using bignums?
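For concreteness, here is a minimal sketch of the normalisation step that implies; the type and field names are hypothetical, not from any concrete proposal:

    import Data.Int (Int64)

    -- A hypothetical pair representation: whole seconds plus a picosecond part.
    data TimeDiff' = TimeDiff' { tdSeconds :: Int64, tdPicos :: Int64 }
      deriving Show

    picosPerSecond :: Int64
    picosPerSecond = 1000000000000   -- 10^12

    -- Bring the picosecond field into the range [0, 10^12), carrying any
    -- excess (or deficit) into the seconds field.  Something like this has
    -- to run after (or inside) every arithmetic operation on the pair.
    normalise :: TimeDiff' -> TimeDiff'
    normalise (TimeDiff' s p) = TimeDiff' (s + q) r
      where (q, r) = p `divMod` picosPerSecond

    -- e.g. normalise (TimeDiff' 2 1000000000000) == TimeDiff' 3 0
    --      normalise (TimeDiff' 3 (-500))        == TimeDiff' 2 999999999500
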
Further, I'm not sure Haskell has a standard 64 bit Int. I'm fairly sure it's in GHC, and probably in the other compilers, but it'd be nice if it were in the standard. On the other hand, Integer can be implemented any way that's efficient on each architecture.
Haskell does have a standard 64 bit Int, known as Int64. (standard in the sense that the FFI addendum defines it, which is almost but not quite as standard as Haskell 98). If there are performance concerns, then we should measure both the version with an Int64 pair and the version with Integer. My guess is that there won't be enough of a difference for it to matter.
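As a starting point for such a measurement, a rough micro-benchmark sketch; the Pair layout, the constants, and the whole harness are illustrative assumptions rather than a proposed API, and System.CPUTime.getCPUTime conveniently already reports picoseconds:

    import Data.Int (Int64)
    import Data.List (foldl')
    import System.CPUTime (getCPUTime)

    -- Candidate 1: a strict (seconds, picoseconds) pair of Int64s.
    data Pair = Pair !Int64 !Int64

    addPair :: Pair -> Pair -> Pair
    addPair (Pair s1 p1) (Pair s2 p2) = Pair (s1 + s2 + q) r
      where (q, r) = (p1 + p2) `divMod` 1000000000000

    -- Candidate 2: a single Integer counting picoseconds.
    addInt :: Integer -> Integer -> Integer
    addInt = (+)

    -- Evaluate a value and report the CPU time taken (in picoseconds).
    timeIt :: String -> a -> IO ()
    timeIt label x = do
      t0 <- getCPUTime
      x `seq` return ()
      t1 <- getCPUTime
      putStrLn (label ++ ": " ++ show (t1 - t0) ++ " ps of CPU time")

    main :: IO ()
    main = do
      let n = 1000000 :: Int
      timeIt "pair of Int64s" (foldl' addPair (Pair 0 0) (replicate n (Pair 1 999999999999)))
      timeIt "one Integer"    (foldl' addInt  0          (replicate n 1999999999999))
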
I find it hard to lose sleep over missing leap-seconds and dates beyond
Leap seconds are a completely orthogonal issue. From the discussion here, it seems plain that UTC is a kluge, and TAI is the way to go. In any case, we already agree on counting seconds; the question is how to count them :-)
UTC is only a kludge for the fact that the Earth wobbles a bit on its trajectory through space - I think that's a reasonable kludge :-) POSIX "seconds since the epoch" is a *real* kludge. Let's not get the two confused.

Cheers,
Simon

"Simon Marlow"
Haskell does have a standard 64 bit Int, known as Int64.
Okay. I'm still not convinced it's a good idea to tie things to a specific data type. What about CPUs with only native 32bit support, and programs that only need to manipulate small amounts of time? What about computers with a >64 bit native type?
My guess is that there won't be enough of a difference for it to matter.
...and I don't think it's entirely given that two 64 bit numbers will be faster than one Integer.
POSIX "seconds since the epoch" is a *real* kludge. Let's not get the two confused.
Sorry, my bad.

-kzm
-- 
If I haven't seen further, it is by standing in the footprints of giants

On Thursday 19 June 2003 2:59 pm, Ketil Z. Malde wrote:
What about CPUs with only native 32bit support,
All you need to implement arbitrary sized N-bit arithmetic is a CPU which provides 'add with carry', 'subtract with carry', etc. All CPUs I have ever used provide these operations and I think it's a safe bet that all future CPUs will provide them.
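To make the carry idea concrete, a small sketch simulating it in Haskell with 32-bit limbs; this is purely illustrative, since on a real 32-bit CPU it is just an add followed by an add-with-carry:

    import Data.Word (Word32, Word64)

    -- Add two 64-bit values represented as (high, low) pairs of 32-bit limbs,
    -- propagating the carry out of the low word by hand.
    addWide :: (Word32, Word32) -> (Word32, Word32) -> (Word32, Word32)
    addWide (h1, l1) (h2, l2) = (h1 + h2 + carry, lo)
      where
        lowSum = fromIntegral l1 + fromIntegral l2 :: Word64
        lo     = fromIntegral lowSum
        carry  = fromIntegral (lowSum `div` 4294967296)   -- 2^32

    -- e.g. addWide (0, 0xFFFFFFFF) (0, 1) == (1, 0)
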
What about computers with a >64 bit native type?
Processors which provide 2*N*8-bit arithmetic sometimes provide N*8-bit arithmetic too (e.g., 32-bit processors sometimes provide 16-bit arithmetic too) so, if their native type is a power of 2 times 64, we should be ok. For any other processors, we can implement it the way 8- and 16-bit arithmetic is currently implemented on Hugs: we coerce the operands to 32 bits, perform the 32-bit operation and coerce the result back to the desired size. Coercing up takes, at most, a register-register move and coercing down takes an AND operation.
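A similarly tiny sketch of that coerce-up / operate / mask-down approach (illustrative only; the final truncation already masks, so the explicit AND is just there to show the step):

    import Data.Bits ((.&.))
    import Data.Word (Word16, Word32)

    -- 16-bit addition performed with 32-bit operations: widen, add, mask, narrow.
    add16 :: Word16 -> Word16 -> Word16
    add16 a b = fromIntegral ((fromIntegral a + fromIntegral b) .&. 0xFFFF :: Word32)
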
...and I don't think it's entirely given that two 64 bit numbers will be faster than one Integer.
I do. The implementation of Integer requires traversing additional indirections to an array of bits, reading the size of the operands, allocating space for the result, etc. It's a lot of work even if the arguments and result are simple values like 0 or 1.

I suspect Integer could be optimized (by recognizing that most uses of Integer are for values that fit in 31 bits) so that the difference is pretty minimal, but this would have limited effect on a date like 19 June 2003, which is (very roughly) 2^69 picoseconds since the epoch.

Which is all to say that whatever sound reasons may exist, future portability concerns or performance are not reasons to prefer using a single Integer or a Rational.

-- 
Alastair Reid
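A quick back-of-the-envelope check of that 2^69 figure, assuming the usual 1 January 1970 epoch:

    -- ~33.5 years from 1 Jan 1970 to 19 June 2003:
    --   33.47 * 365.25 * 86400 * 1e12  ~ 1.06e21 picoseconds
    --   logBase 2 1.06e21              ~ 69.8, i.e. a little under 2^70
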

Alastair Reid writes:
I suspect Integer could be optimized (by recognizing that most uses of Integer are for values that fit in 31 bits) so that the difference is pretty minimal, but this would have limited effect on a date like 19 June 2003 which is (very roughly) 2^69 picoseconds since the epoch.
It *is* optimised in GHC: libraries/base/GHC/Num.lhs says

    -- | Arbitrary-precision integers.
    data Integer
       = S# Int#                            -- small integers
    #ifndef ILX
       | J# Int# ByteArray#                 -- large integers
    #else
       | J# Void BigInteger                 -- .NET big ints
         foreign type dotnet "BigInteger" BigInteger
    #endif

This doesn't change your basic point, though, that two Int64s will probably be faster than one Integer.

--KW 8-)

It is more complicated than that. For any representation which allows denormalized values (as two numbers would), they would have to be normalized before most operations. You cannot compare the numbers unless you normalize them, plus every operation on them will have to check for overflow of the picoseconds field to place it into the seconds field. It would be a common source of errors and a pain to deal with correctly.

Nothing is keeping anyone from implementing their own notion of time based on Rationals and/or primitive numbers; if you only care about seconds, for instance, then perhaps an Int64 will be just fine, and you can use that in your own code. We just need to provide some way to get at the basic time functionality of the OS in some way that works (as opposed to the current broken mess) and supply a few generally useful functions to manipulate those values.

In any case, since a pair of Int64s is not a clear win and an Integer definitely is a clear win in terms of usability, I believe we should go for that.

John

On Thu, Jun 19, 2003 at 04:51:08PM +0100, Alastair Reid wrote:
On Thursday 19 June 2003 2:59 pm, Ketil Z. Malde wrote:
...and I don't think it's entirely given that two 64 bit numbers will be faster than one Integer.
I do. The implementation of Integer requires traversing additional indirections to an array of bits, reading the size of the operands, allocating space for the result, etc. It's a lot of work even if the arguments and result are simple values like 0 or 1.
I suspect Integer could be optimized (by recognizing that most uses of Integer are for values that fit in 31 bits) so that the difference is pretty minimal, but this would have limited effect on a date like 19 June 2003 which is (very roughly) 2^69 picoseconds since the epoch.
Which is all to say that whatever sound reasons may exist, future portability concerns or performance are not reasons to prefer using a single Integer or a Rational.
-- 
---------------------------------------------------------------------------
John Meacham - California Institute of Technology, Alum. - john@foo.net
---------------------------------------------------------------------------
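To see the comparison pitfall John describes concretely, a sketch with a hypothetical denormalised pair type and the obvious derived instances:

    import Data.Int (Int64)

    -- A denormalised (seconds, picoseconds) pair with derived comparisons.
    data Denorm = Denorm Int64 Int64
      deriving (Eq, Ord, Show)

    -- Denorm 3 0 and Denorm 2 1000000000000 denote the same instant, yet the
    -- derived (==) and compare disagree with that, so every comparison has to
    -- normalise both sides first.  With a single Integer of picoseconds there
    -- is nothing to normalise:
    --   Denorm 3 0 == Denorm 2 1000000000000           ==> False  (wrong)
    --   (3 * 10^12 :: Integer) == 2 * 10^12 + 10^12    ==> True
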

In any case, since a pair of Int64s is not a clear win and an Integer definitely is a clear win in terms of usability, I believe we should go for that.
It's a radical idea, but... maybe we could avoid all these issues by using an abstract type with all sensible instances (including Num, though it is only semi-sensible for times). Internally, we might use Integer or two Int64s, or we might even add Int128 to our compilers. Externally it looks like an Integral (or Fractional?) type.

Programs which perform a lot of time operations will use time operations which are implemented to exploit the fast internal representation. Programs which convert times to/from Integer (or Rational) will pay a small penalty, but why would they be making these conversions?

-- 
Alastair Reid
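A rough sketch of what such an abstract type might look like; the module, function names and the choice of Integer inside are illustrative assumptions, not a concrete proposal:

    module TimeDiff
      ( TimeDiff          -- abstract: constructor not exported
      , picoseconds
      , toPicoseconds
      ) where

    -- Internally a single Integer of picoseconds today; it could later become
    -- a pair of Int64s or an Int128 without changing the interface.
    newtype TimeDiff = TimeDiff Integer
      deriving (Eq, Ord)

    instance Show TimeDiff where
      show (TimeDiff p) = show p ++ "ps"

    -- Num is only semi-sensible for times (what does (*) mean?), but it gives
    -- us (+), (-), negate and integer literals for free.
    instance Num TimeDiff where
      TimeDiff a + TimeDiff b = TimeDiff (a + b)
      TimeDiff a - TimeDiff b = TimeDiff (a - b)
      TimeDiff a * TimeDiff b = TimeDiff (a * b)
      negate (TimeDiff a)     = TimeDiff (negate a)
      abs    (TimeDiff a)     = TimeDiff (abs a)
      signum (TimeDiff a)     = TimeDiff (signum a)
      fromInteger             = TimeDiff

    -- Explicit conversions at the boundary; most programs should never need them.
    picoseconds :: Integer -> TimeDiff
    picoseconds = TimeDiff

    toPicoseconds :: TimeDiff -> Integer
    toPicoseconds (TimeDiff p) = p
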

Alastair Reid
On Thursday 19 June 2003 2:59 pm, Ketil Z. Malde wrote:
What about CPUs with only native 32bit support,
All you need to implement arbitrary sized N-bit arithmetic is a CPU which provides 'add with carry', 'subtract with carry', etc. All CPUs I have ever used provide these operations and I think it's a safe bet that all future CPUs will provide them.
I'm not saying it'll be impossible to implement, just that if you only have shorter operands, any operation on 64-bit entities must be implemented as a much larger number of operations (around 64 muls and 16 adds if you use byte operations¹).
What about computers with a >64 bit native type?
Processors which provide 2*N*8-bit arithmetic sometimes provide N*8-bit arithmetic too
...again, it would be *possible*, but not efficient. On the other hand, Integers will be optimized to use whatever features are available on a specific platform: it could use a single byte for small numbers in the first case, or the full 128-bit operations for large numbers. I'm not saying it's a very relevant problem, and I'd be the first to agree that most CPUs now and in the future provide 64- or at least 32-bit operations (and probably not 128-bit ones).
I suspect Integer could be optimized (by recognizing that most uses of Integer are for values that fit in 31 bits) so that the difference is pretty minimal.
GMP (which I thought was used to implement Integer) is supposed to be pretty good.
But this would have limited effect on a date like 19 June 2003 which is (very roughly) 2^69 picoseconds since the epoch.
Yes. And, as I mentioned previously, I don't think any application spends a significant amount of time computing times. The focus should be on what's practical and simple from an API point of view, rather than efficiency. IMHO.

-kzm

¹ OK, so that's fairly naive, and can surely be implemented more efficiently. But who multiplies times, anyway? Or, for that matter, runs Haskell on an 8-bit CPU? :-)

-- 
If I haven't seen further, it is by standing in the footprints of giants
participants (5)

- Alastair Reid
- John Meacham
- Keith Wansbrough
- ketil@ii.uib.no
- Simon Marlow