
Ashley Yakeley writes:
>> The numerator and denominator can too easily become huge, e.g. if one
>> is computing absolute times of an event repeating at uneven intervals,
>> without retrieving a rounded value from the system clock each time.
>> One won't easily notice that the numbers grow out of control.
>
> If I read you correctly, your complaint is that certain calculations
> would be too accurate, thus leading to large representations.
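
A small sketch of the kind of growth being described, assuming exact
Rational arithmetic; the interval values below are invented for the
example, and their denominators are chosen to be pairwise coprime:

    import Data.Ratio (denominator, (%))

    -- Invented example: an event repeats at slightly uneven intervals,
    -- each kept as an exact Rational number of seconds.  Summing them to
    -- get absolute times multiplies the coprime denominators together, so
    -- the representation of the running total keeps growing.
    unevenIntervals :: [Rational]
    unevenIntervals = [1 + 1 % d | d <- [1000003, 1000033, 1000037, 1000039]]

    absoluteTimes :: [Rational]
    absoluteTimes = scanl1 (+) unevenIntervals

    main :: IO ()
    main = mapM_ (print . denominator) absoluteTimes
    -- the denominators grow roughly by a factor of 10^6 at each step
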
Sort of. The accuracy is of course illusory; the extra information is
meaningless. It only wastes memory and computation time.

If a time applies to an event which happens in a computer system, or if a
time span is a measured difference between two events, or if a target time
or time span is used to make a delay in a thread, then there is some
smallest resolution below which the bits are noise or zeroes when taken
out of the system, and ignored when put into the system. The resolution
depends on the method we use for measurement or delay, and on the system
load. On a PC it's usually somewhere between 1us and 10ms (and maybe Linux
actually has some true nanosecond-precision clocks when asked to use
real-time timers, I don't know). I guess that there exists, or will exist,
hardware capable of running Haskell which has better precision.

It is useful to use better precision for internal computations than is
available for each individual measurement, because it reduces the effect
of accumulated round-off errors. It's not useful, though, to have
infinitely better precision: it makes no sense if computing the delay on
ratios of bignums takes longer than the delay itself...

--
   __("<         Marcin Kowalczyk
   \__/       qrczak@knm.org.pl
    ^^     http://qrnik.knm.org.pl/~qrczak/
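
For contrast, a minimal sketch of the fixed-resolution approach described
above, assuming a made-up PicoTime type that stores an Integer count of
picoseconds; the names here are invented for illustration and are not from
any actual library:

    -- Hypothetical fixed-resolution time: an Integer number of picoseconds
    -- since some epoch.  Internal arithmetic is exact at this resolution,
    -- but the representation never grows beyond the magnitude of the value.
    newtype PicoTime = PicoTime Integer
      deriving (Eq, Ord, Show)

    picosPerSecond :: Integer
    picosPerSecond = 10 ^ (12 :: Int)

    -- Round a measured value in seconds (e.g. a Double read from the
    -- system clock) to the internal resolution, discarding the bits that
    -- are noise anyway.
    fromSeconds :: Double -> PicoTime
    fromSeconds s = PicoTime (round (s * fromInteger picosPerSecond))

    add :: PicoTime -> PicoTime -> PicoTime
    add (PicoTime a) (PicoTime b) = PicoTime (a + b)

    main :: IO ()
    main = print (foldl add (PicoTime 0)
                        (map fromSeconds [1.0000003, 1.0000033, 1.0000037]))

Rounding happens once, at the boundary where a measurement enters the
system, so internal arithmetic can use a resolution finer than any single
measurement while the stored numbers stay bounded.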