
Well said! And thanks for taking the time to clarify some of the
imprecision in how I was trying to answer the earlier points.

On Tue, Sep 23, 2014 at 3:06 AM, Richard A. O'Keefe wrote:
> On 20/09/2014, at 5:26 AM, Carter Schonwald wrote:
>> Indeed,
>>
>> Also, floating point numbers are NOT real numbers. They are
>> approximate points on the real line that we pretend are exact reals,
>> but really they are a very different geometry altogether! :)
> Floating point numbers are *PRECISE* numbers with approximate
> *OPERATIONS*. This is the way they are defined in the IEEE standard
> and its successors; this is the way they are defined in LIA-1 and its
> successors. If you do not understand that it is the OPERATIONS that
> are approximate, not the numbers, you have not yet begun to understand
> floating point arithmetic.
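
A concrete illustration of that point in Haskell (a minimal sketch
using only the Prelude): every finite Double denotes one exact rational
number; it is the addition that rounds.

    main :: IO ()
    main = do
      -- A Double is an exact rational: the literal 0.1 denotes
      -- precisely 3602879701896397 / 2^55, not 1/10.
      print (toRational (0.1 :: Double))
      -- It is the operation (+) that rounds its mathematically exact
      -- result to the nearest representable Double:
      print (0.1 + 0.2 == (0.3 :: Double))   -- False
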
>> Floats and Doubles are not exact numbers; don't use them when you
>> expect things to behave "exactly". NB that even if you have *exact*
>> numbers, the same precision issues will still apply to pretty much
>> any computation that's interesting (ignoring what the definition of
>> interesting is). Try defining things like \x -> sqrt x or \x -> e^x
>> on the rational numbers! Questions of precision still creep in!
> You're not talking about precision here but about approximation. And
> you can simply work with finite representations of algebraic numbers.
> I have a Smalltalk class that implements QuadraticSurds so that I can
> represent (1 + sqrt 5)/2 *exactly*. You can even compare
> QuadraticSurds with the same surd exactly. (This all works so much
> better in Haskell, where you can make the "5" part a type-level
> parameter.)
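
Here is a minimal Haskell sketch of that idea, with the radicand as a
type-level parameter as suggested above (the names Surd and phi are
illustrative, not taken from the Smalltalk class):

    {-# LANGUAGE DataKinds, KindSignatures, ScopedTypeVariables #-}
    import Data.Proxy (Proxy (..))
    import Data.Ratio ((%))
    import GHC.TypeLits (KnownNat, Nat, natVal)

    -- a + b*sqrt d, with the radicand d carried in the type, so surds
    -- over different radicands can never be mixed by accident.
    data Surd (d :: Nat) = Surd Rational Rational
      deriving (Eq, Show)
      -- The derived Eq is EXACT equality, valid whenever d is not a
      -- perfect square (sqrt d is then irrational).

    radicand :: forall d. KnownNat d => Surd d -> Rational
    radicand _ = natVal (Proxy :: Proxy d) % 1

    instance KnownNat d => Num (Surd d) where
      Surd a b + Surd c e = Surd (a + c) (b + e)
      Surd a b - Surd c e = Surd (a - c) (b - e)
      s@(Surd a b) * Surd c e = Surd (a*c + b*e*d) (a*e + b*c)
        where d = radicand s
      negate (Surd a b)   = Surd (negate a) (negate b)
      fromInteger n       = Surd (fromInteger n) 0
      abs    = error "abs: not meaningful in this sketch"
      signum = error "signum: not meaningful in this sketch"

    -- The golden ratio (1 + sqrt 5)/2, represented exactly.
    phi :: Surd 5
    phi = Surd (1 % 2) (1 % 2)

    main :: IO ()
    main = print (phi * phi == phi + 1)  -- True: phi^2 = phi + 1, exactly
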
> Since e is not a rational number, it's not terribly interesting that
> e^x (usually) isn't rational either when x is rational.
> If we want to compute with a richer range of numbers than the
> rationals, we can do that. We could, for example, compute with
> continued fractions. Haskell makes that a small matter of
> programming, and I expect someone has already done it.
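
As a flavour of that (a hedged sketch, not a full continued-fraction
arithmetic library): the convergents of a simple continued fraction are
exact Rationals, shown here for e, whose expansion
[2; 1,2,1, 1,4,1, 1,6,1, ...] is known in closed form. The function
names below are mine.

    import Data.Ratio ((%))

    -- Convergents of a simple continued fraction [a0; a1, a2, ...],
    -- via the standard recurrence h_n = a_n*h_(n-1) + h_(n-2) and
    -- k_n = a_n*k_(n-1) + k_(n-2).
    convergents :: [Integer] -> [Rational]
    convergents = go 0 1 1 0   -- seeds: h_(-2), h_(-1), k_(-2), k_(-1)
      where
        go _  _  _  _  []     = []
        go h2 h1 k2 k1 (a:as) =
          let h = a * h1 + h2
              k = a * k1 + k2
          in (h % k) : go h1 h k1 k as

    -- The simple continued fraction of e.
    eCF :: [Integer]
    eCF = 2 : concat [ [1, 2*k, 1] | k <- [1 ..] ]

    main :: IO ()
    main = mapM_ print (take 6 (convergents eCF))
    -- 2 % 1, 3 % 1, 8 % 3, 11 % 4, 19 % 7, 87 % 32
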
>> So I guess, phrased another way, a lot of the confusion / challenge
>> in writing floating point programs lies in understanding the
>> representation, its limits, and the ways in which it will implicitly
>> manage that precision-tracking bookkeeping for you.
> Exactly so. There are even people who think the representation is
> approximate instead of the operations! For things like double-double,
> it really matters that the numbers are precise and combine in
> predictable ways.
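
That predictability is what the classic error-free transformations rely
on. Here is a sketch of Knuth's TwoSum, the building block of
double-double arithmetic (assuming IEEE round-to-nearest and no
overflow):

    -- Knuth's TwoSum: for finite Doubles a and b (barring overflow),
    -- a + b == s + err holds EXACTLY over the reals, where s is the
    -- rounded sum. This works only because each Double is an exact
    -- number and (+) rounds to a precisely specified result.
    twoSum :: Double -> Double -> (Double, Double)
    twoSum a b = (s, err)
      where
        s   = a + b
        b'  = s - a
        a'  = s - b'
        err = (a - a') + (b - b')

    main :: IO ()
    main = print (twoSum 1.0e16 1.0)
    -- (1.0e16, 1.0): the unit lost by the rounded addition is recovered
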
> Interestingly, while floating point addition and multiplication are
> not associative, they are not wildly or erratically non-associative,
> and it is possible to reason about the results of operations.
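
For instance, in this small demonstration the two parenthesisations
disagree, but only by one unit in the last place of the result, a bound
you can derive a priori from the round-to-nearest rule:

    main :: IO ()
    main = do
      let l = (0.1 + 0.2) + 0.3 :: Double
          r = 0.1 + (0.2 + 0.3)
      print (l, r)   -- (0.6000000000000001, 0.6)
      print (l - r)  -- 1.1102230246251565e-16, i.e. one ulp of 0.6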