
On Fri, Sep 20, 2013 at 6:17 PM, damodar kulkarni wrote:
> Ok, let's say it is the effect of truncation. But then how do you
> explain this?
>
> Prelude> sqrt 10.0 == 3.1622776601683795
> True
> Prelude> sqrt 10.0 == 3.1622776601683796
> True
Well, that's easy:

λ: decodeFloat 3.1622776601683795
(7120816245988179,-51)
λ: decodeFloat 3.1622776601683796
(7120816245988179,-51)

On my machine, they are equal. Note that ...4 and ...7 are also equal, once they have been truncated to fit in 53 bits (which is what `floatDigits 42.0` tells me; `floatRadix 42.0 == 2`).

> Ok, again something like truncation or rounding seems at work, but
> the precision rules GHC is using seem elusive to me.
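To make the truncation concrete, here is a small self-contained sketch (my own illustration, not from the thread) that reproduces the observations above: both decimal literals round to the very same 53-bit significand, which is why `==` calls them equal.

```haskell
-- Two distinct decimal literals collapse onto one Double, because a
-- Double carries only floatDigits = 53 significand bits (radix 2).
main :: IO ()
main = do
  -- Both literals decode to the identical (significand, exponent) pair:
  print (decodeFloat (3.1622776601683795 :: Double))  -- (7120816245988179,-51)
  print (decodeFloat (3.1622776601683796 :: Double))  -- (7120816245988179,-51)
  -- The precision parameters GHC reports for Double:
  print (floatDigits (42.0 :: Double))                -- 53
  print (floatRadix  (42.0 :: Double))                -- 2
  -- Hence the two literals are the same value, and (==) says so:
  print (3.1622776601683795 == (3.1622776601683796 :: Double))  -- True
```

The rule at work is simply IEEE 754 round-to-nearest: any decimal literal is mapped to the closest representable 53-bit-significand value, so decimals closer together than that spacing coincide.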
It seems to me that you're not familiar with the intricacies of floating-point arithmetic. You're not alone; it's the subject of some of the top questions on StackOverflow. Please find yourself a copy of "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg and read it; it should be very enlightening. It explains a bit about how IEEE 754, pretty much the gold standard for floating-point math, defines these precision rules.

> But more importantly, if one is advised NOT to test equality of two
> floating point values, what is the point in defining an Eq instance?
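Since exact `==` is rarely what you want after arithmetic, the usual workaround is comparison within a tolerance. A minimal sketch; `approxEq` and its `epsilon` are my own names and choices, not a standard API:

```haskell
-- Compare two Doubles up to a relative tolerance. The max-with-1 term
-- makes the test behave like an absolute tolerance near zero.
approxEq :: Double -> Double -> Bool
approxEq x y = abs (x - y) <= epsilon * max 1 (max (abs x) (abs y))
  where
    epsilon = 1e-9  -- tolerance; pick one appropriate to your application

main :: IO ()
main = do
  -- Exact equality after arithmetic is fragile; tolerant comparison is not.
  print (approxEq (sqrt 10.0 * sqrt 10.0) 10.0)  -- True
  print (approxEq 1.0 2.0)                       -- False
```

The right tolerance (and whether it should be relative, absolute, or ULP-based) depends on the computation, which is exactly why no single definition belongs in `Eq`.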
Although equality is defined in IEEE 754, it's not very useful after arithmetic (except perhaps for tests against zero). Eq is a superclass of Ord, however, and Ord is vital to working with floating-point numbers.

> Is the Eq instance there just to make the floating point types members
> of the Num class?
That was also a reason before GHC 7.4; since then, Eq is no longer a superclass of Num.
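To illustrate that change: since GHC 7.4 a type can be a Num without any Eq instance at all. A sketch with a made-up expression type (`Expr` is my own example, not from the thread):

```haskell
-- A symbolic expression type: a Num instance with no Eq instance.
data Expr = Lit Integer
          | Add Expr Expr
          | Mul Expr Expr
          | Neg Expr
  deriving Show

instance Num Expr where
  fromInteger = Lit
  (+)         = Add
  (*)         = Mul
  negate      = Neg
  abs         = error "abs not defined for Expr"
  signum      = error "signum not defined for Expr"

main :: IO ()
main = print (1 + 2 * 3 :: Expr)  -- Add (Lit 1) (Mul (Lit 2) (Lit 3))
```

Before GHC 7.4 this instance would have been rejected unless `Expr` also supported `==`, which a symbolic type has no sensible decidable definition for.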