
Hello,

There were some recent discussions on the floating point support in Haskell and some not-so-pleasant "surprises" people encountered. There is an Eq instance defined for these types! So I tried this:

*Main> sqrt (10.0) == 3.1622776601683795
True
*Main> sqrt (10.0) == 3.16227766016837956
True
*Main> sqrt (10.0) == 3.1622776601683795643
True
*Main> sqrt (10.0) == 3.16227766016837956435443343
True

It seems strange. So my doubts are:

1. How is the Eq instance defined in the case of floating point types in Haskell?
2. Can the Eq instance for floating point types be considered "meaningful"? If yes, how? In general, programmers are **advised** not to base conditional branching on tests for **equality** of two floating point values.
3. Is this particular behaviour GHC specific? (I am using GHC 6.12.1)

If there are references on this, please share.

Thanks and regards,
-Damodar Kulkarni

On ghc 7.6.3:

Prelude> 3.16227766016837956
3.1622776601683795

So if you specify a number with greater-than-available precision, it will be truncated. This isn't an issue with (==), but with the necessary precision limitations of Double.
-- Scott Lawrence

Ok, let's say it is the effect of truncation. But then how do you explain
this?
Prelude> sqrt 10.0 == 3.1622776601683795
True
Prelude> sqrt 10.0 == 3.1622776601683796
True
Here, the last digit **within the same precision range** in the fractional
part is different in the two cases (5 in the first case and 6 in the second
case) and still I am getting **True** in both cases.
So the truncation rules seem to be elusive, to __me__.
And also observe the following:
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.0
False
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000002
True
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000003
False
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000001
True
Prelude>
Ok, again something like truncation or rounding seems to be at work, but the
precision rules GHC is using seem elusive, to me.
(with GHC version 7.4.2)
But more importantly, if one is advised NOT to test equality of two
floating point values, what is the point in defining an Eq instance?
So I am still confused: how can one make *meaningful sense* of the
Eq instance?
Is the Eq instance there just to make __the floating point types__ members
of the Num class?
Thanks and regards,
-Damodar Kulkarni

On Fri, Sep 20, 2013 at 09:47:24PM +0530, damodar kulkarni wrote:
Ok, let's say it is the effect of truncation. But then how do you explain this?
Prelude> sqrt 10.0 == 3.1622776601683795
True
Prelude> sqrt 10.0 == 3.1622776601683796
True
Here, the last digit **within the same precision range** in the fractional part is different in the two cases (5 in the first case and 6 in the second case) and still I am getting **True** in both cases.
What do you mean by the "same precision range"? Notice:

Prelude> 3.1622776601683795 == 3.1622776601683796
True
Prelude> 3.1622776601683795 == 3.1622776601683797
True
Prelude> 3.1622776601683795 == 3.1622776601683798
False

The truncation happens in base 2, not base 10. Is that what's confusing you?

Tom
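One way to see the base-2 effect directly from Haskell is to print the binary digits of the two literals. The sketch below uses floatToDigits from the Numeric module in base; on a machine with ordinary IEEE 754 Doubles, both calls should print the same digit list and exponent (the thread later shows the same thing with decodeFloat):

    import Numeric (floatToDigits)

    main :: IO ()
    main = do
      -- floatToDigits 2 x gives the significand digits of x in base 2,
      -- paired with an exponent; identical output means identical Doubles.
      print (floatToDigits 2 (3.1622776601683795 :: Double))
      print (floatToDigits 2 (3.1622776601683796 :: Double))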

On Fri, Sep 20, 2013 at 12:17 PM, damodar kulkarni
Ok, let's say it is the effect of truncation. But then how do you explain this?
Prelude> sqrt 10.0 == 3.1622776601683795
True
Prelude> sqrt 10.0 == 3.1622776601683796
True
Because there's no reliable difference there. The truncation is in bits (machine's binary representation) NOT decimal digits. A difference of 1 in the final digit is probably within a bit that gets truncated.

I suggest you study IEEE floating point a bit. Also, study why computers do not generally store anything like full precision for real numbers. (Hint: you *cannot* store random real numbers in finite space. Only rationals are guaranteed to be storable in their full precision; irrationals require infinite space, unless you have a very clever representation that can store in terms of some operation like sin(x) or ln(x).)

-- brandon s allbery kf8nh
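For contrast, exact rational arithmetic is available in base via Data.Ratio, where this particular class of surprise does not arise (at the cost of speed, and of covering only rationals). A small, self-contained illustration:

    import Data.Ratio ((%))

    main :: IO ()
    main = do
      -- Rationals are stored exactly as a pair of Integers, so adding
      -- one tenth to itself ten times really does give one.
      print (sum (replicate 10 (1 % 10)) == (1 :: Rational))   -- True
      -- The Double version of the same sum falls short, as discussed
      -- later in this thread.
      print (sum (replicate 10 (0.1 :: Double)) == 1.0)        -- False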

On 2013-09-20 18:31, Brandon Allbery wrote: [--snip--]
unless you have a very clever representation that can store in terms of some operation like sin(x) or ln(x).)
I may just be hallucinating, but I think this is called "describable numbers", i.e. numbers which can be described by some (finite) formula. Not sure how useful they would be in practice, though :).

On Sat, Sep 21, 2013 at 12:35 PM, Bardur Arantsson
On 2013-09-20 18:31, Brandon Allbery wrote: [--snip--]
unless you have a very clever representation that can store in terms of some operation like sin(x) or ln(x).)
I may just be hallucinating, but I think this is called "describable numbers", i.e. numbers which can described by some (finite) formula.
Not sure how useful they would be in practice, though :).
I was actually reaching toward a more symbolic representation, like what Mathematica uses.

-- brandon s allbery kf8nh

Sure. An interesting, if not terribly relevant, fact is that there are
more irrational numbers that we *can't* represent the above way than that
we can (IIRC).
However, those aren't actually interesting in solving the kinds of problems
we want to solve with a programming language, so it's academic, and
symbolic representation certainly gains you some things and costs you some
things in meaningful engineering kinds of ways.

On Sat, Sep 21, 2013 at 12:43 PM, David Thomas
Sure. An interesting, if not terribly relevant, fact is that there are more irrational numbers that we *can't* represent the above way than that we can (IIRC).
I think that kinda follows from diagonalization... it does handle more cases than only using rationals, but pretty much by the Cantor diagonal argument there's an infinite (indeed uncountable) number of reals that cannot be captured by any such trick.

-- brandon s allbery kf8nh

I think that's right, yeah.

On Fri, Sep 20, 2013 at 6:17 PM, damodar kulkarni
Ok, let's say it is the effect of truncation. But then how do you explain this?
Prelude> sqrt 10.0 == 3.1622776601683795
True
Prelude> sqrt 10.0 == 3.1622776601683796
True

Well, that's easy:

λ: decodeFloat 3.1622776601683795
(7120816245988179,-51)
λ: decodeFloat 3.1622776601683796
(7120816245988179,-51)

On my machine, they are equal. Note that ...4 and ...7 are also equal, after they are truncated to fit in 53 bits (which is what `floatDigits 42.0` tells me; `floatRadix 42.0 == 2`).

Ok, again something like truncation or rounding seems at work but the
precision rules the GHC is using seem to be elusive, to me.

It seems to me that you're not familiar with the intricacies of floating-point arithmetic. You're not alone, it's one of the top questions on StackOverflow.

Please find yourself a copy of "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg, and read it. It should be very enlightening. It explains a bit about how IEEE754, pretty much the golden standard for floating point math, defines these precision rules.

But more importantly, if one is advised NOT to test equality of two
floating point values, what is the point in defining an Eq instance?

Although equality is defined in IEEE754, it's not extremely useful after arithmetic (except perhaps for zero tests). Eq is a superclass of Ord, however, which is vital to using floating point numbers.

Is the Eq instance there just to make __the floating point types__ members
of the Num class?

That was also a reason before GHC 7.4 (Eq is no longer a superclass of Num).
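When equality up to rounding error is what is actually wanted, the usual approach is an explicit tolerance rather than (==). A minimal sketch; the name approxEq and the particular epsilon are only illustrative, not anything from a standard library, and the right tolerance is application-specific:

    -- Compare two Doubles up to a mixed absolute/relative tolerance.
    approxEq :: Double -> Double -> Bool
    approxEq x y = abs (x - y) <= eps * maximum [1, abs x, abs y]
      where eps = 1e-9   -- arbitrary choice for this sketch

With this, `approxEq (sqrt 10 * sqrt 10) 10` comes out True, even though the plain (==) comparison shown earlier in the thread did not.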

On Fri, Sep 20, 2013 at 06:34:04PM +0200, Stijn van Drongelen wrote:
Please find yourself a copy of "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg, and read it. It should be very enlightening. It explains a bit about how IEEE754, pretty much the golden standard for floating point math, defines these precision rules.
Ah, this is definitely the best advice in the thread.

It seems to me that you're not familiar with the intricacies of floating-point arithmetic. You're not alone, it's one of the top questions on StackOverflow.
Please find yourself a copy of "What Every Computer Scientist Should Know About Floating-Point Arithmetic" by David Goldberg, and read it. It should be very enlightening. It explains a bit about how IEEE754, pretty much the golden standard for floating point math, defines these precision rules.
I can imagine the following dialogue happening between His[1] Excellency,
the Lord Haskell (if I am allowed to anthropomorphize it) and me:
Me: "My Lord, I just used the (==) on floats and it gave me some unpleasant
surprises."
Lord Haskell: "You fool, why did you tested floats for equality? Don't you
know a bit about floating points?"
Me: "My Lord, I thought it'd be safe as it came with the typeclass
guarantee you give us."
Lord Haskell: "Look, you fool you scum you unenlightened filthy soul, yes I
know I gave you that Eq instance for the floating point BUT nonetheless you
should NOT have used it; NOW go enlighten yourself."
Me: "My Lord, thank you for the enlightenment."
I don't know how many people out there are being enlightened by His
Excellency, the Lord Haskell, on floating point equality and other things.
Yes, many good old junkies, like the filthier, kinkier C, were keen on
enlightening people on such issues. But, see, C is meant to be used for
such enlightenment.
Although I am not an expert on floating point numbers, the paper is not
surprising, as I have learnt at least some of the things given in the paper
the hard way, by burning myself a couple of times on floating point
while programming in good old C.
But that even Haskell was tempted to define an Eq instance for that scary
thing, __that__ was a new enlightenment for me.
Life is full of opportunities to enlighten yourself.
That was also a reason before GHC 7.4 (Eq is no longer a superclass of Num).
This seems a good step forward; removing the Eq instance altogether on
floating point types would be much better (unless, as pointed out by
Brandon, "you have a very clever representation that can store (floats) in
terms of some operation like sin(x) or ln(x) (with infinite precision)").
I know I might be wrong in expecting this change as it might break a lot of
existing code. But why not daydream?
[1] Please read His/Her
Thanks and regards,
-Damodar Kulkarni

On Fri, Sep 20, 2013 at 7:35 PM, damodar kulkarni
This seems a good step forward, removing the Eq instance altogether on floating point types would be much better; (unless as pointed out by Brandon, "you have a very clever representation that can store (floats) in terms of some operation like sin(x) or ln(x) (with infinite precision)")
Please don't. The problem isn't with the Eq instance. It does exactly
what it should - it tells you whether or not two floating point
objects are equal.
The problem is with floating point arithmetic in general. It doesn't
obey the laws of arithmetic as we learned them, so they don't behave
the way we expect. The single biggest gotcha is that two calculations
we expect to be equal often aren't. As a result of this, we warn
people not to do equality comparison on floats.
So people who don't understand that wind up asking "Why doesn't this
behave the way I expect?" Making floats not be an instance of Eq will
just cause those people to ask "Why can't I compare floats for
equality?". This will lead to pretty much the same explanation. It
will also mean that people who know what they're doing who want to do
so will have to write their own code to do it.
It also won't solve the *other* problems you run into with floating
point numbers, like unexpected zero values from the hole around zero.
Given that we have both Data.Ratio and Data.Decimal, I would argue
that removing floating point types would be better than making them
not be an instance of Eq.
It might be interesting to try and create a floating-point Numeric
type that included error information. But I'm not sure there's a good
value for the expression 1.0±0.1 < 0.9±0.1.
Note that Brandon was talking about representing irrationals exactly,
which floats don't do. Those clever representations he talks about
will do that - for some finite set of irrationals. They still won't
represent all irrationals or all rationals - like 0.1 - exactly, so
the problems will still exist. I've done microcode implementations of
floating point representations that didn't have a hole around 0. They
still don't work "right".
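To make "two calculations we expect to be equal often aren't" concrete, here is the classic case in GHCi; any IEEE 754 double behaves this way, it is not GHC-specific:

Prelude> 0.1 + 0.2 == (0.3 :: Double)
False
Prelude> 0.1 + 0.2
0.30000000000000004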

Making floats not be an instance of Eq will just cause those people to ask "Why can't I compare floats for equality?". This will lead to pretty much the same explanation.
Yes, and then all the torrent of explanation I got here about the intricacies of floating point operations would seem more appropriate. Then you can tell such a person how the demand for a general notion of equality on floats is tantamount to a demand for an oxymoron: because, depending on various factors, the notion of equality for floats itself floats (sorry for the pun). But in the given situation, such an explanation seems uncalled for, as it goes like: "we have given you the Eq instance on the floating point types BUT still you are expected NOT to use it because the floating point thingy is very blah blah blah..." etc.

It will also mean that people who know what they're doing who want to do so will have to write their own code to do it.

Not much of a problem with that, as then it would be more like people who use unsafePerformIO, where Haskell clearly tells you that you are on your own. You might provide them `unsafePerformEqOnFloats`, for instance. And then if someone complains that `unsafePerformEqOnFloats` doesn't test for equality as in equality, by all means flood them with "you asked for it, you got it" type messages and the above mentioned explanations about the intricacies of floating point operations.

Given that we have both Data.Ratio and Data.Decimal, I would argue
that removing floating point types would be better than making them not be an instance of Eq.
This seems better. Let people have support for floating point types in
some other libraries IF at all they want them, but then it would
bear no burden on the Num typeclass and, more importantly, on the users of
the Num class.
In this case, such people might implement their __own__ notion of equality
for floating points. And if they intend to do such a thing, then it would
not be much of an issue to expect from them the detailed knowledge of all
the intricacies of handling equality for floating points... as anyway they
themselves are asking for it and they are NOT relying on Haskell's Num
typeclass for it.
Thanks and regards,
-Damodar Kulkarni

I think you are trying to solve a problem that doesn't exist.

* Float and Double are imprecise types by their very nature. That's exactly what people are forgetting, and exactly what's causing misunderstandings. Perhaps(!) it would be better to remove the option to use rational literals as floats, and require people to convert rationals using approx :: (Approximates b a) => a -> b when they want to use FP math (instance Approximates Float Rational, etc; see the sketch below).

* Pure equality tests make perfect sense in a few situations, so Eq is required. In fact, it's required to have an IEEE754-compliant implementation.

* As mentioned, there is a total order (Ord) on floats (which is what you should be using when checking whether two approximations are approximately equal), which implies that there is also an equivalence relation (Eq).
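A rough sketch of what that hypothetical conversion class could look like, following the signature suggested in the first point above (the class, instances and names here are made up purely for illustration):

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    import Data.Ratio ((%))

    -- Converting an exact Rational into an approximate type is explicit.
    class Approximates b a where
      approx :: a -> b

    instance Approximates Float Rational where
      approx = fromRational

    instance Approximates Double Rational where
      approx = fromRational

    x :: Double
    x = approx (1 % 10 :: Rational)   -- the approximation is asked for explicitly

    main :: IO ()
    main = print x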

On 21 September 2013 08:34, Stijn van Drongelen
* As mentioned, there is a total order (Ord) on floats (which is what you should be using when checking whether two approximations are approximately equal), which implies that there is also an equivalence relation (Eq).
how do you get a total order when nan compares false with everything including itself?

On Sep 21, 2013 9:38 AM, "Colin Adams"
On 21 September 2013 08:34, Stijn van Drongelen
wrote: * As mentioned, there is a total order (Ord) on floats (which is what
you should be using when checking whether two approximations are approximately equal), which implies that there is also an equivalence relation (Eq).
how do you get a total order when nan compares false with everything
including itself?

Good point. It should be a partial order.

Let me quibble.
* Float and Double are imprecise types by their very nature. That's exactly what people are forgetting, and exactly what's causing misunderstandings.
Float and Double are precise types. What is imprecise is the correspondence between finite precision floating point types (which are common to all programming languages) and the mathematical real numbers. This imprecision is manifest in failures of the intended homomorphism from the reals to the floating point types.
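Using Rational (exact) as a stand-in for the mathematical reals, that failure of the intended homomorphism is easy to exhibit: converting and then adding is not the same as adding and then converting. A small illustration:

    import Data.Ratio ((%))

    -- The intended homomorphism h = fromRational :: Rational -> Double
    -- "should" satisfy h (a + b) == h a + h b, but it does not:
    main :: IO ()
    main = do
      let h = fromRational :: Rational -> Double
          a = 1 % 10
          b = 2 % 10
      print (h (a + b))                -- 0.3
      print (h a + h b)                -- 0.30000000000000004
      print (h (a + b) == h a + h b)   -- False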

On 2013-09-21 06:16, Mike Meyer wrote:
The single biggest gotcha is that two calculations we expect to be equal often aren't. As a result of this, we warn people not to do equality comparison on floats.
The Eq instance for Float violates at least one expected law of Eq:

Prelude> let nan = 0/0
Prelude> nan == nan
False

There was a proposal to change this, but it didn't really go anywhere. See:

http://permalink.gmane.org/gmane.comp.lang.haskell.libraries/16218

(FWIW, even if the instances cannot be changed/removed, I'd love to see some sort of explicit opt-in before these dangerous/surprising instances become available.)

Regards,
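The practical fallout is that everything built on top of (==) inherits the same surprise for NaN; for instance, in GHCi:

Prelude> let nan = 0/0 :: Double
Prelude> nan `elem` [nan]
False
Prelude> lookup nan [(nan, "found")]
Nothing

Both elem and lookup use (==) internally, and nan == nan is False, so the element is never "found" even though it is plainly in the list.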

On Sat, Sep 21, 2013 at 2:21 AM, Bardur Arantsson
On 2013-09-21 06:16, Mike Meyer wrote:
The single biggest gotcha is that two calculations we expect to be equal often aren't. As a result of this, we warn people not to do equality comparison on floats. The Eq instance for Float violates at least one expected law of Eq:
Prelude> let nan = 0/0
Prelude> nan == nan
False
Yeah, NaNs are a whole 'nother bucket of strange.
But if violating an expected law of a class is a reason to drop it as
an instance, consider:
Prelude> e > 0
True
Prelude> 1 + e > 1
False
Of course, values "not equal when you expect them to be" breaking
equality means that they also don't order the way you expect:
Prelude> e + e + 1 > 1 + e + e
True
So, should Float's also not be an instance of Ord?
I don't think you can turn IEEE 754 floats into a well-behaved numeric
type. A wrapper around a hardware type for people who want that
performance and can deal with its quirks should provide access to
as much of the type's behavior as possible, and equality comparison
is part of IEEE 754 floats.

I do have to agree with Damodar Kulkarni that different laws imply different classes. However, this will break **a lot** of existing software.

If we were to do this, only Eq and Ord need to be duplicated, as they cause most of the problems. Qualified imports should suffice to differentiate between the two.

import qualified Data.Eq.Approximate as A
import qualified Data.Ord.Approximate as A

main = print $ 3.16227766016837956 A.== 3.16227766016837955

On 2013-09-21, at 4:46 AM, Stijn van Drongelen
I do have to agree with Damodar Kulkarni that different laws imply different classes. However, this will break **a lot** of existing software.
You could argue that the existing software is already broken.
If we would do this, only Eq and Ord need to be duplicated, as they cause most of the problems. Qualified imports should suffice to differentiate between the two.
import qualified Data.Eq.Approximate as A
import qualified Data.Ord.Approximate as A
main = print $ 3.16227766016837956 A.== 3.16227766016837955
As soon as you start doing computations with fp numbers things get much worse. Something like Edward Kmett's Numeric.Interval package would likely be helpful, a start at least (and the comments in the Numeric.Interval documentation are amusing).

In the distant past, when I was worried about maintaining accuracy in a solids modeller, we went with an interval arithmetic library that we *carefully* implemented. It worked. Unpleasant in C, but it worked.

And this link might be interesting: http://lambda-the-ultimate.org/node/1301

Cheers,
Bob
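For a flavour of the interval idea, here is a deliberately naive sketch. It has nothing to do with the actual Numeric.Interval API, and it ignores outward rounding of the endpoints, which a serious implementation (like the one described above) has to get right:

    -- A closed interval [lo, hi] meant to bracket the true real result.
    data Interval = I Double Double deriving Show

    addI, mulI :: Interval -> Interval -> Interval
    addI (I a b) (I c d) = I (a + c) (b + d)
    mulI (I a b) (I c d) = I (minimum ps) (maximum ps)
      where ps = [a * c, a * d, b * c, b * d]

    -- Instead of asking "are these results equal?", one asks
    -- "does the result interval contain the value I expect?"
    contains :: Interval -> Double -> Bool
    contains (I a b) x = a <= x && x <= b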

On Sep 21, 2013 4:17 PM, "Bob Hutchison"
On 2013-09-21, at 4:46 AM, Stijn van Drongelen
wrote: I do have to agree with Damodar Kulkarni that different laws imply
different classes. However, this will break **a lot** of existing software.
You could argue that the existing software is already broken.
I agree, but that might also be hardly relevant when fixing an existing language.
If we would do this, only Eq and Ord need to be duplicated, as they cause most of the problems. Qualified imports should suffice to differentiate between the two.
import qualified Data.Eq.Approximate as A import qualified Data.Ord.Approximate as A
main = print $ 3.16227766016837956 A.== 3.16227766016837955
As soon as you start doing computations with fp numbers things get much worse.
Only when you start reasoning about (in)equalities. Really, in (a + b) * c = a * c + b * c, it isn't + or * that's causing problems, but =. I'm going to look at Kmett's work and that ltu link when I'm home ;)

On Sep 21, 2013 9:17 AM, "Bob Hutchison"
On 2013-09-21, at 4:46 AM, Stijn van Drongelen
wrote: I do have to agree with Damodar Kulkarni that different laws imply different classes. However, this will break **a lot** of existing software. You could argue that the existing software is already broken.
I'd argue that it's not all broken, and you're breaking it all.
If we would do this, only Eq and Ord need to be duplicated, as they cause most of the problems. Qualified imports should suffice to differentiate between the two. import qualified Data.Eq.Approximate as A import qualified Data.Ord.Approximate as A
main = print $ 3.16227766016837956 A.== 3.16227766016837955

As soon as you start doing computations with fp numbers things get much worse.
Exactly. The Eq and Ord instances aren't what's broken, at least when you're dealing with numbers (NaNs are another story). That there are pairs of non-zero numbers that when added result in one of the two numbers is broken. That addition isn't associative is broken. That expressions don't obey the substitution principle is broken. But you can't tell these things are broken until you start comparing values. Eq and Ord are just the messengers.
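The non-associativity mentioned here is easy to reproduce in GHCi; this is standard IEEE 754 double behaviour, not anything GHC-specific:

Prelude> (0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3 :: Double)
False
Prelude> (0.1 + 0.2) + 0.3
0.6000000000000001
Prelude> 0.1 + (0.2 + 0.3)
0.6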

On 2013-09-21 23:08, Mike Meyer wrote:
Exactly. The Eq and Ord instances aren't what's broken, at least when you're dealing with numbers (NaNs are another story). That there are pairs
According to Haskell NaN *is* a number.
Eq and Ord are just the messengers.
No. When we declare something an instance of Monad or Applicative (for example), we expect(*) that thing to obey certain laws. Eq and Ord instances for Float/Double do *not* obey the expected laws.

Regards,
/b

(*) Alas, in general, the compiler cannot prove these things, so we rely on assertion or trust.

On 2013-09-21 23:08, Mike Meyer wrote:
Exactly. The Eq and Ord instances aren't what's broken, at least when you're dealing with numbers (NaNs are another story). That there are
On Sat, Sep 21, 2013 at 5:28 PM, Bardur Arantsson
According to Haskell NaN *is* a number.
Trying to make something whose name is "Not A Number" act like a number sounds broken from the start.
Eq and Ord are just the messengers. No. When we declare something an instance of Monad or Applicative (for example), we expect(*) that thing to obey certain laws. Eq and Ord instances for Float/Double do *not* obey the expected laws.
I just went back through the thread, and the only examples I could
find where that happened (as opposed to where floating point
calculations or literals resulted in unexpected values) were with
NaNs. Just out of curiosity, do you know of any that don't involve
NaNs?
Float violates the expected behavior of instances of - well, pretty
much everything it's an instance of. Even if you restrict yourself to
working with integer values that can be represented as floats. If
we're going to start removing it as an instance for violating instance
expectations, we might as well take it out of the numeric stack (or
the language) completely.
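One example of the "integer values" surprise: Float has a 24-bit significand, so above 2^24 consecutive whole numbers stop being representable and ordinary arithmetic on them quietly goes wrong (GHCi):

Prelude> let big = 2 ^ 24 :: Float
Prelude> big + 1 == big
True
Prelude> big + 1 > big
False

Double pushes the same cliff out to 2^53, but it is still there.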

I had a longer email written out, but decided a shorter one is better.
I warmly point folks to use libs like the numbers package on hackage
http://hackage.haskell.org/packages/archive/numbers/2009.8.9/doc/html/Data-N...
It has some great alternatives to standard floats and doubles.
The big caveat, however, is that all your computations will be painfully slower,
by several orders of magnitude. And sometimes that's a great tradeoff! But
sometimes it isn't. At the end of the day, you need to understand how to do
math on the computer in a fashion that accepts that there is going to be
finite precision. There is no alternative but to work with that
understanding.
Numbers on the computer have many forms, and many tradeoffs. There is no
one true single best approach.
cheers
-Carter

2013/9/22 Mike Meyer
On Sat, Sep 21, 2013 at 5:28 PM, Bardur Arantsson
wrote: Trying to make something whose name is "Not A Number" act like a number sounds broken from the start.
The point here is that IEEE floats are actually more something like a "Maybe Float", with various "Nothing"s, i.e. the infinities and NaNs, which all propagate in a well-defined way. Basically a monad built into your CPU's FP unit. ;-)
I just went back through the thread, and the only examples I could find where that happened (as opposed to where floating point calculations or literals resulted in unexpected values) was with NaNs. Just out of curiosity, do you know of any that don't involve NaNs?
Well, with IEEE arithmetic almost nothing you learned in school about math holds anymore. Apart from rounding errors, NaNs and infinities, -0 is another "fun" part: x * (-1) is not the same as 0 - x (Hint: Try with x == 0 and use recip on the result.)
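Spelling out that hint in GHCi: the two expressions produce the two different IEEE zeros, which (==) declares equal even though recip can tell them apart:

Prelude> let x = 0 :: Double
Prelude> x * (-1)
-0.0
Prelude> 0 - x
0.0
Prelude> x * (-1) == 0 - x
True
Prelude> recip (x * (-1))
-Infinity
Prelude> recip (0 - x)
Infinity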
Float violates the expected behavior of instances of - well, pretty much everything it's an instance of. Even if you restrict yourself to working with integer values that can be represented as floats. If we're going to start removing it as an instance for violating instance expectations, we might as well take it out of the numeric stack (or the language) completely.
Exactly, and I am sure 99.999% of all people wouldn't like that removal. Learn IEEE arithmetic, hate it, and deal with it. Or use something different, which is probably several magnitudes slower. :-/

On Tue, Sep 24, 2013 at 5:39 PM, Sven Panne
2013/9/22 Mike Meyer
: On Sat, Sep 21, 2013 at 5:28 PM, Bardur Arantsson
wrote: Trying to make something whose name is "Not A Number" act like a number sounds broken from the start.

The point here is that IEEE floats are actually more something like a "Maybe Float", with various "Nothing"s, i.e. the infinities and NaNs, which all propagate in a well-defined way.
So, `Either IeeeFault Float`? ;)

On Tue, Sep 24, 2013 at 11:36 AM, Stijn van Drongelen
On Tue, Sep 24, 2013 at 5:39 PM, Sven Panne
The point here is that IEEE floats are actually more something like a "Maybe Float", with various "Nothing"s, i.e. the infinities and NaNs, which all propagate in a well-defined way.
So, `Either IeeeFault Float`? ;)
Sort of, but IeeeFault isn't really a zero. Sometimes they can get back to a normal Float value:

Prelude> let x = 1.0/0
Prelude> x
Infinity
Prelude> 1/x
0.0

Also, IEEE float support doesn't make sense as a library, it needs to be built into the compiler (ignoring extensible compiler support via the FFI). The whole point of IEEE floats is that they're very fast, but in order to take advantage of that the compiler needs to know about them in order to use the proper CPU instructions. Certainly you could emulate them in software, but then they'd no longer be fast, so there'd be no point to it.
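For code that has to cope with these special values, the RealFloat class in the Prelude already provides predicates, which is safer than trying to compare against NaN or Infinity with (==) (a comparison with NaN is never True). For example, in GHCi:

Prelude> let nan = 0/0 :: Double; inf = 1/0 :: Double
Prelude> (isNaN nan, isInfinite inf, isNegativeZero (-0.0 :: Double))
(True,True,True)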

Ok, let's say it is the effect of truncation. But then how do you explain
Prelude> sqrt 10.0 == 3.1622776601683795
True
Prelude> sqrt 10.0 == 3.1622776601683796
True
Here, the last digit **within the same precision range** in the fractional part is different in the two cases (5 in the first case and 6 in
And also observe the following:
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.0
False
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000002
True
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000003
False
Prelude> (sqrt 10.0) * (sqrt 10.0) == 10.000000000000001
True
Ok, again something like truncation or rounding seems at work but the
On Fri, Sep 20, 2013 at 11:17 AM, damodar kulkarni
(with GHC version 7.4.2)
Here's a quick-and-dirty C program to look at the values. I purposely
print decimal digits beyond the precision range to illustrate that,
even though we started with different representations, the actual
values are the same even if you use decimal representations longer
than the ones you started with. In particular, note that 0.1 when
translated into binary is a repeating fraction. Why the last hex digit
is a instead of 9 is left as an exercise for the reader. That this
happens also means the number actually stored when you enter 0.1 is
*not* 0.1, but as close to it as you can get in the given
representation.
#include
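The same experiment can be run from GHCi without leaving Haskell: toRational recovers the exact value a Double actually stores, and for 0.1 that value is a fraction whose denominator is a power of two, not ten:

Prelude> toRational (0.1 :: Double)
3602879701896397 % 36028797018963968
Prelude> 36028797018963968 == 2 ^ 55
True

So the stored number is merely the closest representable neighbour of 1/10.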
But more importantly, if one is advised NOT to test equality of two floating point values, what is the point in defining an Eq instance? So I am still confused as to how can one make a *meaningful sense* of the Eq instance? Is the Eq instance there just to make __the floating point types__ members of the Num class?
You can do equality comparisons on floats. You just have to know what
you're doing. You also have to be aware of how NaN's (NaN's are float
values that aren't numbers, and are even odder than regular floats)
behave in your implementation, and how that affects your
application. But the same is true of doing simple arithmetic with
them.
Note that you don't have to play with square roots to see these
issues. The classic example you see near the start of any numerical
analysis class is:
Prelude> sum $ take 10 (repeat 0.1)
0.9999999999999999
Prelude> 10.0 * 0.1
1.0
This is not GHC specific, it's inherent in floating point number
representations. Read the Wikipedia section on accuracy problems
(http://en.wikipedia.org/wiki/Floating_point#Accuracy_problems) for
more information.
Various languages have done funky things to deal with these issues,
like rounding things up, or providing "fuzzy" equality. These things
generally just keep people from realizing when they've done something
wrong, so the approach taken by ghc is arguably a good one.

On 13-09-20 07:47 AM, damodar kulkarni wrote:
*Main> sqrt (10.0) ==3.1622776601683795 True [...] *Main> sqrt (10.0) ==3.16227766016837956435443343 True
This is not even specific to Haskell. Every language that provides
floating point and floating point equality does this.
(To date, P(provides floating point equality | provides floating point)
seems to be still 1.)
In the case of Haskell, where you may have a choice:
Do you want floating point > < ?
If you say yes, then you have two problems.
1. At present, Haskell puts > < under Ord, and Ord under Eq. You must
accept Eq to get Ord. If you reject this, you're asking the whole
community to re-arrange that class hierarchy just for a few types.
2. With or without your approval, one can still defy you and define:
eq x y = not_corner_case x && not_corner_case y &&
         not (x < y) && not (y < x)

On 20/09/2013, at 11:47 PM, damodar kulkarni wrote:
There is an Eq instance defined for these types!
So I tried this:

*Main> sqrt (10.0) == 3.1622776601683795
True
*Main> sqrt (10.0) == 3.16227766016837956
True
*Main> sqrt (10.0) == 3.1622776601683795643
True
*Main> sqrt (10.0) == 3.16227766016837956435443343
True
It seems strange.
But *WHY*? There is nothing in the least strange about this! Did it occur to you to try

    3.1622776601683795 == 3.16227766016837956435443343

(Hint: the answer does not begin with "F".)

Four times you asked if the square root of 10 was equal to a certain (identically valued but differently written) number, and each time you got the same answer. Had any of the answers been different, that would have been shocking.
So my doubts are: 1. I wonder how the Eq instance is defined in case of floating point types in Haskell?
At least for finite numbers, the answer is "compatibly with C, Fortran, the IEEE standard, and every programming language on your machine that supports floating point arithmetic using IEEE doubles."
2. Can the Eq instance for floating point types be considered "meaningful"? If yes, how?
Except for infinities and NaNs, yes. As exact numerical equality. When you get into infinities and NaNs, things get trickier, but that's not at issue here. It seems to me that you may for some reason be under the impression that the 3.xxxx values you displayed have different values. As mathematical real numbers, they do. But they all round to identically the same numerical value in your computer.
In general, programmers are **advised** not to base conditional branching on tests for **equality** of two floating point values.
At least not until they understand floating point arithmetic.
3. Is this particular behaviour GHC specific? (I am using GHC 6.12.1)
No.
If there are references on this please share.
The IEEE floating point standard.
The LIA-1 standard.
The C99 and C11 standards.
"What every computer scientist should know about floating point arithmetic"
-- citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.22.6768

Most of the unpleasant surprises people have with Haskell floating point numbers are *floating point* surprises, not *Haskell* surprises.
participants (16)
- Albert Y. C. Lai
- Bardur Arantsson
- Bob Hutchison
- Brandon Allbery
- Carter Schonwald
- Colin Adams
- damodar kulkarni
- David Thomas
- John Lato
- Mike Meyer
- Richard A. O'Keefe
- Scott Lawrence
- Stijn van Drongelen
- Stuart A. Kurtz
- Sven Panne
- Tom Ellis