
I recently defied my supervisor and used Haskell to write my coursework instead of C. All went well until I needed floating point and started having odd results. As far as I can tell it isn't substantially affecting my results, but it is rather embarrassing after slagging off C so much. Here are some examples:

*Main> 0.2 + 0.1
0.30000000000000004
*Main> 0.200000000000000 + 0.100000000000000000
0.30000000000000004
*Main> 0.3 + 0.1
0.4
*Main> 0.2 + 0.1
0.30000000000000004
*Main> it + 0.1
0.4

I assume this is a result of the discrepancy between binary and decimal representations of the numbers. Is there any way around it? For a start, it would be nice to have a simple way to get

0.1 + 0.2 == 0.3 = True

This is with GHC 6.4.1 and GCC 4.0.3.

Thanks, Jamie

Hi,
*Main> 0.2 + 0.1
0.30000000000000004
Prelude> (0.1 :: Float) + (0.2 :: Float)
0.3
Prelude> (0.1 :: Double) + (0.2 :: Double)
0.30000000000000004
Prelude> (0.1 :: Float) + 0.2 == 0.3
True

If you use Floats instead of Doubles, it stores less precision, and so gets it right. Also, if what you really want is decimals, why not:

data Decimal = Decimal {dps :: Integer, value :: Integer}

Then define your Num instance on Decimal, and have perfect high-fidelity numbers throughout.

Thanks
Neil
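Filling in Neil's sketch: one way the instances might look, assuming `dps` counts decimal places and `value` is the scaled integer, so `Decimal 2 30` represents 0.30. The field names are from his post; everything else below is an illustrative guess, not code from the thread.

```haskell
-- Fixed-point decimal: Decimal dps value represents value / 10^dps.
data Decimal = Decimal { dps :: Integer, value :: Integer }

-- Bring a Decimal up to p decimal places (assumes p >= dps of the argument).
rescale :: Integer -> Decimal -> Integer
rescale p (Decimal d v) = v * 10 ^ (p - d)

instance Eq Decimal where
  a == b = rescale p a == rescale p b
    where p = max (dps a) (dps b)

instance Show Decimal where
  show (Decimal d v) = show v ++ "e-" ++ show d

instance Num Decimal where
  a + b = Decimal p (rescale p a + rescale p b)
    where p = max (dps a) (dps b)
  Decimal d1 v1 * Decimal d2 v2 = Decimal (d1 + d2) (v1 * v2)
  negate (Decimal d v) = Decimal d (negate v)
  abs    (Decimal d v) = Decimal d (abs v)
  signum (Decimal _ v) = Decimal 0 (signum v)
  fromInteger n        = Decimal 0 n

main :: IO ()
main = print (Decimal 1 1 + Decimal 1 2 == Decimal 1 3)  -- 0.1 + 0.2 == 0.3
```

With this, addition rescales both operands to a common number of decimal places first, so 0.1 + 0.2 == 0.3 holds exactly.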

This is more of a haskell-café than a ghc-users question, but I'll put some thoughts here:

On 2006-08-30 at 19:38 BST, Jamie Brandon wrote:
I recently defied my supervisor and used Haskell to write my coursework instead of C.
If the object of the course is to teach you (more about) C, that might not go down too well :-)
All went well until I needed floating point and started having odd results.
My first experience with floating point numbers (some time in the middle of the last half of the last century...) was so discouraging that I decided to avoid them as much as possible. They really are very badly behaved.
As far as I can tell it isn't substantially affecting my results but it is rather embarrassing after slagging off C so much. Here are some examples:
*Main> 0.2 + 0.1
0.30000000000000004
*Main> 0.200000000000000 + 0.100000000000000000
0.30000000000000004
*Main> 0.3 + 0.1
0.4
*Main> 0.2 + 0.1
0.30000000000000004
*Main> it + 0.1
0.4
I assume this is a result of the discrepancy between binary and decimal representations of the numbers.
That and the finiteness of the representations.
Is there any way around it? For a start, it would be nice to have a simple way to get

0.1 + 0.2 == 0.3 = True
You can have that, like this:
*MyIdiom> 0.1 + 0.2 == 0.3
False
*MyIdiom> :m + Ratio
*MyIdiom Ratio> 0.1 + 0.2 == (0.3::Rational)
True
but doing something like that is really the only reliable
way. If you are working with floating point, equality is
something you should never test for; you should check to see
if the difference between the numbers is smaller than some
acceptable epsilon.
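The epsilon comparison Jon describes can be sketched as follows. The operator name (~=~) is one suggested elsewhere in the thread, and the epsilon value here is an arbitrary illustrative choice; it should really be picked to suit the scale of the numbers being compared.

```haskell
-- A sloppier comparison for floating point: equal to within some epsilon.
infix 4 ~=~
(~=~) :: Double -> Double -> Bool
x ~=~ y = abs (x - y) < 1e-9

main :: IO ()
main = do
  print (0.1 + 0.2 == (0.3 :: Double))  -- False
  print (0.1 + 0.2 ~=~ 0.3)             -- True
```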
Incidentally, if I run
#include

If the object of the course is to teach you (more about) C, that might not go down too well :-)
It's on computer-aided research in maths. The choice of language is ours, but the staff refuse to help with any project not written in C. I'm not sure what we're supposed to be learning, but Haskell has seemed far more suitable than C for all the work I've done so far - I can't imagine doing the questions on set theory in C.
You can always define an infix version of == (maybe ~=~ or something) that is a bit sloppier in its comparisons, of course.
I think that may be the best solution in the short term. I'll look in Data.Ratio too, though it's not too useful here as I'm doing numerical integration. Most of the time the functions passed in will have exponentials, which means I'll always end up using floating point.
Like the other respondents said, floating point is nasty, best avoided when possible, and used with caution plus the advice of numerical experts when unavoidable.
I agree entirely. Unfortunately I'm supposed to be an expert. Thanks for your help. Jamie

On Wed, 2006-08-30 at 22:29 +0100, Jamie Brandon wrote:
If the object of the course is to teach you (more about) C, that might not go down too well :-)
It's on computer-aided research in maths.
[]
You can always define an infix version of == (maybe ~=~ or something) that is a bit sloppier in its comparisons, of course.
Congratulations! You have just discovered Constructivist Mathematics :)
-- John Skaller <skaller at users dot sf dot net> Felix, successor to C++: http://felix.sf.net

On Wed, Aug 30, 2006 at 07:38:35PM +0100, Jamie Brandon wrote:
I recently defied my supervisor and used Haskell to write my coursework instead of C. All went well until I needed floating point and started having odd results. As far as I can tell it isn't substantially affecting my results but it is rather embarrassing after slagging off C so much. Here are some examples:
*Main> 0.2 + 0.1
0.30000000000000004
*Main> 0.200000000000000 + 0.100000000000000000
0.30000000000000004
*Main> 0.3 + 0.1
0.4
*Main> 0.2 + 0.1
0.30000000000000004
*Main> it + 0.1
0.4
I assume this is a result of the discrepancy between binary and decimal representations of the numbers. Is there any way around it? For a start, it would be nice to have a simple way to get

0.1 + 0.2 == 0.3 = True
This is with GHC 6.4.1 and GCC 4.0.3
The trouble here is that ghci is printing more digits than it really ought to be printing. There's no language I'm aware of in which 0.1 + 0.2 == 0.3 evaluates to True. With modern processors and compilers there's not even a guarantee (although the C language makes such a guarantee--it's just not obeyed) that

double a = 0.1*0.3 + 0.2;
if (a != 0.1*0.3 + 0.2) exit(1);

will work properly, i.e. if you stick enough code between the definition of the variable and the comparison, this check may fail. The reason is that intermediate results are commonly stored at higher than double precision, leading to non-IEEE arithmetic. It's sad, but we're stuck with it, as I'm not aware of any compiler that is capable of generating IEEE arithmetic.

Note that in Haskell you at least have the option of using the Rational data type, which *will* give you exact arithmetic, as long as you don't need transcendental functions.
-- David Roundy
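David's point about Rational can be checked directly; a quick sketch using Data.Ratio:

```haskell
import Data.Ratio ((%))

-- Rational arithmetic is exact: 1/10 + 2/10 really is 3/10.
main :: IO ()
main = do
  print (1 % 10 + 2 % 10 == (3 % 10 :: Rational))  -- True
  -- Fractional literals are exact here too, since at type Rational
  -- they go through fromRational rather than being rounded to a
  -- binary fraction:
  print (0.1 + 0.2 == (0.3 :: Rational))           -- True
```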

On Wed, 2006-08-30 at 14:58 -0400, David Roundy wrote:
It's sad, but we're stuck with it, as I'm not aware of any compiler that is capable of generating IEEE arithmetic.
Gcc man page:

-ffast-math
    Sets -fno-math-errno, -funsafe-math-optimizations, -fno-trapping-math, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range.

    This option causes the preprocessor macro "__FAST_MATH__" to be defined.

    This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.

Exact portable calculations are the default: you have to explicitly enable non-IEEE/ISO C conformance with a switch to get good performance.
-- John Skaller <skaller at users dot sf dot net> Felix, successor to C++: http://felix.sf.net

* > On Wed, 2006-08-30 at 14:58 -0400, David Roundy wrote:
It's sad, but we're stuck with it, as I'm not aware of any compiler that is capable of generating IEEE arithmetic.
Gcc man page:
-ffast-math
You quoted the wrong paragraph. Here's the right one:

`-ffloat-store'
    Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.

    This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a `double' is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use `-ffloat-store' for such programs, after modifying them to store all pertinent intermediate computations into variables.

On Thu, Aug 31, 2006 at 07:44:33AM +0200, Florian Weimer wrote:
[...]
You quoted the wrong paragraph. Here's the right one:
`-ffloat-store' Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a `double' is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use `-ffloat-store' for such programs, after modifying them to store all pertinent intermediate computations into variables.
But alas, in my experience even -ffloat-store doesn't allow truly reproducible arithmetic, although it's much better than the default behavior. I struggled with this quite a while, a few years back, when trying to implement tests that my parallelization would produce bitwise identical results to the serial version. I needed both -ffloat-store and some code-hacks to keep the compiler from doing anything tricky (but don't quite remember what...). -- David Roundy

David Roundy wrote:
On Thu, Aug 31, 2006 at 07:44:33AM +0200, Florian Weimer wrote:
[...]
But alas, in my experience even -ffloat-store doesn't allow truly reproducible arithmetic, although it's much better than the default behavior. I struggled with this quite a while, a few years back, when trying to implement tests that my parallelization would produce bitwise identical results to the serial version. I needed both -ffloat-store and some code-hacks to keep the compiler from doing anything tricky (but don't quite remember what...).
Nowadays -mfpmath=sse is better than -ffloat-store, because SSE2 has single and double-precision floating point arithmetic. I get pretty reproducible arithmetic on x86_64 this way, where SSE2 is the default. Cheers, Simon

On Fri, Sep 01, 2006 at 10:11:30AM +0100, Simon Marlow wrote:
Nowadays -mfpmath=sse is better than -ffloat-store, because SSE2 has single and double-precision floating point arithmetic. I get pretty reproducible arithmetic on x86_64 this way, where SSE2 is the default.
Thanks for the tip! -- David Roundy

On Aug 30, 2006, at 14:58 , David Roundy wrote:
[...]
The trouble here is that ghci is printing more digits than it really ought to be printing.
No, I don't think it is. Ghci is printing the number that is closest of all numbers in decimal notation to the Double in question (i.e., 0.1+0.2). Printing it with fewer decimals would yield a different number if it was read back. -- Lennart
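Lennart's claim can be verified: show's rendering of a Double reads back to exactly the same Double, while the shorter "0.3" does not. A quick check:

```haskell
-- show picks a decimal that round-trips through read exactly.
main :: IO ()
main = do
  let x = 0.1 + 0.2 :: Double
  putStrLn (show x)           -- 0.30000000000000004
  print (read (show x) == x)  -- True
  print (read "0.3" == x)     -- False: "0.3" denotes a different Double
```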

On Aug 30, 2006, at 6:04 PM, Lennart Augustsson wrote:
On Aug 30, 2006, at 14:58 , David Roundy wrote:
[...]
The trouble here is that ghci is printing more digits than it really ought to be printing.
No, I don't think it is. Ghci is printing the number that is closest of all numbers in decimal notation to the Double in question (i.e., 0.1+0.2). Printing it with fewer decimals would yield a different number if it was read back.
I always wondered why we didn't instead ask for "the number that has the fewest digits of significand which converts to the Double in question." Of course, for doubles with a single ulp of difference, that's still an awfully long decimal. I feel like I looked into this once when I was trying to understand the bignum-heavy Read instance for Double in the report, and ended up with a nasty headache and some fixed-point code which used cute hacks and seemed to work with limited testing and the vagaries of gcc as a back end. -Jan-Willem Maessen
_______________________________________________ Glasgow-haskell-users mailing list Glasgow-haskell-users@haskell.org http://www.haskell.org/mailman/listinfo/glasgow-haskell-users

On Aug 30, 2006, at 20:44 , Jan-Willem Maessen wrote:
On Aug 30, 2006, at 6:04 PM, Lennart Augustsson wrote:
On Aug 30, 2006, at 14:58 , David Roundy wrote:
[...]
The trouble here is that ghci is printing more digits than it really ought to be printing.
No, I don't think it is. Ghci is printing the number that is closest of all numbers in decimal notation to the Double in question (i.e., 0.1+0.2). Printing it with fewer decimals would yield a different number if it was read back.
I always wondered why we didn't instead ask for "the number that has the fewest digits of significand which converts to the Double in question." Of course, for doubles with a single ulp of difference, that's still an awfully long decimal.
The reading and printing of floating point numbers in Haskell is patterned after two PLDI papers (doing it in Scheme). I think the code has exactly the property you ask for, if you expect conversion from Double to decimal to be sensible.
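The property Lennart describes can be poked at with Numeric.floatToDigits, which exposes the shortest digit string (plus a decimal exponent) that uniquely identifies a given Double; a small check:

```haskell
import Numeric (floatToDigits)

-- floatToDigits base x returns the shortest list of digits (and an
-- exponent) that converts back to exactly x.
main :: IO ()
main = do
  print (floatToDigits 10 (0.1 :: Double))        -- ([1],0): one digit suffices
  print (floatToDigits 10 (0.1 + 0.2 :: Double))  -- seventeen digits are needed
```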
I feel like I looked into this once when I was trying to understand the bignum-heavy Read instance for Double in the report, and ended up with a nasty headache and some fixed-point code which used cute hacks and seemed to work with limited testing and the vagaries of gcc as a back end.
I'm sure it can be done with fewer bignums, but it's quite tricky to get right even with bignums. -- Lennart

On Wed, Aug 30, 2006 at 06:04:47PM -0400, Lennart Augustsson wrote:
On Aug 30, 2006, at 14:58 , David Roundy wrote:
The trouble here is that ghci is printing more digits than it really ought to be printing.
No, I don't think it is. Ghci is printing the number that is closest of all numbers in decimal notation to the Double in question (i.e., 0.1+0.2). Printing it with fewer decimals would yield a different number if it was read back.
Then I guess the problem is that the output of show isn't appropriate for human interaction? In many cases it's nice for (read . show) to be identity, but I don't prefer this for floating point numbers. If I want a bitwise accurate output of a Double, I'll dump a binary. When I output decimal, I generally want something friendly to the human who's reading it, which to me means outputting only significant digits (which is admittedly an ill-defined concept). -- David Roundy
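One way to get the friendlier output David wants is to format with a fixed number of digits via Numeric.showFFloat, rather than relying on show's read-back-exactly rendering. The choice of four digits here is arbitrary:

```haskell
import Numeric (showFFloat)

-- showFFloat (Just n) rounds to n digits after the decimal point.
main :: IO ()
main = do
  putStrLn (show (0.1 + 0.2 :: Double))          -- 0.30000000000000004
  putStrLn (showFFloat (Just 4) (0.1 + 0.2) "")  -- 0.3000
```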

* David Roundy:
definition of the variable and the comparison, this check may be failed. The reason is that intermediate results are commonly stored
"commonly" = "on legacy i386 platforms"
at higher than double precision, leading to non-IEEE arithmetic.
AFAIK, GCC's behavior is not in itself a contradiction to IEEE rules.
It's sad, but we're stuck with it, as I'm not aware of any compiler that is capable of generating IEEE arithmetic.
It's more a language definition issue. Ada, for instance, uses a model based on interval arithmetic, and the usual i386 extended precision arithmetic with occasional spilling still matches its requirements:

    14. For any predefined relation on operands of a floating point type T, the implementation may deliver any value (i.e., either True or False) obtained by applying the (exact) mathematical comparison to values arbitrarily chosen from the respective operand intervals.

In addition, there are C compilers which are not affected by this problem, even on the i386 architecture: some of them use SSE2 arithmetic, some run the FPU in 64-bit mode (instead of 80-bit mode, very common on Windows). Or you can spill long doubles instead of doubles where necessary, or spill-and-reload early so that there is no observable difference (I believe Intel's compiler does one of these).

Slightly tongue in cheek, I think the real problem is that your courses come in the wrong order. No one should use floating point numbers without first having a course in numerical analysis. :)
-- Lennart

On Aug 30, 2006, at 14:38 , Jamie Brandon wrote:
[...]
participants (9)

- David Roundy
- Florian Weimer
- Jamie Brandon
- Jan-Willem Maessen
- Jon Fairbairn
- Lennart Augustsson
- Neil Mitchell
- Simon Marlow
- skaller