
This is more of a haskell-café than a ghc-users question, but I'll put some thoughts here.

On 2006-08-30 at 19:38BST Jamie Brandon wrote:
> I recently defied my supervisor and used Haskell to write my coursework instead of C.
If the object of the course is to teach you (more about) C, that might not go down too well :-)
> All went well until I needed floating point and started having odd results.
My first experience with floating point numbers (some time in the middle of the last half of the last century...) was so discouraging that I decided to avoid them as much as possible. They really are very badly behaved.
> As far as I can tell it isn't substantially affecting my results but it is rather embarrassing after slagging off C so much. Here are some examples:
> *Main> 0.2 + 0.1
> 0.30000000000000004
> *Main> 0.200000000000000 + 0.100000000000000000
> 0.30000000000000004
> *Main> 0.3 + 0.1
> 0.4
> *Main> 0.2 + 0.1
> 0.30000000000000004
> *Main> it + 0.1
> 0.4
> I assume this is a result of the discrepancy between binary and decimal representations of the numbers.
That and the finiteness of the representations.
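You can see exactly which value 0.1 denotes as a Double by asking for its exact rational value (an illustrative GHCi session; toRational on a Double is exact):

*Main> toRational (0.1 :: Double)
3602879701896397 % 36028797018963968

That is the nearest representable binary fraction to 1/10, not 1/10 itself; no finite binary fraction can be exactly one tenth, just as no finite decimal fraction can be exactly one third.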
> Is there any way around this? For a start, it would be nice to have a simple way to get 0.1 + 0.2 == 0.3 = True
You can have that, like this:
*MyIdiom> 0.1 + 0.2 == 0.3
False
*MyIdiom> :m + Ratio
*MyIdiom Ratio> 0.1 + 0.2 == (0.3::Rational)
True
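Here is a sketch of the same idiom in a compiled program (the names are mine, and it assumes exactness matters more to you than speed, since Rational arithmetic is much slower than Double; Data.Ratio is the hierarchical name for the Ratio module used above):

import Data.Ratio ((%))

-- All arithmetic here is exact; 1 % 10 really is one tenth.
main :: IO ()
main = do
  let x = 1 % 10 + 2 % 10 :: Rational
  print (x == 3 % 10)              -- True, with no rounding anywhere
  print (fromRational x :: Double) -- 0.3; rounding happens only here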
but doing something like that is really the only reliable way. If you are working with floating point, equality is something you should never test for; you should check to see if the difference between the numbers is smaller than some acceptable epsilon.
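A minimal sketch of such a comparison (the name approxEq and the particular epsilon are mine; what counts as acceptable depends on the scale of your numbers):

approxEq :: Double -> Double -> Double -> Bool
approxEq eps x y = abs (x - y) < eps

*Main> approxEq 1e-9 (0.1 + 0.2) 0.3
True

For numbers far from 1 an absolute epsilon is the wrong tool, and you would scale it by the magnitude of the operands instead.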
Incidentally, if I run
#include <stdio.h>