
Hi everybody,

please have a look at the following lines of code:

x :: [Double]
x = [2,4,6,8,10]

p :: [Double]
p = [0.05, 0.2, 0.35, 0.3, 0.1]

expectation :: Double
expectation = sum $ zipWith (*) x p

variance :: Double
variance = sum $ zipWith f x p
  where f i p = (i - expectation)^2 * p

When I load this into ghci I get the following:

*Main> expectation
6.3999999999999995
*Main> expectation == 6.4
False
*Main> variance
4.24
*Main>

Why does 'expectation' get evaluated to 6.3999999999999995 instead of 6.4? This is very strange.

I have another example which is even more annoying. Have a look at the following code:

import qualified Data.Map as M

newtype Distribution a = Distribution (M.Map a Double)

isDistribution :: Distribution a -> Bool
isDistribution (Distribution d) = sum (M.elems d) == 1

uniform :: Ord a => [a] -> Distribution a
uniform xs = Distribution (M.fromList (map go xs))
  where go i = (i, 1 / fromIntegral (length xs))

When I load this into ghci I get the following:

*Probability> isDistribution $ uniform [1..6]
False
*Probability> (\(Distribution d) -> sum (M.elems d)) $ uniform [1..6]
0.9999999999999999
*Probability>

I mean ... hey, what is going on here? Is there any way of getting around this?

Cheers,
Thomas

Thomas,

The strangeness you're experiencing is normal behavior for floating-point (FP) math, which by definition doesn't obey the usual algebraic laws you'd like it to. FP is inexact, and FP operations silently round at every turn.

Comparing two FP values for equality is usually a programming error, with a few notable exceptions. Usually what you want is either to compare the difference between two FP values to see if it's within some appropriate tolerance, or to avoid FP altogether. In the latter case, rational numbers may be what you need; Haskell has native types that support exact rational numbers directly, and you can convert to floating point if needed for approximate trig functions or whatever.

Regards,
John
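To make both of those suggestions concrete, here is a small sketch; the helper name `approxEq` and the tolerance 1e-9 are my own choices for illustration, not a standard API:

```haskell
import Data.Ratio ((%))

-- Hypothetical helper: approximate equality within a tolerance.
approxEq :: Double -> Double -> Bool
approxEq a b = abs (a - b) < 1e-9

-- The same expectation, computed exactly with Rational.
exact :: Rational
exact = sum (zipWith (*) [2, 4, 6, 8, 10]
                         [1 % 20, 1 % 5, 7 % 20, 3 % 10, 1 % 10])

main :: IO ()
main = do
  -- Tolerance comparison succeeds where (==) fails on Doubles.
  print (approxEq (sum (zipWith (*) [2,4,6,8,10] [0.05,0.2,0.35,0.3,0.1])) 6.4)
  -- Rational arithmetic gives exactly 32/5, i.e. 6.4.
  print (exact == 32 % 5)
```

Both lines print True: the Double result is within 1e-9 of 6.4 even though it is not equal to it, and the Rational result is exact.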

John Dorsey wrote:
Thomas,
The strangeness you're experiencing is normal behavior for floating-point (FP) math, which by definition doesn't obey the usual algebraic laws you'd like it to.
FP is inexact. FP operations silently round at every turn. Comparison of two FP values for equality is usually a programming error, with a few notable exceptions. Usually what you want is either to compare the difference between two FP values to see if it's within some appropriate tolerance, or to avoid FP altogether. In the latter case, rational numbers may be what you need; Haskell has native types that support exact rational numbers directly, and you can convert to floating point if needed for approximate trig functions or whatever.
Regards, John
Thanks John! Changing Double to Rational in the type declarations solved the problem. Also thanks for pointing out that one should never test floating-point numbers for equality and expect the result to be trustworthy. I really should have thought about this, but I didn't.

Cheers,
Thomas
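For reference, the change Thomas describes looks roughly like this. Fractional literals are overloaded in Haskell, so the definitions stay the same and only the signatures change (the inner parameter is renamed to `q` here just to avoid shadowing):

```haskell
x :: [Rational]
x = [2, 4, 6, 8, 10]

p :: [Rational]
p = [0.05, 0.2, 0.35, 0.3, 0.1]

-- With Rational, literals like 0.05 desugar to exact fractions (1/20),
-- so the arithmetic below is exact.
expectation :: Rational
expectation = sum (zipWith (*) x p)

variance :: Rational
variance = sum (zipWith f x p)
  where f i q = (i - expectation) ^ 2 * q
```

In ghci, `expectation == 6.4` and `variance == 4.24` are now both True, because the literals on the right-hand side are also read as exact Rationals.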

On Tue, Jul 21, 2009 at 07:10:40PM -0400, Thomas Friedrich wrote:
I really should have thought about this, but I didn't.
Note that Rationals are really, really slow; it would probably be better to spend some time reading Goldberg's paper[1]. [1] http://docs.sun.com/source/806-3568/ncg_goldberg.html -- Felipe.

On Tue, Jul 21, 2009 at 08:51:05PM -0300, Felipe Lessa wrote:
On Tue, Jul 21, 2009 at 07:10:40PM -0400, Thomas Friedrich wrote:
I really should have thought about this, but I didn't.
Note that Rationals are really, really slow; it would probably be better to spend some time reading Goldberg's paper[1].
Premature optimization is the sqrt of all evil. -Brent

Excerpts from Brent Yorgey's message of Wed Jul 22 04:45:29 +0200 2009:
On Tue, Jul 21, 2009 at 08:51:05PM -0300, Felipe Lessa wrote:
On Tue, Jul 21, 2009 at 07:10:40PM -0400, Thomas Friedrich wrote:
I really should have thought about this, but I didn't.
Note that Rationals are really, really slow; it would probably be better to spend some time reading Goldberg's paper[1].
Premature optimization is the sqrt of all evil.
+1. Rationals can do their job really well, even though they are much slower than Doubles. -- Nicolas Pouillard http://nicolaspouillard.fr

On Tue, Jul 21, 2009 at 10:45:29PM -0400, Brent Yorgey wrote:
On Tue, Jul 21, 2009 at 08:51:05PM -0300, Felipe Lessa wrote:
Note that Rationals are really, really slow, probably it
Premature optimization is the sqrt of all evil.
I agree; however, statistics tend to be very math-intensive, and even in simple cases you may end up with a big numerator and a big denominator, totally thrashing your performance. He could write all his functions as taking a (Num a), which would let him choose between both, but if he wants to do that then he must learn how to handle the subtleties of Doubles. (Also, Rationals don't support a lot of useful functions, like sqrt itself or exp.) -- Felipe.
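A sketch of that polymorphic approach (the names `expectationOf` and `varianceOf` are mine, and the constraint is Fractional rather than Num, since the probability literals need it); the caller then picks Double or Rational at the use site:

```haskell
-- Statistics written against a type class, so the caller chooses
-- the numeric representation.
expectationOf :: Fractional a => [a] -> [a] -> a
expectationOf xs ps = sum (zipWith (*) xs ps)

varianceOf :: Fractional a => [a] -> [a] -> a
varianceOf xs ps = sum (zipWith f xs ps)
  where
    mu = expectationOf xs ps
    f i q = (i - mu) ^ 2 * q

main :: IO ()
main = do
  -- Fast but approximate:
  print (expectationOf [2,4,6,8,10] [0.05,0.2,0.35,0.3,0.1] :: Double)
  -- Slow but exact:
  print (expectationOf [2,4,6,8,10] [0.05,0.2,0.35,0.3,0.1] :: Rational)
```

The Double result is the familiar 6.3999999999999995, while the Rational result is exactly 32 % 5. As Felipe notes, anything needing sqrt or exp would force a Floating constraint and hence rule out Rational.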
participants (5):
- Brent Yorgey
- Felipe Lessa
- John Dorsey
- Nicolas Pouillard
- Thomas Friedrich