Indeed,
Also, floating point numbers are NOT real numbers: they are approximate points on the real line that we pretend are exact reals, but they really have a very different geometry altogether! :)
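A quick illustration (in Python here rather than Haskell, but IEEE-754 doubles behave the same everywhere): real-number addition is associative, floating point addition is not.

```python
# Real addition is associative; IEEE-754 double addition is not.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)
print(a)       # 0.6000000000000001
print(b)       # 0.6
print(a == b)  # False
```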
If you want to get extra precision in a floating point computation in a way that avoids the "discrepancy" when permuting the numbers, the compensated library by Edward Kmett
http://hackage.haskell.org/package/compensated lets you easily double or quadruple the number of bits of precision you get in sums and products, which makes a lot of these problems go away quite nicely!
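compensated is a Haskell library, but the underlying trick (an error-free transformation: Knuth's two_sum gives you the exact rounding error of each addition, which you accumulate separately) is easy to sketch in a few lines of Python. This is a minimal illustration of the mechanism, not the library's actual API:

```python
def two_sum(a, b):
    # Knuth's error-free transformation:
    # s is the rounded sum, and s + err == a + b *exactly*.
    s = a + b
    bv = s - a
    err = (a - (s - bv)) + (b - bv)
    return s, err

def compensated_sum(xs):
    # Carry the rounding errors in a second double, effectively
    # doubling the working precision of the running sum.
    s, c = 0.0, 0.0
    for x in xs:
        s, e = two_sum(s, x)
        c += e
    return s + c

# Classic stress test: the naive sum loses the two 1.0 terms entirely.
xs = [1.0, 1e100, 1.0, -1e100]
print(sum(xs))              # 0.0  (naive: both 1.0s rounded away)
print(compensated_sum(xs))  # 2.0  (compensation recovers them)
```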
Floats and Doubles are not exact numbers; don't use them when you expect things to behave "exactly". NB: even if you have *exact* numbers, the exact same precision issues will still apply to pretty much any computation that's interesting (whatever the definition of interesting is). Try defining things like \x -> sqrt x or \x -> e^x on the rational numbers! Questions of precision still creep in!
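To see that concretely: here's a small Python sketch (using exact Fraction arithmetic as a stand-in for Haskell's Rational) of Newton's method for square roots over the rationals. Every intermediate value is exact, yet sqrt 2 is irrational, so the best you can ever do is pick a tolerance and approximate — precision decisions reappear even though nothing is ever rounded:

```python
from fractions import Fraction

def rational_sqrt(x, tol):
    # Newton's method in *exact* rational arithmetic.
    # Every iterate is an exact Fraction, but for x = 2 the true
    # answer is irrational, so we must choose a stopping tolerance:
    # the "precision question" comes back as tol.
    g = Fraction(x)  # crude initial guess
    while abs(g * g - x) > tol:
        g = (g + x / g) / 2
    return g

r = rational_sqrt(Fraction(2), Fraction(1, 10**12))
# r is an exact rational whose square is within 1e-12 of 2,
# but r * r is never *equal* to 2.
print(r * r == 2)  # False
```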
So I guess, phrased another way: a lot of the confusion / challenge in writing floating point programs lies in understanding the representation, its limits, and the ways in which it implicitly manages that precision-tracking bookkeeping for you.
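One way to poke at those representational limits directly (again a Python sketch; the constants are properties of IEEE-754 doubles, not of Python): machine epsilon is the gap between 1.0 and the next representable double, and updates smaller than half that gap near 1.0 get silently rounded away.

```python
import sys

eps = sys.float_info.epsilon   # gap between 1.0 and the next double, 2**-52
print(1.0 + eps == 1.0)        # False: eps is resolvable next to 1.0
print(1.0 + eps / 2 == 1.0)    # True: the update is silently rounded away
```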
Computational mathematics is a really rich and complicated topic! There are many more (valid and different) constructive models of the various classes of numbers than you'd expect!