
On 10 September 2013 19:06, Brandon Allbery wrote:
> But practically, floating point is the main place where things go south; and
> yes, it's generally hated in the functional programming community: no matter
> what you do, real numbers are going to be a pain, and the standard compromise
> known as floating point is especially frustrating because its behavior can't
> be captured in a simple functional description. But floating point is what
> CPUs use and have specific support for.
Initial disclaimer: I'm a total Haskell and functional programming noob. However, I'm well versed in a number of other languages and in floating point, and spend a substantial amount of time focussing on numerical accuracy. What do you mean when you say that floating point can't be captured in a simple functional description? Leaving aside FPU inconsistencies, hardware failure etc., and assuming some sensible arithmetic (e.g. IEEE-754), the result of floatadd(A, B) is fully defined for all inputs A and B. How is that inconsistent with functional programming? I'd like to understand more about this, as when I'm better at Haskell I expect to use it for at least some floating point computation.

From my perspective, getting floating point computation right is hard. Proving (or at least reasoning) that an algorithm based on floating point computation works, or has some accuracy guarantee, is non-trivial and requires knowledge about the exact sequence of elementary floating point operations.
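As a quick illustration of what I mean by "fully defined" (a small Python sketch; floatadd above is just my shorthand for hardware float addition, and I'm assuming IEEE-754 binary64 doubles, which CPython uses on mainstream platforms):

    # Same inputs in, same bits out: under IEEE-754 binary64, addition
    # is a pure function of its arguments, right down to the bit pattern.
    a, b = 0.1, 0.2
    print((a + b).hex())                 # 0x1.3333333333334p-2, every time
    print(a + b == 0.30000000000000004)  # True: fully defined, just not the real sum

The result differs from the real-number sum 0.3, but it is the same result on every run and every conforming machine, which is all a functional description needs.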
I sympathise with the OP in that I consider any compiler "optimisation" that changes the underlying FPU operations in such a way that the end result differs to be a bug rather than an optimisation (with the exception of constant folding). But as I said, I'm a total Haskell noob, so I don't yet understand what compiler optimisations in Haskell really mean or, more importantly, how easily they can be controlled.

To the OP: if I understand what you're doing correctly, then float (or any binary-radix floating point type) is absolutely the wrong thing to use. If you want to convert from an exact decimal-string representation of a number to an exact fixed-point integer representation, you should use exact computation in all stages, or at least use a decimal floating point format (I don't know yet if Haskell has one). Any conversion to binary float as an intermediate format risks a (very likely) inexact conversion, since a binary floating point format can only represent rational numbers whose denominator (in lowest terms) is a power of 2. In your case 0.12 is the rational number 3/25, and 25 is not a power of 2. Consequently, when you attempt to store 0.12 in a binary float you will get a float whose exact value is not 0.12.

I don't know how to conveniently do this in Haskell, but in Python we can easily find the exact decimal representation of the nearest 64-bit floating point value to 0.12:

    $ py -3.3 -c 'from decimal import Decimal; print(Decimal(0.12))'
    0.11999999999999999555910790149937383830547332763671875

Multiplying the binary float by 100 using the FPU appears to give the value you wanted, because the FPU rounds the result:

    $ py -3.3 -c 'from decimal import Decimal; print(Decimal(0.12*100))'
    12

However, the true result of multiplying the nearest binary float to 0.12 by 100 is still slightly less than 12. So if the combined multiply-by-100-and-truncate operation is performed exactly, without that intermediate rounding, you should get 11, not 12 (there is a demonstration in the postscript below).

The appropriate parsing algorithm is straightforward, although I don't yet know how to do it in Haskell: split the string on the '.' and parse both sides as integers, rejecting the string if there are not exactly two digits on the RHS. Then multiply the LHS by 100 and add the RHS. (It's slightly more complex if you need to handle negative numbers as well.) A sketch of this is also in the postscript.

It sounds like the Rational type already deals with all this parsing complexity for you, in which case you should use that.

Oscar
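P.S. To demonstrate the truncation point concretely, here is a sketch using Python's fractions module (Fraction(0.12) converts the stored double to an exact rational, so the multiply and truncate happen with no further rounding):

    $ py -3.3 -c 'from fractions import Fraction; import math; print(math.trunc(Fraction(0.12) * 100))'
    11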
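And here is a rough Python sketch of the parsing algorithm described above (parse_cents is just a name I've made up, and the error handling is minimal):

    # Parse a decimal string with exactly two digits after the point into
    # an integer count of hundredths, with no floating point involved.
    def parse_cents(s):
        whole, _, frac = s.partition('.')
        if len(frac) != 2 or not frac.isdigit():
            raise ValueError('expected exactly two digits after the point: %r' % s)
        sign = -1 if whole.startswith('-') else 1
        return int(whole) * 100 + sign * int(frac)

    print(parse_cents('0.12'))   # 12
    print(parse_cents('3.07'))   # 307
    print(parse_cents('-1.50'))  # -150

Note that Python's Fraction constructor also accepts decimal strings exactly -- Fraction('0.12') is exactly 3/25 -- which is the same convenience I understand Haskell's Rational parsing to provide.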