
I am getting a bit worried about the usability of Haskell for numerical work. The Haskell 98 report states that a floating literal stands for an application of fromRational to the literal's value as a Rational; in other words, the literal is first converted to a Rational. I can't find anything in the report that specifies how this conversion is to be performed, or to what precision it must be correct. Since Rationals are built from arbitrary-precision Integers, the conversion could be made exact, but at least Hugs doesn't do that.

By experimenting with some particular cases, I found that Hugs's internal Rational representation is even less accurate than the precision of Double permits, which means that it is impossible to specify literals to the full precision of Double. GHC behaved fine in my tests.

But what can I safely assume from a Haskell implementation?

Konrad.
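For anyone who wants to test their own implementation, here is a small sketch (the helper names are my own) that compares a literal against its explicit fromRational desugaring, and uses toRational to inspect the exact binary value actually stored:

```haskell
import Data.Ratio (numerator, denominator, (%))

-- Under Haskell 98, a literal like 2.3 :: Double is sugar for
-- fromRational (23 % 10).  An implementation whose
-- literal-to-Rational conversion is exact should make these equal:
desugarOK :: Bool
desugarOK = (2.3 :: Double) == fromRational (23 % 10)

-- A literal that needs the full 53-bit mantissa of Double.
-- If the intermediate Rational is less precise than Double,
-- this comparison can fail:
fullPrecOK :: Bool
fullPrecOK = (3.141592653589793 :: Double) == pi

-- toRational recovers the exact binary value stored for a Double,
-- which shows how the decimal literal was rounded:
storedValue :: Rational
storedValue = toRational (0.1 :: Double)

main :: IO ()
main = do
  print desugarOK
  print fullPrecOK
  print (numerator storedValue, denominator storedValue)
```

On GHC both checks print True; an implementation that loses precision in the literal conversion would show False for the second check.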