
2010/1/13 Lauri Pesonen
I provided a Java solution to a problem of returning the first digit of an integer on StackOverflow, and someone wondered if there were any floating-point problems with the solution, so I thought I'd implement the algorithm in Haskell and run QuickCheck on it. Everything works fine in GHCi, but if I compile the code and run the test, it fails with -1000, 1000, and -1000000. Any ideas why?
In any case it seems that the commenter was right and there are some subtle problems with my solution. I'd just like to know more details...
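For reference, the approach I'm testing looks roughly like this (an illustrative sketch, not the exact StackOverflow code; the names firstDigit and prop_firstDigit are mine):

    import Test.QuickCheck (quickCheck)

    -- First digit via floor (logBase 10 n): divide by 10^(digits - 1).
    -- The logBase call is the step vulnerable to floating-point rounding.
    firstDigit :: Int -> Int
    firstDigit 0 = 0
    firstDigit n = m `div` 10 ^ (floor (logBase 10 (fromIntegral m) :: Double) :: Int)
      where m = abs n

    -- Reference definition: the first character of the decimal
    -- representation. (minBound is ignored here, since abs overflows on it.)
    prop_firstDigit :: Int -> Bool
    prop_firstDigit n = firstDigit n == read [head (show (abs n))]

    main :: IO ()
    main = quickCheck prop_firstDigit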
OK, I've figured out why the compiled version fails the check:

    main = putStrLn $ show $ logBase 10 1000

when compiled returns 2.9999999999999996 on my system (WinXP 32-bit), which then gets truncated to 2. So there seems to be a precision difference between the compiled and the interpreted code. 'logBase 10 999', on the other hand, produces exactly the same value in both cases. I expect this is a known issue with floats. Sorry for the noise.

-- Lauri
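P.S. For the curious, a float-free variant that sidesteps the issue entirely by using integer division instead of logBase (again just a sketch of mine, not the original answer):

    -- First digit by repeated integer division; no floating point,
    -- so no rounding differences between GHCi and compiled code.
    -- (As above, the abs-overflows-on-minBound edge case is ignored.)
    firstDigitSafe :: Int -> Int
    firstDigitSafe n
      | m < 10    = m
      | otherwise = firstDigitSafe (m `div` 10)
      where m = abs n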