
I provided a Java solution on StackOverflow to the problem of returning the first digit of an integer, and someone wondered whether there were any floating point problems with the solution, so I thought I'd implement the algorithm in Haskell and run QuickCheck on it. Everything works fine in GHCi, but if I compile the code and run the test, it fails on -1000, 1000 and -1000000. Any ideas why? In any case it seems the commenter was right and there are some subtle problems with my solution; I'd just like to know more details...

import Data.Char
import Test.QuickCheck

-- my solution
getFirstDigit :: Int -> Int
getFirstDigit 0 = 0
getFirstDigit x =
  let x'     = abs x
      digits = floor $ logBase 10 $ fromIntegral x'
  in x' `div` (floor $ 10 ** fromIntegral digits)

-- two reference implementations that agree with each other
getFirstDigitRef1 :: Int -> Int
getFirstDigitRef1 x = digitToInt $ head $ show $ abs x

getFirstDigitRef2 :: Int -> Int
getFirstDigitRef2 x
  | x < 0     = getFirstDigitRef2 (-x)
  | x < 10    = x
  | otherwise = getFirstDigitRef2 $ x `div` 10

myTest x = getFirstDigit x == getFirstDigitRef1 x
--myTest x = getFirstDigitRef2 x == getFirstDigitRef1 x

myCheck n = check (defaultConfig { configMaxTest = n }) myTest

main = myCheck 1000000000

-- ! Lauri
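
PS. In case it helps to narrow things down, here is a small debugging sketch I can compile alongside the code above; it just prints the intermediate Double values that getFirstDigit relies on, for the inputs the compiled test reported (debugFirstDigit and debugMain are ad hoc names of mine, not from any library):

-- Sketch: show the intermediate floating point values getFirstDigit uses,
-- for the inputs the compiled QuickCheck run reported. Assumes it is
-- compiled together with the definitions above.
debugFirstDigit :: Int -> (Int, Double, Int, Double, Int)
debugFirstDigit x =
  let x'     = abs x
      lg     = logBase 10 (fromIntegral x')  -- might land just below an exact integer
      digits = floor lg :: Int
      pow    = 10 ** fromIntegral digits     -- might not be an exact power of ten either
  in (x', lg, digits, pow, x' `div` floor pow)

debugMain :: IO ()
debugMain = mapM_ (print . debugFirstDigit) [-1000, 1000, -1000000]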