
It's a question of what the right default is - safety or performance. In the case of floating point numbers, I'm leaning towards performance.
I quite agree. Currently the standard prelude has the default definition:

    ...
    compare x y
      | x == y    = EQ
      | x <= y    = LT
      | otherwise = GT

I'd suggest

    compare x y
      | x == y    = EQ
      | x <= y    = LT
      | x >= y    = GT
      | otherwise = error "violation of the law of the excluded middle"

or even the most symmetric

    compare x y
      | x < y     = LT
      | x == y    = EQ
      | x > y     = GT
      | otherwise = error "no consistent ordering"

It is not clear to me that this would cause a measurable performance hit in the case of floating point numbers. We're talking about at most two extra instructions: a compare and a conditional branch. The operands are already in registers, and scheduling considerations make it quite likely that the extra instructions could be put into otherwise unoccupied slots. For datatypes like Int, Integer, or Char, where the compiler should know that the law of the excluded middle holds, there should be zero overhead.
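To make the NaN problem concrete, here is a minimal standalone sketch. defaultCompare is a hypothetical name for a copy of the Prelude default quoted above, pulled out so its behaviour with NaN can be observed directly:

    -- defaultCompare mirrors the Prelude's default method shown above;
    -- the name is made up for this example.
    defaultCompare :: Double -> Double -> Ordering
    defaultCompare x y
      | x == y    = EQ
      | x <= y    = LT
      | otherwise = GT

    main :: IO ()
    main = do
      let nan = 0 / 0 :: Double
      -- Every comparison involving NaN is False, so both guards fail and
      -- the catch-all reports GT, whichever way round the arguments go.
      print (defaultCompare nan 1.0)   -- GT
      print (defaultCompare 1.0 nan)   -- GT
      print (defaultCompare nan nan)   -- GT, even though nan == nan is False

With either of the suggested alternatives, the final case would instead raise an error rather than silently claiming GT in both directions.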
I was thinking of this:
data T = T Double deriving ( Eq, Ord )
... GHC basically produces
    instance Ord T where
      compare (T x) (T y) = compare x y
      t < u = compare t u == LT
That is indeed what it does. Which is a plain old bug, since it leads to inconsistent behaviour between wrapped and unwrapped values:

    *Main> T (0/0) == T (0/0)
    False
    *Main> T (0/0) < T (0/0)
    False
    *Main> T (0/0) > T (0/0)
    True
    *Main> (0/0) > (0/0)
    False

GHC should instead basically produce

    ...
    (T x) < (T y) = x < y

etc.

--Barak.
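For reference, a hand-written instance along those lines might look like the sketch below. Only Eq is derived here; the Ord methods all delegate to the underlying Double, so a wrapped NaN orders exactly like a bare one. This is just an illustration, not necessarily the code GHC's deriving mechanism would emit:

    data T = T Double deriving (Eq)

    -- Each method delegates to Double directly instead of going through
    -- compare, so wrapped and unwrapped values agree on every comparison.
    instance Ord T where
      T x <  T y = x <  y
      T x <= T y = x <= y
      T x >  T y = x >  y
      T x >= T y = x >= y
      compare (T x) (T y) = compare x y

    main :: IO ()
    main = do
      let nan = 0 / 0 :: Double
      print (T nan > T nan)   -- False, agreeing with (0/0) > (0/0)
      print (nan > nan)       -- False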