
Dylan Thurston wrote:
I've started writing up a more concrete proposal for what I'd like the Prelude to look like in terms of numeric classes. Please find it attached below. It's still a draft and rather incomplete, but please let me know any comments, questions, or suggestions.
This is a good basis for discussion, and it helps to see something concrete. Here are a few comments:
Thus these laws should be interpreted as guidelines rather than absolute rules. In particular, the compiler is not allowed to use them. Unless stated otherwise, default definitions should also be taken as laws.
Including laws was discussed very early in the development of the language, but was rejected. IIRC Miranda had them. The argument against laws was that their presence might mislead users into assuming that they hold, yet, if they were not enforceable, they might not hold, and that could have serious consequences. Also, some laws do not hold in domains with bottom, e.g. a + (negate a) === 0 is only true if a is not bottom.
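To make the bottom case concrete, here is a tiny sketch against the standard Prelude (the name example is only for this message):

example :: Integer
example = let a = undefined :: Integer
          in  a + negate a

Since (+) on Integer is strict, evaluating example diverges (under GHC it raises the Prelude.undefined exception) rather than yielding 0.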
class (Additive a) => Num a where
    (*)         :: a -> a -> a
    one         :: a
    fromInteger :: Integer -> a

    -- Minimal definition: (*), one
    fromInteger 0         = zero
    fromInteger n | n < 0 = negate (fromInteger (-n))
    fromInteger n | n > 0 = reduceRepeated (+) one n
This definition requires both Eq and Ord!!! As does this one:
class (Num a, Additive b) => Powerful a b where
    (^) :: a -> b -> a

instance (Num a) => Powerful a (Positive Integer) where
    a ^ 0 = one
    a ^ n = reduceRepeated (*) a n

instance (Fractional a) => Powerful a Integer where
    a ^ n | n < 0 = recip (a ^ (negate n))
    a ^ n         = a ^ (positive n)
and several others further down.
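To spell out where Eq and Ord enter: a literal pattern such as 0 is matched with (==), and the guards use (<), so the quoted default for fromInteger amounts to the following (rewritten with explicit guards purely for illustration):

fromInteger n
  | n == 0 = zero                        -- the literal pattern 0 becomes an Eq test
  | n <  0 = negate (fromInteger (-n))   -- the guards need Ord
  | n >  0 = reduceRepeated (+) one n

The literal pattern and guards in the Powerful instances above elaborate in the same way.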
(4) In some cases, the hierarchy is not fine-grained enough: operations that are often defined independently are lumped together. For instance, in a financial application one might want a type "Dollar", or in a graphics application one might want a type "Vector". It is reasonable to add two Vectors or Dollars, but not, in general, reasonable to multiply them. But the programmer is currently forced to define a method for (*) when she defines a method for (+).
Why do you stop at allowing addition on Dollars and not include multiplication by a scalar? Division is also readily defined on Dollar values, with a scalar result, but this, too, is not available in the proposal. Having Units as types, with the idea of preventing adding Apples to Oranges, or Dollars to Roubles, is a venerable idea, but it is not in widespread use in actual programming languages. Why not? Vectors, too, can be multiplied, producing both scalar and vector products.

It seems that you are content with going only as far as the proposal permits, even though you cannot define, even within the revised Class system, all the common and useful operations on these types. This is the same situation as with Haskell as it stands. The question is whether the (IMHO) marginal increase in flexibility is worth the cost. This is not an argument against separating Additive from Num, but it does weaken the argument for doing it.

--brian
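For what it is worth, here is roughly what the Dollar example looks like under the proposed split, assuming Additive provides (+), zero and negate as the quoted defaults suggest; the Dollar and scale names and the Rational representation are made up for illustration:

newtype Dollar = Dollar Rational

instance Additive Dollar where
    Dollar x + Dollar y = Dollar (x + y)
    zero                = Dollar 0
    negate (Dollar x)   = Dollar (negate x)

-- Scalar multiplication and Dollar/Dollar division, the operations
-- discussed above, still have to live outside the class hierarchy:
scale :: Rational -> Dollar -> Dollar
scale k (Dollar x) = Dollar (k * x)

ratio :: Dollar -> Dollar -> Rational
ratio (Dollar x) (Dollar y) = x / y

So a Dollar can be added and negated without acquiring a (*), but the scalar multiplication and division mentioned above still end up as ordinary functions outside the hierarchy.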