
Hans Aberg wrote:
This problem is not caused by defining f+g, but by defining numerals as constants.
Yup. So the current (Num thing) is basically:

  1. The type thing is a ring
  2. ... with signs and absolute values
  3. ... along with a natural homomorphism from Z into thing
  4. ... and with Eq and Show.

If one wanted to be perfectly formally correct, then each of 2-4 could be split out of Num. For example, 2 doesn't make sense for polynomials or n-by-n square matrices, 4 doesn't make sense for functions, and 3 doesn't make sense for square matrices of dimension greater than 1. And this quirk about 2(x+y) can be seen as an argument for not wanting it in the case of functions, either (see the sketch below). I'm not sure I find the argument terribly compelling, but it is there anyway.

On the other hand, I have a hard enough time already explaining Num, Fractional, Floating, RealFrac, ... to new Haskell programmers. I'm not sure it's an advantage if someone must learn the meaning of an additive commutative semigroup in order to understand the type signatures inferred from code that does basic math in Haskell. At least in the U.S., very few computer science students take an algebra course before getting their undergraduate degrees.
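To make the quirk concrete, here is a rough sketch (mine, not from the thread) of the pointwise Num instance for functions that is usually meant in this discussion; the LANGUAGE pragma is included only in case your compiler asks for it:

    {-# LANGUAGE FlexibleInstances #-}

    -- Functions into a Num type, with all operations defined pointwise.
    instance Num b => Num (a -> b) where
      f + g         = \x -> f x + g x
      f - g         = \x -> f x - g x
      f * g         = \x -> f x * g x
      negate f      = negate . f
      abs f         = abs . f
      signum f      = signum . f
      fromInteger n = const (fromInteger n)  -- point 3: numerals become constant functions

With this instance in scope, the literal 2 can itself have a function type, so 2 (x + y) type-checks as function application and evaluates to plain 2 rather than to 2 * (x + y); that is the quirk mentioned above.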
In mathematical terms, the set of functions is a (math) module ("generalized vector space"), not a ring.
Well, I agree that functions form a module, but it's hard to agree that they do not also form a ring. After all, with pointwise operations (and a ring as the codomain), it's not too difficult to verify the ring axioms.
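For what it's worth, with the pointwise instance sketched above, each ring axiom for functions reduces, argument by argument, to the corresponding axiom in the codomain. A tiny concrete check of distributivity (my own example; the name checkDistrib is just made up):

    -- Distributivity holds pointwise:
    --   (f * (g + h)) x = f x * (g x + h x)
    --                   = f x * g x + f x * h x
    --                   = (f * g + f * h) x
    checkDistrib :: Bool
    checkDistrib = (f * (g + h)) 3 == (f * g + f * h) 3   -- True
      where
        f, g, h :: Integer -> Integer
        f = (+ 1)
        g = (* 2)
        h = subtract 5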