
On Tue, 12 Sep 2006, Aaron Denney wrote:
On 2006-09-12, Bryan Burgers wrote:
And another problem I can see is that, for example, the Integers are a group over addition, and also a group over multiplication;
Not over multiplication, no, because there is no inverse.
I know of no good way to express that a given data type obeys the same interface two (or more) ways. Some OO languages try to handle the case of an abstract base class being inherited twice through two different intermediate classes, but none of them do it well.
Some examples: Cardinals form a lattice with respect to (min, max) and also with respect to (gcd, lcm). Sequences form a ring if multiplication is defined as 1) element-wise multiplication or 2) convolution.

We could certainly go a similar way and define newtypes in order to provide different sets of operations for the same data structure (small sketches follow at the end of this message). One issue is that we have some traditional arithmetical signs and want to use them in the traditional way, but there is no simple correspondence between signs and laws: both "+" and "*" fulfil monoid or group laws, depending on the type. If we had a sign for "the group operation", say ".", we would have to write "'.' of the additive group of rationals" instead of "+" and "'.' of the multiplicative group of rationals" instead of "*". I don't know how to handle this in a programming language.

We also know that floating-point numbers violate most of the basic laws. But wrappers for other languages violate basic laws, too. E.g. if the Haskell expression (a+b) is mapped to an expression of a foreign language, say (add a b), then (b+a) will be mapped to (add b a), which is a different expression. That is, this instance of Haskell's (+) is not commutative.

The mathematical concept of calling a tuple of a set of objects and some operations a group, a ring or whatever is not exactly mapped to Haskell's type classes. It is even used laxly in mathematics itself; one often says "the set of integers is a ring".
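
To make the newtype idea concrete, here is a minimal sketch (not from the discussion above); the class Monoid' and the wrappers Additive and Multiplicative are invented names for illustration, and modern Data.Monoid provides essentially the same trick with its Sum and Product wrappers:

class Monoid' m where
  unit :: m
  op   :: m -> m -> m        -- the single "group operation" sign

-- two different monoid structures on the same underlying type
newtype Additive       = Additive       Integer deriving Show
newtype Multiplicative = Multiplicative Integer deriving Show

instance Monoid' Additive where
  unit = Additive 0
  op (Additive x) (Additive y) = Additive (x + y)

instance Monoid' Multiplicative where
  unit = Multiplicative 1
  op (Multiplicative x) (Multiplicative y) = Multiplicative (x * y)

The price is explicit wrapping and unwrapping: one writes op (Additive 2) (Additive 3) rather than 2 + 3.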
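
The same technique would apply to the two ring structures on sequences mentioned above. A rough sketch (again with invented names, and showing only the two multiplications):

-- element-wise product of two sequences
mulElementwise :: Num a => [a] -> [a] -> [a]
mulElementwise = zipWith (*)

-- convolution product (like multiplying polynomials given as
-- coefficient lists)
mulConvolution :: Num a => [a] -> [a] -> [a]
mulConvolution []     _  = []
mulConvolution (x:xs) ys = addLong (map (x *) ys) (0 : mulConvolution xs ys)
  where
    addLong (a:as) (b:bs) = a + b : addLong as bs
    addLong as     []     = as
    addLong []     bs     = bs

Wrapping lists in, say, newtype Elementwise a = Elementwise [a] and newtype Convolution a = Convolution [a] would let each wrapper carry its own ring instance.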
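
As for floating-point numbers, a tiny demonstration (invented here) that addition on Double is not even associative:

assocHolds :: Double -> Double -> Double -> Bool
assocHolds a b c = (a + b) + c == a + (b + c)

-- assocHolds 0.1 0.2 0.3  evaluates to False,
-- because (0.1 + 0.2) + 0.3 and 0.1 + (0.2 + 0.3) round differently.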
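
Finally, the remark about wrappers can be made concrete with a toy term type (names invented here): if (+) merely builds a term of the target language, then a+b and b+a are different terms, so this (+) is not commutative as an operation on terms.

data Expr = Var String | Add Expr Expr
  deriving (Eq, Show)

instance Num Expr where
  (+)         = Add
  fromInteger = Var . show
  -- the remaining methods are irrelevant for this illustration
  (*)         = error "not needed here"
  negate      = error "not needed here"
  abs         = error "not needed here"
  signum      = error "not needed here"

-- Var "a" + Var "b"  ==>  Add (Var "a") (Var "b")
-- Var "b" + Var "a"  ==>  Add (Var "b") (Var "a")
-- Both denote the same number, but as terms they are not equal.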