
On 13/02/06, Jeff.Harper@handheld.com wrote:
Hi,
I've posted a couple of messages to the Haskell Cafe in the last few months. I'm new to Haskell, but I've set out to implement my own vectors, matrices, complex numbers, etc.
One goal I have is to overload operators to work with my new types. The pursuit of this goal has pushed me to learn a lot about the Haskell type system, and when I get stuck from time to time, the kind folks on this list have pointed me in the right direction.
I'm stuck now. One thing I want to avoid is adding new multiplication operators to handle multiplication of dissimilar types. For instance, I'd like to be able to have an expression like k * m where k is a Double and m is a Matrix. This doesn't work with the Prelude's (*) operator, because the Prelude's (*) has the signature:
(*) :: (Num a) => a -> a -> a.
To get around this, I wrote my own version of a Multiply class that allows dissimilar types to be multiplied. You can see my Multiply class in the module at the end of this email.
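Roughly, the class has this shape (a sketch only; the reworked module in the reply below uses the same form):

-----
{-# OPTIONS -fglasgow-exts #-}  -- multi-parameter type classes and functional dependencies
module Multiply_sketch where

import Prelude hiding ((*))

-- A multiplication class over two possibly different argument types.
-- The functional dependency says the two argument types determine the
-- result type.
class Multiply a b c | a b -> c where
    (*) :: a -> b -> c

infixl 7 *
-----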
At the bottom of the module, I've attempted to implement multiplication of the forms:
    scalar * matrix
    matrix * scalar
    matrix * matrix
The problem is that when I try to do matrix * matrix in the interpreter, I get an error message from Glasgow:
*My_matrix> m1 * m2
<interactive>:1:3:
    Overlapping instances for Multiply (Matrix Double) (Matrix Double) (Matrix c)
      arising from use of `*' at <interactive>:1:3
    Matching instances:
      My_matrix.hs:63:0:
        instance (Multiply a b c, Add c c c, Num a, Num b, Num c) =>
                 Multiply (Matrix a) (Matrix b) (Matrix c)
      My_matrix.hs:57:0:
        instance (Multiply a b c, Num a, Num b, Num c) =>
                 Multiply (Matrix a) b (Matrix c)
      My_matrix.hs:51:0:
        instance (Multiply a b c, Num a, Num b, Num c) =>
                 Multiply a (Matrix b) (Matrix c)
    In the definition of `it': it = m1 * m2
I don't understand how m1 * m2 can match the scalar multiplication instances. For instance, the scalar * matrix instance has signature:
instance (Multiply a b c, Num a, Num b, Num c) => Multiply a (Matrix b) (Matrix c) where
In my expression, m1 would correspond to the type variable 'a'. But 'a' is constrained to be a Num, and I never made my Matrix type an instance of Num.
Whether or not you actually make it an instance of Num doesn't matter: instance selection looks only at the instance heads, and the contexts are checked afterwards. Nothing prevents a future module from declaring instance Num (Matrix Double), so the choice can't depend on that constraint.
Is there a workaround for this? In my first implementation, I did not have the Num constraints in the matrix Multiply instances. I added the Num constraints specifically to remove the ambiguity of the overlapping instances. Why didn't this work?
Probably the simplest thing to do would be to require that when multiplying a scalar with a vector or a matrix, the base field/ring is fixed. This is slightly less general, but automatic conversions between scalar numeric types are a total mess no matter which way you slice it. This way, you'd have:

-----
{-# OPTIONS -fglasgow-exts #-}
module My_matrix where

import qualified Prelude as P
import Prelude hiding ((*), (+), (-), negate)
import Data.List (transpose)

default ()

class Add a b c | a b -> c where
    (+) :: a -> b -> c

class Multiply a b c | a b -> c where
    (*) :: a -> b -> c

infixl 7 *
infixl 6 +

data Matrix a = Matrix [[a]] deriving Show

instance (Num a) => Multiply a (Matrix a) (Matrix a) where
    x * (Matrix yss) = Matrix (map (map (x P.*)) yss)

instance (Num a) => Multiply (Matrix a) a (Matrix a) where
    (Matrix xss) * y = Matrix (map (map (P.* y)) xss)

instance (Num a) => Multiply (Matrix a) (Matrix a) (Matrix a) where
    (Matrix xss) * (Matrix yss) =
        Matrix [[sum (zipWith (P.*) xs ys) | ys <- transpose yss] | xs <- xss]
-----

However, this route still leads to troublesome issues. If you want to extend (*) to scalar-by-scalar multiplication, you'll need an instance like:

    instance (Num a) => Multiply a a a where
        x * y = x P.* y

but this is not normally allowed, as there are no type constructors around at all to help it decide that it should choose this instance. If you turn on -fallow-undecidable-instances, it will compile, but you'll get overlapping instance problems again when trying to multiply matrices. (Remember, just because there's no instance of Num right now doesn't mean that there couldn't be one later on, and the decision as to which code to use has to be made within the module.)

So what's the solution? Either use different multiplication operator symbols for matrix and scalar multiplication, or explicitly wrap scalars in a newtype which makes the instances apply. For example, you could write:

    newtype Scalar a = Scalar a deriving (Eq, Ord, Num)  -- etc.

and use the type (Scalar a) where you used a before in the instance declarations (a rough sketch of this follows below).

Personally, I like the approach which uses different operator symbols. I used the convention that placing a dot next to an operator makes it act on a matrix on that side, e.g.

    (*.) :: (Num a) => a -> Matrix a -> Matrix a

It's a little less pretty than you might want, but it completely avoids all the difficult ambiguities that come from things being so highly overloaded. (This is also sketched below.)

Another option is to define an instance of Num for matrices, where fromInteger gives the identity matrix scalar-multiplied by the given integer. You still need something special for matrix-by-vector applications, but that's another matter.

In mathematics, we overload multiplication to death because, as human readers, we know what is meant from context. When programming, that context is harder to provide (a lot of the more general possibilities for what you're trying to do involve adding lots of extra type signatures on specific elements). Also, even when it can be provided, practical issues can get in the way: Haskell doesn't have a way to say or know that (Matrix a) will never be an instance of Num, so there's no way to make it clear which instance of Multiply is to be used when compiling the module.

For things like multiplying Integers with Doubles, it's also important to consider loss of information. Neither type is representable as a subset of the other, so it's hard to say what to coerce to what.
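To make the newtype idea concrete, here's a rough, untested sketch (the module name is just for illustration; it reworks the instances above with Scalar in place of the bare scalar type):

-----
{-# OPTIONS -fglasgow-exts #-}
module My_matrix_scalar where

import qualified Prelude as P
import Prelude hiding ((*))
import Data.List (transpose)

class Multiply a b c | a b -> c where
    (*) :: a -> b -> c

infixl 7 *

data Matrix a = Matrix [[a]] deriving Show

-- Wrapping scalars in a newtype puts a concrete constructor in every
-- instance head, so the heads can no longer overlap.  (With newtype
-- deriving you could also derive Num for Scalar, as mentioned above.)
newtype Scalar a = Scalar a deriving (Eq, Ord, Show)

instance (Num a) => Multiply (Scalar a) (Matrix a) (Matrix a) where
    (Scalar x) * (Matrix yss) = Matrix (map (map (x P.*)) yss)

instance (Num a) => Multiply (Matrix a) (Scalar a) (Matrix a) where
    (Matrix xss) * (Scalar y) = Matrix (map (map (P.* y)) xss)

instance (Num a) => Multiply (Matrix a) (Matrix a) (Matrix a) where
    (Matrix xss) * (Matrix yss) =
        Matrix [[sum (zipWith (P.*) xs ys) | ys <- transpose yss] | xs <- xss]

-- Scalar-by-scalar multiplication is now unproblematic:
instance (Num a) => Multiply (Scalar a) (Scalar a) (Scalar a) where
    (Scalar x) * (Scalar y) = Scalar (x P.* y)
-----

The price is that you write Scalar 2 * m rather than 2 * m, but every instance head now mentions a constructor, so the scalar-by-scalar instance needs no extra flags.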
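And the separate-operator convention might look something like this (another untested sketch; the module and operator names are just the convention I described):

-----
module Matrix_ops where

import Data.List (transpose)

data Matrix a = Matrix [[a]] deriving Show

infixl 7 *., .*, .*.

-- A dot next to the operator marks the side on which a matrix goes.
(*.) :: (Num a) => a -> Matrix a -> Matrix a
x *. (Matrix yss) = Matrix (map (map (x *)) yss)

(.*) :: (Num a) => Matrix a -> a -> Matrix a
(Matrix xss) .* y = Matrix (map (map (* y)) xss)

(.*.) :: (Num a) => Matrix a -> Matrix a -> Matrix a
(Matrix xss) .*. (Matrix yss) =
    Matrix [[sum (zipWith (*) xs ys) | ys <- transpose yss] | xs <- xss]
-----

Nothing in this sketch needs any extensions at all, which is part of the appeal.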
In any case, the functional dependency helps: you can specify a convention so that the user only has to give the input types, and the result type is determined from them. But it's still not so pretty.

- Cale