
On Thu, Jun 09, 2005 at 01:35:26PM +0200, Henning Thielemann wrote:
> On Thu, 9 Jun 2005, Keean Schupke wrote:
>> - would people want a library like MatLab where matrices are instances of Num/Fractional/Floating and can be used in normal math equations...
> I'm pretty unhappy with all these automatic behaviours in MatLab, which prevent fast detection of mistakes: e.g. treating vectors as column or row matrices, silently collapsing singleton dimensions, and automatically extending singletons to vectors with equal components.
The thing is, these behaviours are exactly what make matlab so easy to use. Would you really want separate multiplication operators for every combination of scalars, vectors and matrices? It may be possible to come up with a better scheme, but personally I'd be very happy with a library that just does what matlab does, i.e. defines (*), (/), (+), (-), (.*), (./), and treats every object as a matrix.
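As a minimal sketch of what the matlab-style interface might look like in Haskell (the type `M` and the operator names are hypothetical, not from any existing library): (*) from Num is the true matrix product, and (.*) is element-wise, as in matlab.

```haskell
import Data.List (transpose)

-- Hypothetical sketch: treating everything as a matrix, matlab style.
newtype M = M [[Double]] deriving (Eq, Show)

instance Num M where
  M a + M b = M (zipWith (zipWith (+)) a b)
  M a - M b = M (zipWith (zipWith (-)) a b)
  -- (*) is the matrix product, as in matlab
  M a * M b = M [ [ sum (zipWith (*) r c) | c <- transpose b ] | r <- a ]
  negate (M a) = M (map (map negate) a)
  abs    (M a) = M (map (map abs) a)
  signum (M a) = M (map (map signum) a)
  -- numeric literals become 1x1 "scalar" matrices, as in matlab;
  -- note this only type-checks, it doesn't make 2*m dimension-correct
  fromInteger n = M [[fromInteger n]]

-- matlab's .* : element-wise multiplication
(.*) :: M -> M -> M
M a .* M b = M (zipWith (zipWith (*)) a b)
```

The awkward `fromInteger` case already hints at the dimension questions Henning raises below: a 1x1 matrix is not dimension-compatible with an arbitrary matrix under (*), which is exactly the kind of mismatch matlab papers over with automatic extension.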
> What is the natural definition of multiplication on matrices: the element-wise product (which is commutative, like multiplication for other Num instances) or the matrix product? I vote for separate functions or infix operators for matrix multiplication, linear equation system solvers, matrix-vector multiplication, and matrix and vector scaling. I'd also like an interface to LAPACK for the floating-point types, because you can hardly compete with it using algorithms you write in one afternoon.
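The alternative Henning argues for might be sketched like this (again hypothetical names: `Vector`, `Matrix`, `mulMV`, `mulMM`, `scaleM` are illustrative, not an existing API): distinct types for vectors and matrices, and a separate, honestly-named function for each operation, so the type checker catches the mistakes matlab silently absorbs.

```haskell
import Data.List (transpose)

-- Hypothetical type-safe interface: vectors and matrices are distinct types.
newtype Vector = Vector [Double]   deriving (Eq, Show)
newtype Matrix = Matrix [[Double]] deriving (Eq, Show)

-- matrix-vector multiplication gets its own name...
mulMV :: Matrix -> Vector -> Vector
mulMV (Matrix rows) (Vector v) = Vector [ sum (zipWith (*) r v) | r <- rows ]

-- ...as does matrix-matrix multiplication...
mulMM :: Matrix -> Matrix -> Matrix
mulMM (Matrix a) (Matrix b) =
  Matrix [ [ sum (zipWith (*) r c) | c <- transpose b ] | r <- a ]

-- ...and scaling, rather than overloading (*) for everything.
scaleM :: Double -> Matrix -> Matrix
scaleM s (Matrix rows) = Matrix (map (map (s *)) rows)
```

With this style, multiplying a row vector where a column vector is expected is a type error rather than a silently reshaped computation.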
I'd tend to say that matlab is a working system with a number of satisfied users. Inventing a new interface for linear algebra is an interesting problem, but I'd be satisfied with a "matlab with a nice programming language": a tool for those who just want to use matrices, rather than for those studying matrices.
>> - operations like cos/sin/log can be applied to a matrix (and apply to each element, like in MatLab) ...
> MatLab must provide these automatic liftings because it doesn't have proper higher-order functions. I think the Haskell way is to provide a 'map' for matrices. Otherwise the question is: what is the most natural 'exp' on matrices, the element-wise application of 'exp' to each element, or the matrix exponential?
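The 'map' Henning suggests falls out for free if the matrix type is made polymorphic and given a Functor instance, as in this sketch (the `Matrix` type here is hypothetical):

```haskell
-- A 'map' for matrices: the Haskell answer to MatLab's automatic lifting.
newtype Matrix a = Matrix [[a]] deriving (Eq, Show)

instance Functor Matrix where
  fmap f (Matrix rows) = Matrix (map (map f) rows)
```

Then `fmap exp m` applies exp to every element explicitly, leaving the name 'exp' itself free to mean the matrix exponential if one is ever provided.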
The utility of these functions would depend on whether one makes matrices instances of Floating and *doesn't* implement a separate operator for the scalar-matrix operations. If that is the case, I think we'd want these functions defined, so that you could write something like

    let a = sqrt 2.0 * b

On the other hand, this argument only says that I'd like these to be defined for 1x1 matrices, and doesn't specify what they should do to bigger matrices. Of course, the transcendental functions are only defined (except in an element-wise sense) for square matrices... But it would be an interesting trick to allow exponentiation of a matrix (presumably by diagonalizing, exponentiating the eigenvalues, and then applying the unitary transformation back again). And we could afford that, since element-wise exp could always be done with (map exp a)... :)

-- 
David Roundy
http://www.darcs.net
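The difference between the two meanings of exp can be made concrete in the one case where the "diagonalize, exponentiate the eigenvalues" recipe is trivial: a diagonal matrix, which is its own diagonalization. The function names below (`expmDiag`, `elementwiseExp`) are illustrative, not from any library:

```haskell
-- Matrix exponential of diag(ds): exponentiate the diagonal entries
-- (the eigenvalues) and leave the off-diagonal entries zero.
expmDiag :: [Double] -> [[Double]]
expmDiag ds =
  [ [ if i == j then exp d else 0 | j <- [0 .. n - 1] ]
  | (i, d) <- zip [0 ..] ds ]
  where n = length ds

-- The element-wise version, David's (map exp a), on a full matrix.
elementwiseExp :: [[Double]] -> [[Double]]
elementwiseExp = map (map exp)
```

On diag(a, b) the two genuinely disagree: elementwiseExp turns the off-diagonal zeros into exp 0 = 1, while expmDiag keeps them zero, so overloading a single 'exp' really would be ambiguous.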