
On Thu, Jun 30, 2005 at 02:20:16PM +0200, Henning Thielemann wrote:
On Thu, 30 Jun 2005, David Roundy wrote:
If we support matrix-matrix multiplication, we already automatically support matrix-column-vector and row-vector-matrix multiplication, whether or not we actually intend to, unless you want to forbid the use of 1xn or nx1 matrices. So (provided we *do* want to support matrix-matrix multiplication, and *I* certainly would like that) there is no question whether we'll have octavish/matlabish functionality in terms of row and column vectors--we already have this behavior with a single multiplication representing a single operation.
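To make this point concrete, here is a minimal sketch assuming a naive list-of-lists matrix representation; the names Matrix, matMul and column are hypothetical, not taken from any existing library:

  import Data.List (transpose)

  -- A matrix as a list of rows (naive representation, for illustration only).
  type Matrix = [[Double]]

  -- The single matrix-matrix multiplication.
  matMul :: Matrix -> Matrix -> Matrix
  matMul a b = [ [ sum (zipWith (*) r c) | c <- transpose b ] | r <- a ]

  -- A column vector is just an n x 1 matrix ...
  column :: [Double] -> Matrix
  column = map (:[])

  -- ... so matrix-times-vector needs no extra operation:
  --   matMul [[1,2],[3,4]] (column [5,6])  ==  [[17],[39]]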
Of course, I only wanted a separate vector type (which also means a separate matrix-vector multiplication), and I argued against further distinguishing row and column vectors.
If you want to introduce a more general set of tensor datatypes,
Did someone request tensor support? At least not me. I used the tensor example to show that MatLab lumps them all together with matrices and vectors, and I wanted to give an idea of how to do better by separating them.
I disagree. Vectors *are* tensors... and once you've added rank-1 tensors, you'll also be wanting to add support for rank-0 tensors. These are nice objects (and would be nice to support eventually), but they add considerable complexity. There's only one matrix-matrix multiplication, but there are two vector-vector multiplications (the inner and outer products) and two vector-matrix multiplications (acting from the left and from the right). All four of these multiplications can be expressed in terms of the single matrix-matrix multiplication. I agree that if we introduce a vector type there's no reason to introduce separate row and column vectors, but I think we can get by quite well for many purposes without introducing the vector type, which adds so much complexity. I guess I've left out the ".*" pointwise multiply, probably because I'm a physicist and am tensor-biased, and it's not a tensor operation in the physics sense (it doesn't behave right under unitary transformations of its arguments).
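As a rough illustration of how all four products reduce to the one matrix-matrix multiplication, continuing the hypothetical matMul/column sketch above:

  -- A row vector is a 1 x n matrix.
  row :: [Double] -> Matrix
  row v = [v]

  -- inner product, (1 x n) * (n x 1), giving a 1 x 1 matrix:
  --   matMul (row [1,2]) (column [3,4])    ==  [[11]]
  -- outer product, (n x 1) * (1 x n), giving an n x n matrix:
  --   matMul (column [1,2]) (row [3,4])    ==  [[3,4],[6,8]]
  -- matrix acting on a column vector from the left:
  --   matMul [[1,2],[3,4]] (column [5,6])  ==  [[17],[39]]
  -- row vector acting on a matrix from the right:
  --   matMul (row [5,6]) [[1,2],[3,4]]     ==  [[23,34]]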
In short, especially since the folks doing the work (not me) seem to want plain old octave-style matrix operations, it makes sense to actually do that. *Then* someone can implement an ultra-uber-tensor library on top of that, if they like. And I would be interested in a nice tensor library... it's just that matrices need to be the starting point,
Matrices _and_ vectors! Because matrices represent operators on vectors, and it is certainly not sensible to support only the operators but not the objects they act on ... Adding a vector type via a library built on top of a matrix library seems to me like taking the first step after the second one.
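For contrast, a hedged sketch of the separate-vector-type design being argued for here (again with hypothetical names, not any particular proposal's API): the vector gets its own type, and matrix-vector multiplication becomes a distinct operation rather than a special case of matMul on n x 1 matrices:

  newtype Vector = Vector [Double]

  -- Matrix-vector multiplication as its own operation, reusing the
  -- list-of-rows Matrix type from the sketch above.
  mulMV :: Matrix -> Vector -> Vector
  mulMV m (Vector v) = Vector [ sum (zipWith (*) r v) | r <- m ]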
No, matrices operate on matrices and return matrices. This is the wonderful thing about matrix arithmetic, why it's unique, and why I'd like to have a library that supports matrix arithmetic. Tensors are also nice, but are much more complicated, since most tensor multiplications change the rank of the tensor--and that complication is precisely what you want. You're taking an interpretation of what you mean by the math and wanting to encode it in the type system. Matrix arithmetic is entirely self-contained, and I'd like to have *that* reflected in the type system.

-- 
David Roundy
http://www.darcs.net
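As a final illustration of the closure argument, here is a sketch in modern Haskell (written long after this thread; the Tensor type and both functions are hypothetical) of why matrix arithmetic stays within one type while tensor products do not:

  {-# LANGUAGE DataKinds, KindSignatures, TypeOperators #-}
  import GHC.TypeLits

  -- A rank-indexed tensor; the representation is elided, only the rank matters here.
  data Tensor (rank :: Nat) = Tensor

  -- Matrix multiplication is closed: rank 2 in, rank 2 out.
  matMulT :: Tensor 2 -> Tensor 2 -> Tensor 2
  matMulT _ _ = Tensor

  -- The tensor (outer) product changes rank, so a faithful tensor library
  -- has to track it in the types -- exactly the extra complexity at issue.
  tensorProduct :: Tensor r1 -> Tensor r2 -> Tensor (r1 + r2)
  tensorProduct _ _ = Tensor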