
On Tue, 5 Jul 2005, Michael Walter wrote:

> On 7/5/05, Henning Thielemann wrote:
>
>> The example, again: If you write some common expression like
>>
>>   transpose x * a * x
>>
>> then both the human reader and the compiler don't know whether x is a
>> "true" matrix or whether it simulates a column or a row vector.
>
> But since a, by definition (question), is a 1xN matrix, what's the
> ambiguity exactly?
If you want this definition then you must also interpret any 1x1 matrix as a real. That's what I wanted to show with my example; that's the way MatLab works, and why it sucks. Multiplication of reals is commutative, reals are naturally totally ordered, and so on; matrices (including 1x1 matrices) don't have these properties.

Since it is sensible to work with one-dimensional vectors, it is also sensible to work with 1x1 matrices. But 1x1 matrices of this kind are certainly different from 1x1 matrices produced by transpose x * a * x. A 1x1 matrix in MatLab can thus mean a scalar, a row 1-vector, a column 1-vector or a 1x1 matrix (if you accept the differences between these terms). Alternatively you could convert the expected 1x1 matrix into a real, but that conversion must be checked at run time.

Theoretically, vectors are objects which can be scaled and added, and matrices represent linear operators on vectors. (Operators, including linear operators, may form a vector space themselves, but that's a different issue.) Why should we convert each vector into the representation of some linear operator before doing linear algebra, and why should we convert this representation of a linear operator back to a vector after the linear algebra has happened?

Would you load an audio signal into a 1xN matrix or into an N-vector? Loading it into a 1xN matrix forces you to check dynamically and repeatedly whether the matrix really has only one row. Btw. I would also load an image into a vector, but a vector with a two-dimensional index set.
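To make the ambiguity concrete, here is a small sketch (not from the original post, and using Python nested lists rather than MatLab) of how the MatLab-style "everything is a matrix" view turns the quadratic form transpose x * a * x into a 1x1 matrix rather than a scalar, while a representation that keeps vectors as a separate kind of object yields a real directly:

```python
def matmul(m1, m2):
    """Multiply two matrices given as lists of rows."""
    return [[sum(m1[i][k] * m2[k][j] for k in range(len(m2)))
             for j in range(len(m2[0]))]
            for i in range(len(m1))]

def transpose(m):
    """Swap rows and columns of a matrix."""
    return [list(col) for col in zip(*m)]

# MatLab-style: the "vector" x is really a 3x1 matrix.
x = [[1.0], [2.0], [3.0]]
a = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 2.0]]

q = matmul(matmul(transpose(x), a), x)
print(q)   # [[28.0]] -- a 1x1 matrix, not the scalar 28.0

# With x as a genuine vector (a flat list), a quadratic-form
# function can return a plain real, so no 1x1 matrix ever appears.
# (quadratic_form is a hypothetical helper, not MatLab or post API.)
def quadratic_form(a, xs):
    return sum(xs[i] * a[i][j] * xs[j]
               for i in range(len(xs))
               for j in range(len(xs)))

print(quadratic_form(a, [1.0, 2.0, 3.0]))   # 28.0 -- a scalar
```

In the first version the caller must unwrap q[0][0] (and, to be safe, check at run time that q really is 1x1); in the second, the types already guarantee a scalar result.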