
Xiao-Yong Jin wrote:
Tomas Andersson writes:

You can never go wrong with a good old-fashioned hand-written tail recursion when you're in doubt. It is pretty much the closest thing to a for-loop there is in Haskell, should be easy to grok for imperative programmers, and usually produces really fast code.
sum vect = let d = dim vect in sum' (d - 1) 0
  where
    sum' 0     s = s + (vect @> 0)
    sum' index s = sum' (index - 1) (s + (vect @> index))
Do I need the strict version of sum'? That is, should I put in something like

    sum' a b | a `seq` b `seq` False = undefined

or will GHC optimize it automatically? I never know when this kind of optimization is necessary.
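For reference, here is a minimal sketch of a strict-accumulator variant, assuming the hmatrix API used in this thread (dim and (@>) from Data.Packed.Vector). Forcing the running sum with ($!) keeps a chain of thunks from building up even when GHC's strictness analysis does not kick in:

    import Data.Packed.Vector (Vector, dim, (@>))

    -- Strict-accumulator variant of the hand-written sum (a sketch).
    -- ($!) forces the accumulator before each recursive call.
    sumStrict :: Vector Double -> Double
    sumStrict vect = go (dim vect - 1) 0
      where
        go 0     s = s + (vect @> 0)
        go index s = go (index - 1) $! s + (vect @> index)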
I don't know the hmatrix API very well, but if there is a function for computing the inner product between two vectors, you could always do something like the following pseudocode:
sum v = innerProduct v <1,1,1,1,1,1>
I doubt the efficiency here, though. If v is a very large vector, I guess the allocation time for that intermediate vector of ones is not small. But that is just my guess.
My experience is that Haskell allocation time is very fast and usually negligible in most non-trivial matrix computations.

A good thing about

    sum v = constant 1 (dim v) <.> v

is that the constant vector is created efficiently internally (not from an intermediate Haskell list), and the inner product will be computed by whatever optimized BLAS version is available on your machine.

You can also write simple definitions like the following one, which computes the average of the rows of a matrix as a simple vector-matrix product:

    mean m = w <> m
      where
        w = constant k r
        r = rows m
        k = 1 / fromIntegral r

I prefer high-level "index free" matrix operations to C-like code. The definitions are clearer, have fewer bugs, and are also more efficient if you use optimized numerical libraries. In any case, many algorithms can be solved by the recursive approach described by Tomas.

Alberto
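A small self-contained usage sketch of both definitions, assuming the older hmatrix API (constant, dim, rows, <.> and <> from Numeric.LinearAlgebra); the names sumV and mean are just for illustration:

    import Numeric.LinearAlgebra

    -- Index-free sum of a vector: dot product with a vector of ones.
    sumV :: Vector Double -> Double
    sumV v = constant 1 (dim v) <.> v

    -- Average of the rows of a matrix as a vector-matrix product.
    mean :: Matrix Double -> Vector Double
    mean m = w <> m
      where
        w = constant k r
        r = rows m
        k = 1 / fromIntegral r

    main :: IO ()
    main = do
        print (sumV (fromList [1,2,3,4]))            -- 10.0
        print (mean (fromLists [[1,2],[3,4],[5,6]])) -- the vector [3.0, 4.0]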