Patrick Perry's BLAS package

Hello all,

I have just noticed that yesterday this fantastic package has been uploaded to hackage:

http://hackage.haskell.org/cgi-bin/hackage-scripts/package/blas-0.4

We finally have a high quality library for numeric linear algebra. This is very good news for the Haskell community.

Patrick, many thanks for your excellent work. Do you have similar plans for LAPACK?

Alberto

Wow, thanks for noticing, Alberto! For anyone interested, I put up a formal announcement describing the bindings a little bit here: http://quantile95.com/

I just registered the domain yesterday, so it may take a few days to resolve the DNS magic. Here's the text of the announcement:

I'm really happy that people seem to be interested in the library. Alberto, in particular, is the primary author of hmatrix, another Haskell linear algebra library (which I stole a few ideas from), so if he endorses it, that means a lot to me.

So, Yet Another Linear Algebra Library? I've already mentioned hmatrix. There's also another one called HBlas. Why would anyone want a third? Here are my reasons:

* Support for both immutable and mutable types. Haskell tries to make you use immutable types as much as possible, and indeed there is a very good reason for this, but sometimes you have a 100MB matrix, and it just isn't very practical to make a copy of it every time you modify it. hmatrix only supports immutable types, and HBlas only supports mutable ones. I wanted both.

* Access control via phantom types. When you have immutable and mutable types, it's very annoying to have separate functions for each type. Do I want to have to call "numCols" for immutable matrices and "getNumCols" for mutable ones, even though both functions are pure, and both do exactly the same thing? No. If I want to add an immutable matrix to a mutable one, do I want to first call "unsafeThaw" on the immutable one to cast it to be mutable? No. With the phantom type trick, you can get around this insanity. Jane Street Capital has a very good description of how this works. (A small toy sketch of the idea appears at the end of this message.)

* Phantom types for matrix and vector shapes. This is a trick I learned from darcs. It means that the compiler can catch many dimension-mismatch mistakes. So, for instance, a function like

    foo :: (BLAS1 e) => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    foo a b i = let x = row b i in a <*> x

will not type-check. ("<*>" is the function to multiply a matrix by a vector. Everything is ok if you replace "row" by "col".) This feature has caught a few bugs in my code.

* Taking the conjugate transpose ("herm") of a matrix is an O(1) operation. This is similar to hmatrix, where taking the transpose is O(1). As BLAS and LAPACK (mostly) support this, it makes no sense to copy a matrix just to work with the conjugate transpose. Why conjugate transpose instead of just transpose? Because the former is a far more common operation. This is why the "'" operator in MATLAB is conjugate transpose. The drawback for this feature is that BLAS and LAPACK do not support it everywhere. In particular, QR decomposition with pivoting is going to be a huge pain in the ass to support for herm-ed matrices.

* Support for triangular and hermitian views of matrices. This is a feature of BLAS that no one seems to support (not even MATLAB). In addition to the "Matrix" type, there are "Tri Matrix" and "Herm Matrix" types that only refer to the upper- or lower-triangular part of the matrix.

Hopefully the features above are compelling enough to make people want to use the library. These bindings have been a lot of work. To arrive at the feature list above, I've already gone through a few iterations of dramatic re-writes (hence the version number). Of course, I always welcome suggestions for how to make it better.

What's next? In the immediate future, I plan to add banded matrices.
I've already written a good chunk of code for this, but it isn't very well tested, so I decided to leave it out of the release. I'm also going to add permutation matrices. I don't have plans to add support for packed triangular matrices, but if someone else wanted to do that, I would be happy to include it. The same goes for symmetric complex matrices.

LAPACK support is on the horizon, but that may take awhile. Also, I probably won't do more than SVD, QR, and Cholesky, since those are all I need. Expect a preliminary announcement by the end of the summer.

This work would not have been possible without looking at the other excellent linear algebra libraries out there. In particular the GNU Scientific Library was the basis for much of the design. I also drew inspiration from hmatrix and the Haskell array libraries.

Please let me know if you have any success in using the library, and if you have any suggestions for how to make it better.

Patrick

On Jun 6, 2008, at 5:36 AM, Alberto Ruiz wrote:
Hello all,
I have just noticed that yesterday this fantastic package has been uploaded to hackage:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/blas-0.4
We finally have a high quality library for numeric linear algebra. This is very good news for the Haskell community.
Patrick, many thanks for your excellent work. Do you have similar plans for LAPACK?
Alberto
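
A minimal toy sketch of the phantom-type access-control idea mentioned above. The names and the list-backed representation are made up for illustration only; this is not the blas package's actual API:

    {-# LANGUAGE EmptyDataDecls #-}
    -- Toy illustration of phantom-type access control; not the blas package's API.
    import Data.IORef

    data Imm   -- phantom tag for the immutable flavour
    data Mut   -- phantom tag for the mutable flavour

    -- The tag 't' lives only in the type; the runtime representation is shared.
    data Vec t e = Vec Int (IORef [e])

    -- Pure queries work for either flavour, so there is no numCols/getNumCols split.
    dim :: Vec t e -> Int
    dim (Vec n _) = n

    -- Writes are only accepted on the mutable flavour ...
    setAll :: e -> Vec Mut e -> IO ()
    setAll x (Vec _ r) = modifyIORef r (map (const x))

    -- ... and read-only arguments are polymorphic in the tag, so adding an
    -- immutable vector into a mutable one needs no unsafeThaw.
    addInto :: Num e => Vec t e -> Vec Mut e -> IO ()
    addInto (Vec _ src) (Vec _ dst) = do
        xs <- readIORef src
        modifyIORef dst (zipWith (+) xs)

The real library applies the same idea to its vector and matrix types, so a single set of pure functions covers both the immutable and the mutable variants.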

(Sorry, Patrick. I forgot to CC haskell-cafe.)
Salute! Excellent!
Patrick Perry
Wow, thanks for noticing, Alberto! For anyone interested, I put up a formal announcement describing the bindings a little bit here:
I just registered the domain yesterday, so it may take a few days to resolve the DNS magic. Here's the text of the announcement:
The magic is working, now.
I’m really happy that people seem to be interested in the library. Alberto, in particular, is the primary author of hmatrix, another haskell linear algebra library (which I stole a few ideas from), so if he endorses it, that means a lot to me.
So, Yet Another Linear Algebra Library? I’ve already mentioned hmatrix. There’s also another one called HBlas. Why would anyone want a third? Here are my reasons:
* Support for both immutable and mutable types. Haskell tries to make you use immutable types as much as possible, and indeed there is a very good reason for this, but sometimes you have a 100MB matrix, and it just isn’t very practical to make a copy of it every time you modify it. hmatrix only supports immutable types, and HBlas only supports mutable ones. I wanted both.
I didn't use hmatrix a lot, because I wrote some STUArray code and wasn't sure how to fit that in. However, I just noticed that there is a bunch of ST code growing under Data.Packed in hmatrix. I guess things are going to change soon. And perhaps with your work, Alberto won't need to reinvent the wheel anymore.
* Access control via phantom types. When you have immutable and mutable types, it's very annoying to have separate functions for each type. Do I want to have to call "numCols" for immutable matrices and "getNumCols" for mutable ones, even though both functions are pure, and both do exactly the same thing? No. If I want to add an immutable matrix to a mutable one, do I want to first call "unsafeThaw" on the immutable one to cast it to be mutable? No. With the phantom type trick, you can get around this insanity. Jane Street Capital has a very good description of how this works.
Lovely!
* Phantom types for matrix and vector shapes. This is a trick I learned from darcs. It means that the compiler can catch many dimension-mismatch mistakes. So, for instance, a function like

    foo :: (BLAS1 e) => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    foo a b i = let x = row b i in a <*> x

will not type-check. ("<*>" is the function to multiply a matrix by a vector. Everything is ok if you replace "row" by "col".) This feature has caught a few bugs in my code.
If I understand this correctly, the compiler can catch dimension mismatches so that using `col' will result in a compilation error when m and k are different, is it so?
LAPACK support is on the horizon, but that may take awhile. Also, I probably won’t do more than SVD, QR, and Cholesky, since those are all I need. Expect a preliminary announcement by the end of the summer.
This work would not have been possible without looking at the other excellent linear algebra libraries out there. In particular the GNU Scientific Library was the basis for much of the design. I also drew inspiration from hmatrix and the haskell array libraries.
Please let me know if you have any success in using the library, and if you have any suggestions for how to make it better.
I haven't looked through the code yet, but it looks like there is a certain level of similarity between blas and hmatrix. Is it possible for these two libraries to cooperate well with each other? (I didn't look at HBlas, so I can't say much about that.)

Apart from some warnings, the library compiles fine on my system. But there is a minor issue with the library it links against when running `./Setup test'. I need to use `-lcblas' instead of `-lblas' to get it to link against the correct libraries. I don't know about other people's systems, but on mine, Gentoo Linux, the BLAS library is provided by ATLAS, and libblas.so is a Fortran library while libcblas.so is for C.

All in all, good job. Thanks.

Xiao-Yong
--
c/* __o/* <\ * (__ */\ <

a function like

    foo :: (BLAS1 e) => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    foo a b i = let x = row b i in a <*> x

will not type-check. ("<*>" is the function to multiply a matrix by a vector. Everything is ok if you replace "row" by "col".) This feature has caught a few bugs in my code.
If I understand this correctly, the compiler can catch dimension mismatches so that using `col' will result in a compilation error when m and k are different, is it so?
Yes, the compiler infers the type of the expression to be

    Matrix (m,n) e -> Matrix (l,m) e -> Int -> Vector n e

which is different from the declared type. The error will be something like "Expected type Vector m e, but got type Vector a1 e instead".
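
A small self-contained toy version of the shape-phantom idea, for anyone who wants to experiment with it. The list-backed types and the `mXv` operator below are illustrative stand-ins, not the blas package's real representation; `mXv` plays the role of the `<*>` discussed above:

    -- Toy phantom-shape types; list-backed purely for illustration.
    newtype Vector n e  = Vector [e]
    newtype Matrix mn e = Matrix [[e]]

    -- Stand-ins for the library's row, col and matrix-vector multiply.
    row :: Matrix (m,n) e -> Int -> Vector n e
    row (Matrix xs) i = Vector (xs !! i)

    col :: Matrix (m,n) e -> Int -> Vector m e
    col (Matrix xs) j = Vector (map (!! j) xs)

    mXv :: Num e => Matrix (m,n) e -> Vector n e -> Vector m e
    mXv (Matrix xs) (Vector v) = Vector [sum (zipWith (*) r v) | r <- xs]

    -- Type-checks: a column of b has length n (its number of rows),
    -- which matches a's column dimension.
    fooOk :: Num e => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    fooOk a b i = a `mXv` col b i

    -- Rejected by the compiler: 'row b i' is a Vector k e, not a Vector n e.
    -- fooBad :: Num e => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    -- fooBad a b i = a `mXv` row b i

Uncommenting fooBad produces a shape-mismatch type error of the kind quoted above.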
I haven't looked through the code yet, but it looks like there is a certain level of similarity between blas and hmatrix. Is it possible for these two libraries to cooperate well with each other? (I didn't look at HBlas, so I can't say much about that.)
This is more work than you might think. The data structures are different, many of the function names are different, and the module namespaces overlap. I wouldn't recommend any switch from hmatrix to blas right now unless you really need mutable types-- hmatrix has a lot more linear algebra functionality.
Apart from some warnings, the library compiles fine on my system. But there is a minor issue with the library it links against when running `./Setup test'. I need to use `-lcblas' instead of `-lblas' to get it to link against the correct libraries. I don't know about other people's systems, but on mine, Gentoo Linux, the BLAS library is provided by ATLAS, and libblas.so is a Fortran library while libcblas.so is for C.
I don't know of a good way to get around this. It seems like every system has a different convention for the location and name of the cblas libraries. So, everyone has to edit the "extra-libraries" and the "extra-lib-dirs" field in the blas.cabal file. Does anyone know of a better way of doing this? Patrick

Patrick Perry wrote:
Xiao-Yong Jin wrote:
Apart from some warnings, the library compiles fine on my system. But there is a minor issue with the library it links against when running `./Setup test'. I need to use `-lcblas' instead of `-lblas' to get it to link against the correct libraries. I don't know about other people's systems, but on mine, Gentoo Linux, the BLAS library is provided by ATLAS, and libblas.so is a Fortran library while libcblas.so is for C.
I don't know of a good way to get around this. It seems like every system has a different convention for the location and name of the cblas libraries. So, everyone has to edit the "extra-libraries" and the "extra-lib-dirs" field in the blas.cabal file. Does anyone know of a better way of doing this?
A possible solution is using flags in the cabal configuration file. For instance, I have this in hmatrix.cabal:

    if flag(mkl)
        if arch(x86_64)
            extra-libraries: gsl mkl_lapack mkl_intel_lp64 mkl_sequential mkl_core
        else
            extra-libraries: gsl mkl_lapack mkl_intel mkl_sequential mkl_core
    else
        extra-libraries: gsl blas lapack

so if I want to link with Intel's MKL optimized blas/lapack instead of ATLAS I simply add the -fmkl flag:

    runhaskell Setup.lhs configure -fmkl

etc., or even

    cabal install hmatrix -fmkl

Other flags can be added to support different distributions. We could have something like

    cabal install package -ffedora

or

    cabal install package -fcblas

etc.

Alberto

Xiao-Yong Jin wrote:
Salute! Excellent!
Patrick Perry
writes:

* Support for both immutable and mutable types. Haskell tries to make you use immutable types as much as possible, and indeed there is a very good reason for this, but sometimes you have a 100MB matrix, and it just isn't very practical to make a copy of it every time you modify it. hmatrix only supports immutable types, and HBlas only supports mutable ones. I wanted both.
I didn't use hmatrix a lot, because I wrote some STUArray code and wasn't sure how to fit that in. However, I just noticed that there is a bunch of ST code growing under Data.Packed in hmatrix. I guess things are going to change soon. And perhaps with your work, Alberto won't need to reinvent the wheel anymore.
Sure, in the future the internal modules of hmatrix (if not all of them) can be replaced by Patrick's blas and the future lapack. And several alternative high level interfaces may coexist as thin layers on the underlying computational engine, with different goals: from very simple interfaces for casual usage to advanced ones for more complex applications.
a function like

    foo :: (BLAS1 e) => Matrix (m,n) e -> Matrix (n,k) e -> Int -> Vector m e
    foo a b i = let x = row b i in a <*> x

will not type-check. ("<*>" is the function to multiply a matrix by a vector. Everything is ok if you replace "row" by "col".) This feature has caught a few bugs in my code.
If I understand this correctly, the compiler can catch dimension mismatches so that using `col' will result in a compilation error when m and k are different, is it so?
Yes, the compiler infers the type of the expression to be

    Matrix (m,n) e -> Matrix (l,m) e -> Int -> Vector n e

which is different from the declared type. The error will be something like "Expected type Vector m e, but got type Vector a1 e instead".
And if I understand it correctly, usage is also very practical. You can define something like

    listVector 3 [1,2,3::Double]

and the dimension type is "free". But if you give a signature you get static dimension checking:

    listVector 3 [1,2,3] :: Vector TypeDim3 Double

I like this approach very much. Which types do you recommend for the dimensions? User-friendly static dimension checking must be available in any serious Haskell linear algebra library. Frederik Eaton also did very interesting work on this in his index-aware library.
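
For what it's worth, here is a tiny toy version of the signature-pinning idea. The names below (TypeDim3, listVector, Vector) are illustrative stand-ins, not necessarily the library's own definitions:

    {-# LANGUAGE EmptyDataDecls #-}
    -- Toy illustration only; the real library's dimension handling may differ.
    data TypeDim3   -- an empty type used purely as a "dimension 3" tag

    newtype Vector n e = Vector [e]

    -- toy stand-in for the listVector discussed above; 'n' is a phantom parameter
    listVector :: Int -> [e] -> Vector n e
    listVector _ xs = Vector xs

    -- with no signature the dimension type stays free (vFree :: Vector n Double)
    vFree = listVector 3 [1,2,3 :: Double]

    -- a signature pins the dimension, and later uses must agree on the tag
    vFixed :: Vector TypeDim3 Double
    vFixed = listVector 3 [1,2,3]

    -- for example, addition only type-checks for vectors carrying the same tag
    addV :: Num e => Vector n e -> Vector n e -> Vector n e
    addV (Vector xs) (Vector ys) = Vector (zipWith (+) xs ys)

Declaring one empty data type per dimension is the simplest option; a more systematic encoding of type-level sizes avoids writing a new tag for every size you need.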
I haven't looked through the code yet, but it looks like there is a certain level of similarity between blas and hmatrix. Is it possible for these two libraries to cooperate well with each other? (I didn't look at HBlas, so I can't say much about that.)
This is more work than you might think. The data structures are different, many of the function names are different, and the module namespaces overlap. I wouldn't recommend any switch from hmatrix to blas right now unless you really need mutable types-- hmatrix has a lot more linear algebra functionality.
Apart from some warnings, the library compiles fine on my system. But there is a minor issue with the library it links against when running `./Setup test'. I need to use `-lcblas' instead of `-lblas' to get it to link against the correct libraries. I don't know about other people's systems, but on mine, Gentoo Linux, the BLAS library is provided by ATLAS, and libblas.so is a Fortran library while libcblas.so is for C.
I don't know of a good way to get around this. It seems like every system has a different convention for the location and name of the cblas libraries. So, everyone has to edit the "extra-libraries" and the "extra-lib-dirs" field in the blas.cabal file. Does anyone know of a better way of doing this?
(sorry if you receive this again, a previous message seems to be lost)

A possible solution is using flags in the cabal configuration file. For instance, I have this in hmatrix.cabal:

    if flag(mkl)
        if arch(x86_64)
            extra-libraries: gsl mkl_lapack mkl_intel_lp64 mkl_sequential mkl_core
        else
            extra-libraries: gsl mkl_lapack mkl_intel mkl_sequential mkl_core
    else
        extra-libraries: gsl blas lapack

so if I want to link with Intel's MKL optimized blas/lapack instead of ATLAS I simply add the -fmkl flag:

    runhaskell Setup.lhs configure -fmkl

etc., or even

    cabal install hmatrix -fmkl

Other flags can be added to support different distributions. We could have something like

    cabal install package -ffedora

or

    cabal install package -fcblas

etc.

Alberto

2008/6/6 Patrick Perry
Apart from some warnings, the library compiles fine on my system. But there is a minor issue with the library it links against when running `./Setup test'. I need to use `-lcblas' instead of `-lblas' to get it to link against the correct libraries. I don't know about other people's systems, but on mine, Gentoo Linux, the BLAS library is provided by ATLAS, and libblas.so is a Fortran library while libcblas.so is for C.
I don't know of a good way to get around this. It seems like every system has a different convention for the location and name of the cblas libraries. So, everyone has to edit the "extra-libraries" and the "extra-lib-dirs" field in the blas.cabal file. Does anyone know of a better way of doing this?
My preference is to use an autoconf script to solve that problem. ("build-type: Configure" in the cabal file.) For example, the editline and readline C libraries require termcap, which may be linked using one of -lcurses, -ltermcap, etc. The Haskell packages have a configure script which checks which of curses/termcap is available and outputs an editline.buildinfo file with the extra-libraries field filled in. That file is then used automatically by cabal in the build and install phases. See for example:

http://code.haskell.org/editline/editline.cabal
http://code.haskell.org/editline/configure.ac
http://code.haskell.org/editline/editline.buildinfo.in

There's also more information in the Cabal manual:

http://haskell.org/cabal/release/latest/doc/users-guide/x30.html#system-depe...

Best,
-Judah

Judah Jacobson wrote:
My preference is to use an autoconf script to solve that problem. ("build-type: Configure" in the cabal file.)
That approach would not work well for BLAS. The various BLAS libraries have profoundly different performance characteristics, and you wouldn't want to get the wrong one for your system, if you had both installed.

"Bryan O'Sullivan"
Judah Jacobson wrote:
My preference is to use an autoconf script to solve that problem. ("build-type: Configure" in the cabal file.)
That approach would not work well for BLAS. The various BLAS libraries have profoundly different performance characteristics, and you wouldn't want to get the wrong one for your system, if you had both installed.
Is the blas library linked statically? It looks to me that the binary is dynamically linked.

    linux-vdso.so.1 => (0x00007fff003fe000)
    libgsl.so.0 => /usr/lib/libgsl.so.0 (0x00002af4aa8cc000)
    libblas.so.0 => /usr/lib/libblas.so.0 (0x00002af4aac91000)
    liblapack.so.0 => /usr/lib/liblapack.so.0 (0x00002af4aaeb0000)
    libutil.so.1 => /lib/libutil.so.1 (0x00002af4ab612000)
    libdl.so.2 => /lib/libdl.so.2 (0x00002af4ab815000)
    libm.so.6 => /lib/libm.so.6 (0x00002af4aba19000)
    libgmp.so.3 => /usr/lib/libgmp.so.3 (0x00002af4abc9b000)
    librt.so.1 => /lib/librt.so.1 (0x00002af4abedb000)
    libc.so.6 => /lib/libc.so.6 (0x00002af4ac0e4000)
    libgslcblas.so.0 => /usr/lib/libgslcblas.so.0 (0x00002af4ac424000)
    libgfortran.so.1 => /usr/lib/gcc/x86_64-pc-linux-gnu/4.1.2/libgfortran.so.1 (0x00002af4ac654000)
    libatlas.so.0 => /usr/lib/libatlas.so.0 (0x00002af4ac8eb000)
    libpthread.so.0 => /lib/libpthread.so.0 (0x00002af4ad123000)
    libcblas.so.0 => /usr/lib/libcblas.so.0 (0x00002af4ad33e000)
    libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00002af4ad55d000)
    /lib64/ld-linux-x86-64.so.2 (0x00002af4aa6b0000)

And presumably the various BLAS implementations share an identical interface. You can still change the library at run-time with LD_LIBRARY_PATH. Or am I missing something?

Xiao-Yong
--
c/* __o/* <\ * (__ */\ <
participants (5)

- Alberto Ruiz
- Bryan O'Sullivan
- Judah Jacobson
- Patrick Perry
- Xiao-Yong Jin