
2) It is performant (mostly). At least it outperforms the other Haskell IO methods I have tried. My 'wc' is about twice as fast as the current shootout version in informal tests (the shootout code is included in the repo). My md5 can sum somewhere between 2 and 4 MB/sec on my hardware.
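For a rough idea of the kind of straightforward baseline I mean, something like the following lazy-ByteString wc would do; this is only a sketch for comparison, not the repo's code or the actual shootout entry.

module Main where

import qualified Data.ByteString.Lazy.Char8 as L

-- Count lines, words and bytes on stdin, wc-style.
main :: IO ()
main = do
  s <- L.getContents
  let ls = L.count '\n' s       -- newline count
      ws = length (L.words s)   -- word count
      cs = L.length s           -- byte count
  putStrLn $ unwords [show ls, show ws, show cs]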
You know that http://www.bagley.org/~doug/shootout/ is frozen, don't you? For a current version, look at http://shootout.alioth.debian.org/. The current version is fast but ugly. There was some committee work on the Haskell mailing lists to make it prettier, but it didn't make it to the shootout yet.
Thanks, I do have an old version; it wasn't on bagley.org, but I'm not sure exactly where I found it. I'll compare against the newest version when I get home. The reason I compared to the shootout program was to get a sense of how well the API I was developing stacked up against hand-optimized Haskell. So even getting pretty close is a win as far as I'm concerned. (... quick google ...) I just found Ian's md5 implementation. I'll compare to that as well when I get a chance.
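(For the comparisons I'm just measuring whole-file throughput, roughly along the lines of the sketch below. The length fold stands in for whichever hash you want to benchmark, so treat it as illustrative rather than my actual harness.)

module Main where

import qualified Data.ByteString.Lazy as L
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import System.Environment (getArgs)

-- Read a file, force it through a fold, and report MB/sec.
main :: IO ()
main = do
  [path] <- getArgs
  bytes  <- L.readFile path
  start  <- getCurrentTime
  let n = L.length bytes        -- swap in the function you want to benchmark
  n `seq` return ()
  end    <- getCurrentTime
  let secs = realToFrac (diffUTCTime end start) :: Double
      mb   = fromIntegral n / (1024 * 1024) :: Double
  putStrLn $ "throughput: " ++ show (mb / secs) ++ " MB/sec"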
BTW, do we care about such benchmarks? I am going to have some spare time and I could work on Haskell solutions a bit, but I'm not sure it's worth the hassle.
I think they are interesting as an indication of where Haskell, and GHC in particular, are weak. If the techniques developed for optimizing shootout scripts can drive better optimizations or new, better libraries, I think that's worthwhile. OTOH, nobody asks whether Perl golf (for example) is worthwhile; they just do it for kicks (as far as I can tell).