
andrewcoppin:
Neil Mitchell wrote:
Hi
I agree with Bulat that Haskell has, if anything, even better optimisation potential than something like C. With Haskell you can do the kind of crazy high-level optimisations that, in something like C, would demand really advanced alias analysis. Compare this to low-level optimisations, which in Haskell require strictness analysis but in C are easy. At some point high-level optimisation will become more important than low-level; when it does, we win :-)
I had a conversation where Mr C++ basically said that my code performs 3 loops and that it's "not possible" to optimise that down to the single loop the C program uses. Then I (or rather, Don) demonstrated that the stream fusion library *has* optimised it down to just 2 loops. (Apparently the library isn't 100% complete yet.)
Right, we haven't bothered with a streaming version of 'words' yet. However, maximumBy and map happily fuse into a single loop.
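For concreteness, here is a minimal sketch of the kind of pipeline being discussed: find the length of the longest word in the input using words, map and maximumBy. The names and structure are my own illustration rather than the code from the thread, and with ordinary Prelude lists the three stages are only conceptually separate passes; it's the stream fusion library that merges the map/maximumBy stages into one loop, with 'words' still a separate pass, as above.

    import Data.List (maximumBy)

    -- Three conceptually separate passes: 'words', 'map length', 'maximumBy'.
    -- Assumes the input contains at least one word.
    longestWordLength :: String -> Int
    longestWordLength = maximumBy compare . map length . words

    main :: IO ()
    main = interact (show . longestWordLength)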
Hmm, perhaps to really show this off, we need a more complicated program. Something that would be just too hard to implement as 1 loop in C, but is easy for the GHC optimiser to build. I shall meditate on this further...
Do you have the single-loop C program, by the way? I'd be curious to see whether this is really "feasible": it would have to do the buffering, tokenising and accumulating in one go, which I imagine is a bit hairy. And it shouldn't significantly outperform, say, a bytestring version; if it does, I'd like to see that. -- Don
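(For comparison, a minimal sketch of what a bytestring version of the same task might look like; this is an assumption for illustration, not code from the thread. Data.ByteString.Char8's getContents, words and length keep the data as packed byte strings throughout, so the comparison against a hand-written single-loop C program is at least fair.)

    import qualified Data.ByteString.Char8 as B
    import Data.List (maximumBy)
    import Data.Ord  (comparing)

    -- Print the longest word read from stdin, keeping the data as packed
    -- ByteStrings throughout. Guard against empty input so maximumBy is total.
    main :: IO ()
    main = do
      input <- B.getContents
      case B.words input of
        [] -> return ()
        ws -> B.putStrLn (maximumBy (comparing B.length) ws)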