
Don Stewart
goalieca:
So in a few years' time, when GHC has matured, we can expect performance to be on par with current Clean? So Clean is a good approximation to peak performance?
If I remember the numbers, Clean is pretty close to C for most benchmarks, so I guess it is fair to say it is a good approximation to practical peak performance, which shows that it is possible to write efficient low-level code in Clean.
And remember, Haskell is usually competing against 'high-level' languages like Python for adoption, where we're 5-500x faster anyway...
Unfortunately, they replaced line counts with bytes of gzip'ed code -- while the former certainly has its problems, I simply cannot imagine what relevance the latter has (beyond hiding extreme amounts of repetitive boilerplate in certain languages).

When we compete against Python and its ilk, we do so for programmer productivity first and performance second. LOC was a nice measure, and it encouraged terser and more idiomatic programs than the current crop of performance-tweaked low-level stuff.

BTW, Python isn't so bad, performance-wise. Much of what I do consists of reading some files, building up some hashes (associative arrays or finite maps, depending on where you come from :-), and generating some output. Python used to do pretty well here compared to Haskell, with rather efficient hashes and text parsing, although I suspect ByteString IO and other optimizations may have changed that now.

-k
--
If I haven't seen further, it is by standing in the footprints of giants
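As a rough illustration of that workload, here is a minimal Haskell sketch that reads a file with strict ByteString IO, builds a finite map of word counts, and prints the result. The file name "input.txt" and the word-counting task are placeholders, just one example of the read-files/build-hashes/generate-output pattern described above, not anything from the original posts.

    import qualified Data.ByteString.Char8 as B
    import qualified Data.Map.Strict as M

    -- Read a file strictly, build a finite map (word -> count),
    -- and generate some output from it.
    main :: IO ()
    main = do
      contents <- B.readFile "input.txt"        -- strict ByteString IO
      let counts = M.fromListWith (+)
                     [ (w, 1 :: Int) | w <- B.words contents ]
      mapM_ (\(w, n) -> putStrLn (B.unpack w ++ " " ++ show n))
            (M.toList counts)

The finite map plays the role of Python's built-in dict here; whether this beats the equivalent Python script will depend on the GHC version and the containers/bytestring libraries in use.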