
Andrew Coppin wrote:
Bryan O'Sullivan wrote:
Andrew Coppin wrote:
On the other hand, this is the antithesis of Haskell. We start with a high-level, declarative program, which performs horribly, and end up with a hand-optimised blob that's much harder to read but goes way faster.
Buh? This is hard to read?
Look closer: it's *harder* to read.
mean xs = sum xs / fromIntegral (length xs)
mean n m = go 0 0 n
  where go s l x | x > m     = s / fromIntegral l
                 | otherwise = go (s+x) (l+1) (x+1)
One version makes it instantly clear, at a glance, what is happening. The other requires you to mentally walk around a loop, imperative-style, to figure out what's happening. It's not a *big* deal, but it's unfortunate.
I'm more worried about what happens in less trivial examples. [Let's face it, who wants to compute the sum of the numbers from 1 to N?]
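(For reference, I'd imagine the optimised version above amounts to essentially the obvious C loop. A rough, untested sketch, with the names s, l and x mirroring the Haskell accumulators:)

#include <stdio.h>

/* Mean of the doubles n, n+1, ..., m -- s and l play the same
   role as the accumulators in the hand-optimised Haskell loop. */
double mean(double n, double m)
{
    double s = 0.0;  /* running sum   */
    long   l = 0;    /* running count */
    for (double x = n; x <= m; x += 1.0) {
        s += x;
        l += 1;
    }
    return s / (double)l;
}

int main(void)
{
    printf("%f\n", mean(1.0, 1e9));
    return 0;
}

Which is fine as far as it goes, but it's hardly what I'd call declarative.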
Hm, you're expecting magic, aren't you? Of course the first equation is easier to read, but it's no surprise that it may also be slower. Likewise, imperative bubblesort is easier to read than imperative quicksort, but far slower.

Put differently, making Haskell as fast as C is easy: just write a slower C program! Namely, one that is as easy to read as the Haskell version. If you implement

  mean xs = sum xs / fromIntegral (length xs)

directly in C, using linked lists, I bet you'll be delighted to discover that the two perform similarly.

Regards,
apfelmus
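PS: To make the experiment concrete, here is the sort of C I have in mind -- a rough, untested sketch (the struct cons type, the helper names sum_list and length_list, and the list-building code in main are just illustration):

#include <stdlib.h>
#include <stdio.h>

/* A cons cell: the C counterpart of a node in a Haskell list. */
struct cons {
    double       head;
    struct cons *tail;
};

/* Two separate traversals, mirroring sum xs and length xs. */
static double sum_list(const struct cons *xs)
{
    double s = 0.0;
    for (; xs != NULL; xs = xs->tail)
        s += xs->head;
    return s;
}

static long length_list(const struct cons *xs)
{
    long l = 0;
    for (; xs != NULL; xs = xs->tail)
        l++;
    return l;
}

/* mean xs = sum xs / fromIntegral (length xs), transcribed directly. */
static double mean(const struct cons *xs)
{
    return sum_list(xs) / (double)length_list(xs);
}

int main(void)
{
    /* Build the list [1..n] as heap-allocated cons cells. */
    long n = 1000000;
    struct cons *xs = NULL;
    for (long i = n; i >= 1; i--) {
        struct cons *cell = malloc(sizeof *cell);
        cell->head = (double)i;
        cell->tail = xs;
        xs = cell;
    }
    printf("%f\n", mean(xs));
    return 0;
}

Like the one-line Haskell version, this allocates a cell per element and chases pointers in two passes over the list, so it gives up exactly the performance that the Haskell one-liner gives up.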