
Eugene Kirpichov
2009/12/29 Will Ness:
Daniel Fischer writes:
On Tuesday, 29 December 2009, 04:38:21, Will Ness wrote:
Now _this_, when tested as interpreted code in GHCi, runs about 2.5x faster than the Priority Queue based code from Melissa O'Neill's ZIP package mentioned on the haskellwiki/Prime_Numbers page, with about half the reported memory use, in producing 10,000 to 300,000 primes.
It is also faster than BayerPrimes.hs from the ZIP package over the tested range, at about 35 lines of code in total.
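The code in question is a list-based merging sieve in the spirit of the haskellwiki Prime_Numbers page; a simplified sketch (with a plain linear merge rather than the actual tree-shaped merging, and not the exact 35-line version) looks something like this:

  primes :: [Integer]
  primes = 2 : 3 : minus [5,7..] (unionAll [[p*p, p*p+2*p ..] | p <- tail primes])

  -- ordered difference of two increasing lists
  minus :: Ord a => [a] -> [a] -> [a]
  minus xs@(x:xt) ys@(y:yt)
    | x < y     = x : minus xt ys
    | x == y    =     minus xt yt
    | otherwise =     minus xs yt
  minus xs _    = xs

  -- ordered union of two increasing lists, without duplicates
  union :: Ord a => [a] -> [a] -> [a]
  union xs@(x:xt) ys@(y:yt)
    | x < y     = x : union xt ys
    | x == y    = x : union xt yt
    | otherwise = y : union xs yt
  union xs []   = xs
  union [] ys   = ys

  -- merge an infinite list of lists whose heads are increasing,
  -- so the head of the first list is always the smallest element
  unionAll :: Ord a => [[a]] -> [a]
  unionAll ((x:xs):t) = x : union xs (unionAll t)
  unionAll _          = []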
That's nice. However, the important criterion is how compiled code (-O2)
fares. Do the relations continue to hold? How does it compare to a bitsieve?
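By a bitsieve I mean a monolithic sieve over a mutable array up to a fixed limit, roughly along these lines (a sketch only, using a Bool per entry rather than packed bits):

  import Control.Monad (forM_, when)
  import Data.Array.ST (runSTUArray, newArray, readArray, writeArray)
  import Data.Array.Unboxed (UArray, assocs)

  -- Sieve all primes up to n with a mutable unboxed array.
  primesUpTo :: Int -> [Int]
  primesUpTo n = [i | (i, True) <- assocs sieved]
    where
      sieved :: UArray Int Bool
      sieved = runSTUArray $ do
        arr <- newArray (2, n) True
        forM_ [2 .. floor (sqrt (fromIntegral n :: Double))] $ \p -> do
          isPrime <- readArray arr p
          when isPrime $
            forM_ [p*p, p*p + p .. n] $ \m -> writeArray arr m False
        return arr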
Haven't gotten to that part yet. :)
But why is that more important? Wouldn't that tell us more about the compiler's performance than about the code itself?
If you mean "algorithmic complexity", you shouldn't care about a difference of 2.5x.
It's not just at one point; the asymptotics are _the_same_ across the range that I've tested (admittedly, somewhat narrow). I measure local behaviour simply as the logarithm of the ratio of run times, taken in the base of the ratio of problem sizes.
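That is, given run times t1 and t2 at problem sizes n1 and n2, the local order of growth k in t ~ c*n^k comes out as logBase (n2/n1) (t2/t1):

  -- Estimate the local order of growth k (t ~ c * n^k) from two measurements.
  empiricalOrder :: (Double, Double)   -- (n1, t1)
                 -> (Double, Double)   -- (n2, t2)
                 -> Double
  empiricalOrder (n1, t1) (n2, t2) = logBase (n2 / n1) (t2 / t1)

  -- e.g. empiricalOrder (10000, t1) (300000, t2)  -- with measured run times t1, t2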
If you mean "actual performance for a particular task", you should measure the performance in realistic conditions. Namely, if you're implementing a program that needs efficient generation of primes, won't you compile it with -O2?
If I realistically needed primes generated in a real-life setting, I'd probably have to use some C for that. If OTOH we're talking about tutorial code that is to be as efficient as possible without losing its clarity, being a reflection of the essentials of the problem, then any overly complicated advanced Haskell wouldn't be my choice either. And seeing that this overly complicated (IMO), steps-jumping PQ-based code was sold to us as the only "faithful" rendering of the sieve, I wanted to see for myself whether this really holds water.