
On Tuesday 27 September 2011, 13:46:52, Yitzchak Gale wrote:
I wrote:
After running some tests, it seems empirically that you can use m `div` 2 instead of m for lists of even length, which gives you a nice speedup for larger values of m. For lists of prime length, the best you can do seems to be m-1. It's more complicated for lists of non-prime odd length. I can't immediately see why any of that is true.
Well, obviously, that weird behavior has to do with the regularity of my test data. If I randomize more, the results become more believable. It seems that m-1 or m-2 works empirically, but that could just be because the probability of a collision is extremely low.
Yes. With less regular data, we need to consider all chunks of length d in the input, not only those starting at index k*d. If any of them meets three m-chunks (where m is the parameter to sortAlmostBy), the precondition of sortAlmostBy may be violated. The smallest m that guarantees the precondition is then m = d-1.

But with less regular data, if the list is almost monotonically increasing, the probability that the last element of a d-chunk is smaller than its first element is significantly lower than 1/2, so you have a much better chance that m = d-2 will work than you have in the regular case for odd d.
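
To illustrate the counting argument, here is a small self-contained check. It assumes the m-chunks are the consecutive blocks of length m starting at indices 0, m, 2m, ...; the helper names chunksMet and maxChunksMet are mine, not anything from sortAlmostBy itself. It just counts how many m-chunks a window of d consecutive positions can touch: with m = d-1 the worst case is two, while with m = d-2 some windows already meet three, which is exactly when the precondition can fail.

import Data.List (nub)

-- Indices of the m-chunks (blocks starting at 0, m, 2m, ...) that the
-- d consecutive positions starting at index i fall into.
chunksMet :: Int -> Int -> Int -> [Int]
chunksMet m d i = nub [ j `div` m | j <- [i .. i + d - 1] ]

-- Worst case over all starting positions in a list of length n.
maxChunksMet :: Int -> Int -> Int -> Int
maxChunksMet m d n = maximum [ length (chunksMet m d i) | i <- [0 .. n - d] ]

main :: IO ()
main = do
    let d = 5
        n = 100
    -- m = d-1: every window of length d meets at most two m-chunks,
    -- so the precondition is guaranteed.
    print (maxChunksMet (d - 1) d n)    -- prints 2
    -- m = d-2: some windows meet three m-chunks, so the precondition
    -- can be violated (though it often holds for nearly sorted data).
    print (maxChunksMet (d - 2) d n)    -- prints 3

This only checks the chunk-counting step, not sortAlmostBy itself; whether m = d-2 actually works on a given list still depends on the data, as discussed above.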