
G'day all.
On Sun, 20 Oct 2002 15:08:30 +0300, Eray Ozkural wrote:
> You guys know about the expected running time of randomized quicksort, right?
My guess is that a well-written merge sort would probably beat it on large lists, because choosing a random element from a linked list takes O(N) time. I'd like to be proven wrong, though, so why don't you implement it and report back what you find?
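For concreteness, a merge sort that only ever walks the list front to back might look something like this (just a sketch, with illustrative names; not a tuned implementation):

msort :: Ord a => [a] -> [a]
msort = mergeAll . map (:[])
  where
    -- Repeatedly merge adjacent runs until only one run is left.
    mergeAll []   = []
    mergeAll [xs] = xs
    mergeAll xss  = mergeAll (mergePairs xss)

    mergePairs (xs:ys:rest) = merge xs ys : mergePairs rest
    mergePairs xss          = xss

    -- The usual merge of two sorted lists.
    merge [] ys = ys
    merge xs [] = xs
    merge (x:xs) (y:ys)
      | x <= y    = x : merge xs (y:ys)
      | otherwise = y : merge (x:xs) ys

There is no random access anywhere in there, which is presumably why merge sort is so comfortable on linked lists.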
> There is also the heapsort algorithm. Both are fine for *any* data that you can't do with a bucket/bin sort.
On Sun, Oct 20, 2002 at 01:29:33PM +0100, Duncan Coutts wrote:
> I've often wondered why heapsort isn't used for the standard sort function.
Over linked lists you wouldn't use an array-based heap; you'd use a tree-based priority queue instead. At that point, my guess is that the cost of the intermediate storage might dominate. Once again, don't take my word for it; try it.
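One way to realise such a tree-based priority queue is a pairing heap, roughly along these lines (only a sketch; the names Heap, mergeH and so on are just illustrative):

data Heap a = Empty | Node a [Heap a]

-- Merging two pairing heaps is O(1).
mergeH :: Ord a => Heap a -> Heap a -> Heap a
mergeH Empty h = h
mergeH h Empty = h
mergeH h1@(Node x hs1) h2@(Node y hs2)
  | x <= y    = Node x (h2:hs1)
  | otherwise = Node y (h1:hs2)

-- Building the heap is O(N): one O(1) merge per element.
fromList :: Ord a => [a] -> Heap a
fromList = foldr (\x h -> mergeH (Node x []) h) Empty

-- Pulling elements out smallest-first costs about O(log N) amortised each,
-- and the output list is produced lazily.
toSortedList :: Ord a => Heap a -> [a]
toSortedList Empty       = []
toSortedList (Node x hs) = x : toSortedList (mergePairs hs)
  where
    mergePairs (a:b:rest) = mergeH (mergeH a b) (mergePairs rest)
    mergePairs [a]        = a
    mergePairs []         = Empty

hsort :: Ord a => [a] -> [a]
hsort = toSortedList . fromList

Something like take k (hsort xs) should then cost roughly O(N + k log N), but every element goes through a freshly allocated Node, which is where the intermediate storage cost comes in.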
> I heard that one reason Hugs used insertion sort was for good lazy behaviour: you don't have to pay for the whole N^2 sort at once; each element that is actually required pays just N.
> With heapsort you'd get this behaviour too. You pay N for the first element (to build the heap) and then log N for each subsequent element.
> Quicksort, on the other hand, is monolithic, as is mergesort (right?).
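(As an aside, the lazily consumed insertion sort Duncan describes can be written as a plain foldr; just a sketch:

isort :: Ord a => [a] -> [a]
isort = foldr insert []
  where
    -- Insert one element into an already-sorted list.
    insert x [] = [x]
    insert x (y:ys)
      | x <= y    = x : y : ys
      | otherwise = y : insert x ys

Because foldr builds the result lazily, head (isort xs) forces only about one comparison per insertion, roughly N in total; the full N^2 is paid only if you demand the whole list.)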
Actually, quicksort can be written to take only O(log N) time on average to pull out each element after an initial O(N) cost in setting things up. Something like this definition will do the trick, assuming that choose_pivot_element costs no more than O(K) time over a list of length K:

qsort :: (Ord a) => [a] -> [a]
qsort xs = qsort' xs []

-- qsort' prepends its sorted input onto its second argument, so only as
-- much of the sorted list as is actually demanded ever gets built.
qsort' []  = id
qsort' [x] = (x:)
qsort' xs  = let pivot = choose_pivot_element xs
             in  qsort' (filter (< pivot) xs)      -- split three ways so the
                 . (filter (== pivot) xs ++)       -- recursive calls shrink
                 . qsort' (filter (> pivot) xs)    -- even with duplicates

So take k (qsort xs) costs roughly O(N + k log N) on average, not the price of the full sort. What it boils down to is that both heap sort and quick sort were designed to sort arrays, so we shouldn't be surprised if they perform poorly on linked lists.

Cheers,
Andrew Bromage