
On Sun, 20 Oct 2002, Duncan Coutts wrote:
On Sun, 20 Oct 2002 15:08:30 +0300, Eray Ozkural wrote:
You guys know about the expected running time of randomized quicksort, right? There is also the heapsort algorithm. Both are fine for *any* data that you can't handle with a bucket/bin sort.
I've often wondered why heapsort isn't used for the standard sort function.
I heard that one reason Hugs used insertion sort was for good lazy behaviour: you don't have to pay for the whole N^2 sort at once; each element that is actually required pays just N.
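[A minimal sketch of the insertion sort in question; this is my illustration, not code from the original mail. Because foldr and insert both produce output lazily, taking only the head of the result forces roughly N comparisons rather than the full N^2 sort:]

    -- Insertion sort as a right fold.  Lazily, head (isort xs)
    -- costs about N comparisons: each insert only has to produce
    -- its first element, which needs just the first element of
    -- its argument.  Each later element pays up to N again.
    isort :: Ord a => [a] -> [a]
    isort = foldr insert []
      where
        insert x [] = [x]
        insert x (y:ys)
          | x <= y    = x : y : ys
          | otherwise = y : insert x ys

    -- e.g. head (isort [5,3,8,1]) returns 1 after ~N comparisons.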
With heapsort you'd get this behaviour too. You pay N for the first element (to build the heap) and then log N for each subsequent element.
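[For concreteness, a heapsort over a pairing heap has exactly this profile under laziness. This sketch is mine, added for illustration; Heap, merge, mergePairs and hsort are assumed names, not anything from Hugs, and the per-element log N bound is amortized:]

    -- A pairing heap: building it is O(N), taking the minimum
    -- is O(log N) amortized.
    data Heap a = Empty | Node a [Heap a]

    merge :: Ord a => Heap a -> Heap a -> Heap a
    merge Empty h = h
    merge h Empty = h
    merge h1@(Node x hs1) h2@(Node y hs2)
      | x <= y    = Node x (h2 : hs1)
      | otherwise = Node y (h1 : hs2)

    mergePairs :: Ord a => [Heap a] -> Heap a
    mergePairs []         = Empty
    mergePairs [h]        = h
    mergePairs (h1:h2:hs) = merge (merge h1 h2) (mergePairs hs)

    -- Heapsort: build the heap, then repeatedly take the minimum.
    -- Lazily, the first element pays O(N) (building the heap) and
    -- each subsequent one O(log N) amortized (merging the children).
    hsort :: Ord a => [a] -> [a]
    hsort = unroll . mergePairs . map (\x -> Node x [])
      where
        unroll Empty       = []
        unroll (Node x hs) = x : unroll (mergePairs hs)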
Quicksort, on the other hand, is monolithic, as is mergesort (right?).
I don't know whether, as implemented, it is effectively monolithic, but surely it needn't be: if you ask for just the first item, then at each level you only recurse into the partition holding the elements smaller than the pivot, i.e. assuming `perfect partitioning' you do n + n/2 + n/4 + ... approx 2n work to find the smallest element. What I'm not sure about is whether this is done by creating a tree structure (either explicitly or implicitly), and I haven't thought about whether flattening that to a list is done in a way which minimises the cost of producing the whole list at the expense of making it less lazy.

___cheers,_dave_________________________________________________________
www.cs.bris.ac.uk/~tweed/  | `It's no good going home to practise
email:tweed@cs.bris.ac.uk  | a Special Outdoor Song which Has To Be
work tel:(0117) 954-5250   | Sung In The Snow' -- Winnie the Pooh
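[For reference, the textbook lazy quicksort in Haskell makes the argument above concrete; a sketch added for illustration, not from the original mail. Demanding only the head forces just the left-hand recursive calls; the right-hand partitions stay as unevaluated thunks, so with good pivots finding the smallest element costs about n + n/2 + n/4 + ... approx 2n comparisons:]

    -- Lazy quicksort: head (qsort xs) only evaluates the chain of
    -- "smaller" partitions; each "larger" partition is left as a
    -- thunk until some later element is demanded.
    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p]
                ++ [p]
                ++ qsort [x | x <- xs, x >= p]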