
On Monday 24 May 2010 16:15:32, Markus Läll wrote:
On a list (which in Haskell really is a linked list internally), the complexity of the operation is always at least O(n), because to find the maximum you have to look through the whole list -- i.e. do n - 1 comparisons.
If you don't keep any extra state, such as pointers to the elements before and after the maximum, while going through the list to find it, then after finding the maximum value you have to go through the list again to locate its position and remove it.
For
delete (maximum xs) xs
the complexity is O(n + m), where m is the index of the first occurrence of the maximum. This traverses the whole list once to find the maximum, and then traverses it again, up to that first occurrence, to remove it.
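For completeness, here is a minimal runnable form of that approach (the name removeMaxDelete is just for illustration; delete comes from Data.List):

import Data.List (delete)

removeMaxDelete :: Ord a => [a] -> [a]
removeMaxDelete [] = []
removeMaxDelete xs = delete (maximum xs) xs  -- two passes: maximum, then delete

For example, removeMaxDelete [1,3,2,3] gives [1,2,3]: only the first occurrence of the maximum is removed.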
For
init $ sort xs
it is O(n log n) + O(n), where O(n log n) is the complexity of the sorting algorithm (it might be something else depending on the algorithm), and the extra O(n) is because the init function has to go through the list again to find and drop its last element.
And since we have changed the order of the elements anyway, drop 1 $ sortBy (flip compare) xs saves us that last traversal, because the maximum is now at the front. More generally, if changing the order is acceptable, the removal can be done in a single O(n) pass, with no sorting at all.
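As a sketch of that O(n), order-changing variant (the name and details are my own, not from the post above): one pass keeps the largest element seen so far aside and conses everything else onto an accumulator, so the result comes out in no particular order:

removeMaxUnordered :: Ord a => [a] -> [a]
removeMaxUnordered [] = []
removeMaxUnordered (x:xs) = go x [] xs
  where
    -- mx is the largest element seen so far; acc holds all the others
    go _  acc []     = acc
    go mx acc (y:ys)
      | y > mx    = go y (mx:acc) ys
      | otherwise = go mx (y:acc) ys

For example, removeMaxUnordered [1,3,2] gives [2,1].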
Although Haskell is lazy, so a list being sorted might not be fully sorted until its last element is demanded, consuming the result of init still forces the whole sort to happen, since init must reach the end of the sorted list.
I don't know how one would remove the maximum by going through the list only once, saving the positions of the maximum's predecessor and successor whenever a bigger element is found, so that the list could be reassembled in the middle once the traversal is done. Anyone have ideas?
remLargest :: Ord a => [a] -> [a]
remLargest [] = []
remLargest [_] = []
remLargest (x:xs) = go [] x xs
  where
    -- post accumulates, in reverse, the elements seen since the current
    -- maximum mx; when a bigger element arrives, everything before it is
    -- known to be safe and can be emitted.
    go post _ [] = reverse post
    go post mx (y:ys)
      | mx < y    = mx : reverse post ++ go [] y ys
      | otherwise = go (y:post) mx ys

Not as ugly as I feared, and not as inefficient for descending lists as I feared.
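A quick check in GHCi (assuming the definition above is in scope):

*Main> remLargest [3,1,3,2]
[1,3,2]
*Main> remLargest [5,4,3,2,1]
[4,3,2,1]

Only the first occurrence of the maximum is removed, the relative order of the remaining elements is preserved, and the list is traversed only once (at the cost of the reverses on the post accumulator).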