Removing the biggest element from a list - maybe slow?

Hi,

I want to remove the biggest element from a list:

withoutBiggest (x:xs) = withoutBiggestImpl (biggest x xs) [] (x:xs)
  where
    biggest :: (Ord a) => a -> [a] -> a
    biggest big [] = big
    biggest big (x:xs) =
        if x > big then biggest x xs else biggest big xs

    withoutBiggestImpl :: (Eq a) => a -> [a] -> [a] -> [a]
    withoutBiggestImpl big before (x:xs) =
        if big == x
          then before ++ xs
          else withoutBiggestImpl big (before ++ [x]) xs

Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.

Can this be done faster?

Regards,
Nathan

I want to remove the biggest element from a list:
[withoutBiggest code snipped]
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
import Data.List
init sort xs

or

import Data.List
delete (maximum xs) xs

[withoutBiggest code snipped]
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
import Data.List
init sort xs

or

import Data.List
delete (maximum xs) xs
That should be:

import Data.List
init $ sort xs
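A quick GHCi session shows what the two suggestions do on a small example list (values chosen here only for illustration):

Prelude> :m +Data.List
Prelude Data.List> let xs = [3,1,4,1,5,9,2,6]
Prelude Data.List> init (sort xs)
[1,1,2,3,4,5,6]
Prelude Data.List> delete (maximum xs) xs
[3,1,4,1,5,2,6]

Note that the first keeps the elements sorted, while the second keeps the original order.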

On Mon, May 24, 2010 at 09:27, Lafras Uys wrote:
I want to remove the biggest element from a list:
[withoutBiggest code snipped]
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
import Data.List
init sort xs

This is not linear time. It also doesn't maintain the order of the list.

or

import Data.List
delete (maximum xs) xs
I don't think you are going to do better than this. Just by the nature of the problem, you need to go through the list twice: either forward initially and then backward during the "return phase" of your function, or twice forward with a pair of tail-recursive operations. The suggestion above does the latter and, I believe, achieves optimal expenditures of both time and space.
-- mac

On Mon, 24 May 2010 15:27:03 +0200, Lafras Uys wrote:
I want to remove the biggest element from a list:
[withoutBiggest code snipped]
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
import Data.List
init sort xs

or

import Data.List
delete (maximum xs) xs
I see. I would think the first solution still takes too much time, because it needs to sort the list.

Is "delete" a fast operation (O(1))? Or does it internally traverse the list?

Thanks!
Nathan

It is linear time, implemented roughly as below:

delete z [] = []
delete z (x:xs) = if x == z then delete z xs else x : delete z xs
On Mon, May 24, 2010 at 09:42, Nathan Huesken wrote:
On Mon, 24 May 2010 15:27:03 +0200, Lafras Uys wrote:
I want to remove the biggest element from a list:
[withoutBiggest code snipped]
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
import Data.List
init sort xs

or

import Data.List
delete (maximum xs) xs
I see. I would think the first solution still takes too much time, because it needs to sort the list.
Is "delete" a fast operation (O(1))? Or does it internally traverse the list?
Thanks!
Nathan
-- mac

On Monday 24 May 2010 15:49:35, matthew coolbeth wrote:
It is linear time, implemented roughly as below:

delete z [] = []
delete z (x:xs) = if x == z then delete z xs else x : delete z xs
delete only deletes the first occurrence, so it's equivalent to

delete z (x:xs)
  | x == z    = xs
  | otherwise = x : delete z xs
delete _ [] = []

thus it's O(index of z).
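For example (values made up for illustration):

Prelude> :m +Data.List
Prelude Data.List> delete 3 [1,3,2,3]
[1,2,3]

Only the first 3 is removed.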

delete (maximum xs) xs
On a list, which internally is also a list, the complexity of the operation is always at least O(n), because to find the maximum you have to look through the whole list -- i.e. do n comparisons. If you don't have any other variables, like pointers to the previous and next elements of the maximum, while you are going through the list to find it, then after finding the maximum value you have to go through the list again to find its position and remove it...

For

delete (maximum xs) xs

the complexity is O(n + m), where m is the index of the maximum. This goes through the list to find the maximum, and then goes through it again up until it finds the first occurrence of it.

For

init $ sort xs

it is O(n log n) + n, where O(n log n) is the complexity of the sorting algorithm (it might be something else depending on the algorithm), and the + n is because the init function has to go through the list again to find its last element and remove it.

Although Haskell is lazy, and when sorting a list it might not get fully sorted until you request the last element from the sorted list, the init function still forces it to do so.

I don't know how one would do the removing of the maximum by going through the list and, whenever finding a bigger element, saving the positions of its predecessor and successor, so it could reassemble the list in the middle when it is done looking through it. Anyone have ideas?

On Monday 24 May 2010 16:15:32, Markus Läll wrote:
On a list, which internally is also a list, the complexity of the operation is always at least O(n), because to find the maximum you have to look through the whole list -- i.e. do n comparisons.
If you don't have any other variables, like pointers to the previous and next elements of the maximum, while you are going through the list to find it, then after finding the maximum value you have to go through the list again to find its position and remove it...
For
delete (maximum xs) xs
the complexity is O(n + m), where m is the index of the maximum. This goes through the list to find the maximum, and then goes through it again up until it finds the first occurrence of it.
For
init $ sort xs
it is O(n log n) + n, where O(n log n) is the complexity of the sorting algorithm (it might be something else depending on the algorithm), and the + n is because the init function has to go through the list again to find its last element and remove it.
And since we changed the order of elements anyway,

drop 1 $ sortBy (flip compare) xs

saves us the last traversal. But if the order may be changed, it can be done in O(n).
Although Haskell is lazy, and when sorting a list, it might not get fully sorted until you request the last element from the sorted list, the init function still forces it to do so.
I don't know how one would do the removing of the maximum by going through the list and, whenever finding a bigger element, saving the positions of its predecessor and successor, so it could reassemble the list in the middle when it is done looking through it. Anyone have ideas?
remLargest :: Ord a => [a] -> [a]
remLargest []  = []
remLargest [_] = []
remLargest (x:xs) = go [] x xs
  where
    go post _  []     = reverse post
    go post mx (y:ys)
      | mx < y    = mx : reverse post ++ go [] y ys
      | otherwise = go (y:post) mx ys

Not as ugly as I feared, and not as inefficient for descending lists as I feared.
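For example (list made up for illustration), the maximum goes away and the order of the remaining elements is preserved:

remLargest [3,1,4,1,5,9,2,6]  ==  [3,1,4,1,5,2,6]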

On Monday 24 May 2010 16:47:50, Daniel Fischer wrote:
But if the order may be changed, it can be done in O(n).
And by that I meant "in O(n) time with one traversal and in constant space", since delete (maximum xs) xs is O(n) time too and doesn't change the order.

On Mon, May 24, 2010 at 04:47:50PM +0200, Daniel Fischer wrote:
remLargest :: Ord a => [a] -> [a]
remLargest []  = []
remLargest [_] = []
remLargest (x:xs) = go [] x xs
  where
    go post _  []     = reverse post
    go post mx (y:ys)
      | mx < y    = mx : reverse post ++ go [] y ys
      | otherwise = go (y:post) mx ys
Doesn't retain the order of the list:

removeLargest (x:xs@(_:_)) = go x xs
  where
    go x []  = []
    go x (x2:xs)
      | x < x2    = x : go x2 xs
      | otherwise = x2 : go x xs
removeLargest _ = []

Traverses only once, so it is O(n).

-- Felipe.
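For example (values made up for illustration), note how the relative order of the remaining elements changes:

removeLargest [3,1,4,1,5]  ==  [1,3,1,4]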

On Monday 24 May 2010 14:42:28, Nathan Huesken wrote:
Hi,
I want to remove the biggest element from a list:
withoutBiggest (x:xs) = withoutBiggestImpl (biggest x xs) [] (x:xs)
  where
    biggest :: (Ord a) => a -> [a] -> a
    biggest big [] = big
    biggest big (x:xs) =
        if x > big then biggest x xs else biggest big xs

    withoutBiggestImpl :: (Eq a) => a -> [a] -> [a] -> [a]
    withoutBiggestImpl big before (x:xs) =
        if big == x
          then before ++ xs
          else withoutBiggestImpl big (before ++ [x]) xs
Just to make sure: you are aware that this removes only the first occurrence of the largest element if there are several?

In that code, collecting the before-list is
a) unnecessary
b) inefficient

Re b), consider the list [1 .. n] for some large n. Then you have

wBI n [] [1 .. n]
  ~> wBI n ([] ++ [1]) [2 .. n]
  ~> wBI n (([] ++ [1]) ++ [2]) [3 .. n]
  ~> wBI n ((([] ++ [1]) ++ [2]) ++ [3]) [4 .. n]
  ~> ...
  ~> wBI n ((...((([] ++ [1]) ++ [2]) ++ [3]) ...) ++ [n-1]) [n]
  ~> ((...((([] ++ [1]) ++ [2]) ++ [3]) ...) ++ [n-1])

And:

Prelude> let func :: Int -> Int; func n = last (foldl (\xs k -> xs ++ [k]) [] [1 .. n])
(0.00 secs, 524224 bytes)
Prelude> func 5000
5000  (0.44 secs, 351528996 bytes)
Prelude> func 10000
10000  (2.63 secs, 1404077120 bytes)
Prelude> func 20000
20000  (20.03 secs, 5613242020 bytes)

Ouch.

The short code to achieve the same is (as has been posted before)

import Data.List

withoutBiggest [] = []
withoutBiggest xs = delete (maximum xs) xs

That also traverses the list twice, but is much faster because it doesn't build a left-nested chain of (++)-applications. Like your code, it requires the entire list to be in memory, though.

If you need the elements in the original order (except the first occurrence of the maximum), you can't completely eliminate that memory requirement, though you can reduce it somewhat on average (ugly code, not more efficient than delete (maximum xs) xs in general, worst case considerably slower).

If you don't need to retain the order, you can efficiently do it with one traversal:

withoutLargest [] = []
withoutLargest (x:xs) = go x xs
  where
    go _ [] = []
    go p (y:ys)
      | p < y     = p : go y ys
      | otherwise = y : go p ys
Works, but I am a little concerned that this is slower than needed, because the list has to be iterated twice.
Can this be done faster?
Regards, Nathan

On Mon, May 24, 2010 at 4:01 PM, Daniel Fischer wrote:
If you don't need to retain the order, you can efficiently do it with one traversal.
withoutLargest [] = []
withoutLargest (x:xs) = go x xs
  where
    go _ [] = []
    go p (y:ys)
      | p < y     = p : go y ys
      | otherwise = y : go p ys
And to be explicit (Daniel implied that), this version is also much more interesting from a memory point of view, since it can start producing the resulting list almost immediately, which means it can be used as a filter in a lazy pipeline able to handle infinite or just too big lists.

On the other hand, if you often need to perform this operation (removal of the maximum) and don't care about the order, there are much better data structures for it, particularly the priority queue. There are some good implementations of this on Hackage.

(NB: many of those implementations only provide for removing the minimum, but that only means that you have to flip the ordering so that your maximum becomes the minimum:

newtype FlipOrd a = FO a deriving (Eq)

instance (Ord a) => Ord (FlipOrd a) where
  compare (FO a) (FO b) = compare b a
)

-- Jedaï
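To make the priority-queue idea concrete, here is a minimal max-oriented pairing heap (a hand-rolled sketch with names made up here, not one of the Hackage packages; in practice you would use one of those):

data Heap a = Empty | Node a [Heap a]

insert :: Ord a => a -> Heap a -> Heap a
insert x = merge (Node x [])

merge :: Ord a => Heap a -> Heap a -> Heap a
merge Empty h = h
merge h Empty = h
merge h1@(Node x hs1) h2@(Node y hs2)
  | x >= y    = Node x (h2 : hs1)   -- keep the larger root on top
  | otherwise = Node y (h1 : hs2)

fromList :: Ord a => [a] -> Heap a
fromList = foldr insert Empty

findMax :: Heap a -> Maybe a
findMax Empty      = Nothing
findMax (Node x _) = Just x

-- Removing the maximum merges the root's children pairwise.
deleteMax :: Ord a => Heap a -> Heap a
deleteMax Empty       = Empty
deleteMax (Node _ hs) = mergePairs hs
  where
    mergePairs []           = Empty
    mergePairs [h]          = h
    mergePairs (h1:h2:rest) = merge (merge h1 h2) (mergePairs rest)

With such a structure, repeatedly removing the maximum costs O(log n) amortised per removal instead of a full list traversal each time.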

On Tuesday 25 May 2010 10:21:01, Chaddaï Fouché wrote:
On Mon, May 24, 2010 at 4:01 PM, Daniel Fischer wrote:
If you don't need to retain the order, you can efficiently do it with one traversal:
withoutLargest [] = []
withoutLargest (x:xs) = go x xs
  where
    go _ [] = []
    go p (y:ys)
      | p < y     = p : go y ys
      | otherwise = y : go p ys
And to be explicit (Daniel implied that) this version is also much more interesting from a memory point of view since it can start producing the resulting list almost immediately, which means it can be used as a filter in a lazy pipeline able to handle infinite or just too big lists.
On the other hand, if you often need to perform this operation (removal of the maximum) and don't care about the order, there are much better data structures to do this, particularly the priority queue. There are some good implementations of this on Hackage.
Yes. Very much yes. Unless you need to perform this operation really often but never (or almost never) need to insert new values. Then sortBy (flip compare) once and repeatedly tail afterwards may be even better than a priority queue.
(NB: many of those implementations only provide for removing the minimum, but that only means that you have to flip the ordering so that your maximum becomes the minimum:

newtype FlipOrd a = FO a deriving (Eq)

instance (Ord a) => Ord (FlipOrd a) where
  compare (FO a) (FO b) = compare b a
)

Here is an ugly one:

remLargest2 [] = []
remLargest2 (li:st) =
    if something_bigger_in_tail then li : result else result
  where
    ismax [] previous = ([], False)
    ismax (current:rest) previous =
        case (current_is_bigger_than_previous, but_something_even_bigger_in_tail) of
          (True,  True)  -> (current:newRest, True)
          (False, True)  -> (current:newRest, True)
          (False, False) -> (current:newRest, False)
          (True,  False) -> (newRest, True)   -- current is the biggest, let's leave it out
      where
        f = ismax rest
        current_is_bigger_than_previous = current > previous
        (newRest, but_something_even_bigger_in_tail) =
            if current_is_bigger_than_previous then f current else f previous
    (result, something_bigger_in_tail) = ismax st li

Besides the long names, could this be done somehow shorter?

The idea of it is to carry the maximum in 'previous' and compare it with every element when recursing down the list. When the recursion reaches the end, it starts to return, and on every step it tells the previous step whether there was something bigger down its road of recursion or not. This way every step knows whether to drop its 'current' element -- this drop happens only once.

So the steps it takes are definitely 2n, because it rolls out, and then has to return all the way -- even to get the first element (because for it there's the question: "drop it or not?"). The order-not-retaining functions thus far are faster, taking only n steps.

The performance of Daniel's remLargest depends on the order of elements in the list: the best case is if the list grows like (<=), so there's no use of an accumulator and concatenation; the worst case is when the list is composed of strictly descending lists -- then the time it takes is 3n (n for traversing the list, n for reversing all sublists and n for concatenating).

Cool problem ;-) And we should do tests!

On Tuesday 25 May 2010 17:30:25, Markus Läll wrote:
Here is an ugly one:
<moved below>
Besides the long names, could this be done somehow shorter?
In particular, it could be done faster. In fact, empirically, this seems to be rather O(n^f(n)), where f is an increasing function with values > 1.

(As usual, all code compiled with -O2; performance characteristics can be very different if interpreted or compiled without optimisations.)

---------------------------------------------------------------------
Prelude WithoutBiggest> testDec remLargest2 100000
1  (0.09 secs, 15961808 bytes)
Prelude WithoutBiggest> testDec remLargest2 200000
1  (0.23 secs, 29848872 bytes)
Prelude WithoutBiggest> testDec remLargest2 400000
1  (0.68 secs, 59199984 bytes)
Prelude WithoutBiggest> testDec remLargest2 800000
1  (2.39 secs, 118963628 bytes)
Prelude WithoutBiggest> testDec remLargest2 1600000
1  (8.89 secs, 237443716 bytes)
Prelude WithoutBiggest> testAsc remLargest2 100000
99999  (0.09 secs, 15426136 bytes)
Prelude WithoutBiggest> testAsc remLargest2 200000
199999  (0.24 secs, 29851428 bytes)
Prelude WithoutBiggest> testAsc remLargest2 400000
399999  (0.69 secs, 59200792 bytes)
Prelude WithoutBiggest> testAsc remLargest2 800000
799999  (2.38 secs, 118437128 bytes)
Prelude WithoutBiggest> testAsc remLargest2 1600000
1599999  (8.95 secs, 236917064 bytes)
Prelude WithoutBiggest> testBounce remLargest2 100000
-25501  (0.10 secs, 18231012 bytes)
Prelude WithoutBiggest> testBounce remLargest2 200000
29107  (0.28 secs, 32470584 bytes)
Prelude WithoutBiggest> testBounce remLargest2 400000
-21646  (0.82 secs, 64440016 bytes)
Prelude WithoutBiggest> testBounce remLargest2 800000
-29987  (2.83 secs, 128922424 bytes)
Prelude WithoutBiggest> testBounce remLargest2 1600000
-6415  (10.62 secs, 256836988 bytes)
---------------------------------------------------------------------

Whereas remLargest shows more or less linear behaviour:

---------------------------------------------------------------------
Prelude WithoutBiggest> testDec remLargest 100000
1  (0.03 secs, 6817632 bytes)
Prelude WithoutBiggest> testDec remLargest 200000
1  (0.06 secs, 13109604 bytes)
Prelude WithoutBiggest> testDec remLargest 1600000
1  (0.60 secs, 102758712 bytes)
Prelude WithoutBiggest> testAsc remLargest 100000
99999  (0.02 secs, 9715556 bytes)
Prelude WithoutBiggest> testAsc remLargest 200000
199999  (0.03 secs, 19286292 bytes)
Prelude WithoutBiggest> testAsc remLargest 1600000
1599999  (0.23 secs, 154411292 bytes)
Prelude WithoutBiggest> testBounce remLargest 100000
-25501  (0.03 secs, 10369436 bytes)
Prelude WithoutBiggest> testBounce remLargest 200000
29107  (0.08 secs, 17137072 bytes)
Prelude WithoutBiggest> testBounce remLargest 1600000
-6415  (0.65 secs, 123919124 bytes)
---------------------------------------------------------------------

and is only a little slower (except for ascending lists, where it's more than a little) than a faster implementation of your algorithm:

---------------------------------------------------------------------
Prelude WithoutBiggest> testDec remLargest3 1600000
1  (0.52 secs, 174594652 bytes)
Prelude WithoutBiggest> testAsc remLargest3 1600000
1599999  (0.08 secs, 115863632 bytes)
Prelude WithoutBiggest> testBounce remLargest3 1600000
-6415  (0.60 secs, 193472764 bytes)
---------------------------------------------------------------------

Except for ascending lists, the order-changing algorithm is *much* faster, however:

---------------------------------------------------------------------
Prelude WithoutBiggest> testDec withoutLargest 1600000
1  (0.16 secs, 116646100 bytes)
Prelude WithoutBiggest> testAsc withoutLargest 1600000
1599999  (0.12 secs, 116352076 bytes)
Prelude WithoutBiggest> testBounce withoutLargest 1600000
-6415  (0.21 secs, 134971508 bytes)
---------------------------------------------------------------------
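The testDec/testAsc/testBounce drivers are not shown in the thread; the printed results are only consistent with decreasing and ascending input lists, so the following is merely a guess at what they might have looked like (the "bouncy" generator in particular is invented here):

testDec, testAsc, testBounce :: ([Int] -> [Int]) -> Int -> Int
testDec f n    = last (f [n, n-1 .. 1])      -- strictly decreasing input
testAsc f n    = last (f [1 .. n])           -- strictly ascending input
testBounce f n = last (f (take n bouncy))    -- pseudo-random "bouncy" input
  where
    bouncy = iterate step 0
    step x = (x * 1103515245 + 12345) `mod` 65536 - 32768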
The idea of it is to carry the maximum in 'previous' and compare it with every element when recursing down the list. When the recursion reaches the end, it starts to return, and on every step it tells the previous step whether there was something bigger down its road of recursion or not. This way every step knows whether to drop its 'current' element -- this drop happens only once.
So the steps it takes are definitely 2n, because it rolls out, and then has to return all the way -- even to get the first element (because for it there's the question: "drop it or not?").
But these steps are comparatively expensive. And you must be careful not to be too strict, so you can start building the result before you've reached the end.
The order-not-retaining functions thus far are faster, taking only n steps.
Well, 'far' is relative.
The performance of Daniel's remLargest depends on the order of elements in the list: the best case is if the list grows like (<=), so there's no use of an accumulator and concatenation; the worst case is when the list is composed of strictly descending lists -- then the time it takes is 3n (n for traversing the list, n for reversing all sublists and n for concatenating).
Actually, there's no big difference between strictly descending lists and sufficiently bouncy lists. And it can be made a tiny bit faster by using

go post mx (y:ys)
  | mx < y = mx : foldl (flip (:)) (go [] y ys) post

instead of "mx : reverse post ++ go [] y ys".

Let's look a bit at your code now.
remLargest2 [] = []
remLargest2 (li:st) =
    if something_bigger_in_tail then li : result else result
  where
    ismax [] previous = ([], False)
    ismax (current:rest) previous =
        case (current_is_bigger_than_previous, but_something_even_bigger_in_tail) of
          (True,  True)  -> (current:newRest, True)
          (False, True)  -> (current:newRest, True)
          (False, False) -> (current:newRest, False)
          (True,  False) -> (newRest, True)   -- current is the biggest, let's leave it out
That 'case' is bad. Pattern matching on both Bools at once means you can't do anything until the second is available. Since that comes from another call to ismax, which again pattern matches on two Bools, you must wait until the recursion has reached the end of the list. In particular, you get nothing (but a heap-exhaustion) for infinite lists (the lazier variant produces the initial segment before the maximum fine and only hangs when that is reached). When the end is reached, the recursion is unwound, thunk after thunk is converted to a list-node (except for the maximum). I think the space behaviour is responsible for the worse than linear running time.
      where
        f = ismax rest
        current_is_bigger_than_previous = current > previous
        (newRest, but_something_even_bigger_in_tail) =
            if current_is_bigger_than_previous then f current else f previous
    (result, something_bigger_in_tail) = ismax st li
Another problem with the code is the wrong argument order of ismax: it's better to have the (usually small) argument which will often remain the same in the recursive call first, and the (usually larger) list which changes in every call second.

So let's make the algorithm lazier:

remLargest3 :: Ord a => [a] -> [a]
remLargest3 [] = []
remLargest3 (li:st)
  | somethingBigger = li : result
  | otherwise       = result
  where
    (somethingBigger, result) = ismax li st
    ismax pre (cur:rest)
      -- We can immediately know whether the current element
      -- is larger than the previous maximum. If that is the case,
      -- we can immediately signal to keep the previous elements.
      | pre < cur = let (evenBigger, newRest) = ismax cur rest
                    in (True, if evenBigger then cur:newRest else newRest)
      -- Otherwise, we know that the current element is kept.
      | otherwise = let (evenBigger, newRest) = ismax pre rest
                    in (evenBigger, cur:newRest)
    ismax _ [] = (False, [])

We now use every bit of information as soon as it is available.

For ascending lists, that means that we only traverse the list once, building the result list from front to back -- very fast.

For decreasing lists, we also build the result list while traversing the list from front to back, although we don't know yet that we don't prepend another item to it. On the way back to the front, we only need to pass False and have no other work until we finally see that we must drop the original head.

For bouncy lists, we can deliver chunks whenever we find a new maximum, until we find the global maximum, when the traversal pattern becomes that of a decreasing list.

A nice variant is passing functions instead of Bools; it's equally fast for decreasing and bouncy lists, but for reasons I don't see yet, it's slower for ascending lists:

remLargest4 :: Ord a => [a] -> [a]
remLargest4 [] = []
remLargest4 (li:st) = li `f` result
  where
    ign _ xs = xs
    (f, result) = ismax li st
    ismax pre (cur:rest)
      | pre < cur = ((:), let (g, more) = ismax cur rest in g cur more)
      | otherwise = let (g, more) = ismax pre rest in (g, cur:more)
    ismax _ [] = (ign, [])
Cool problem ;-)
Indeed 8-)
And we should do tests!
Participants (7):
- Chaddaï Fouché
- Daniel Fischer
- Felipe Lessa
- Lafras Uys
- Markus Läll
- matthew coolbeth
- Nathan Huesken