
Of course I can't be sure about speed. A recursive call still has to check all the guarded cases; computing the range of quantity up front does that comparison work at most one extra time, but then avoids checking the final failing case that the other solutions relied on. So profiling is definitely needed, since this moves the work of checking the guard rather than changing it.

BUG: I wrote "prefix" where I needed to write "concat". And I left the base cases off, since others have already shown what they are.

If you wanted a different return type using (coin,quantity) tuples, you could change (replicate quantity x) to (x,quantity) and use "prefix". Starting with the largest quantity makes it return the results in reverse order compared to most of the other examples, which may or may not be what is wanted. Assuming sorted, unique coins means I don't need Cale's (filter (<= x) coins) over and over again.
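For reference, here is a sketch of the approach described above, with the base cases filled in and the sort/nub preprocessing included. It is only a reconstruction, not the code originally posted; the names partitionCoins and partitions are mine.

import Data.List (nub, sort)

-- Precondition: the coin list is decreasing and duplicate-free.
partitionCoins :: [Int] -> Int -> [[Int]]
partitionCoins _      0     = [[]]   -- exact total reached: one (empty) partition
partitionCoins []     _     = []     -- coins exhausted short of the total: none
partitionCoins (x:xs) total =
  concat [ map (replicate quantity x ++) (partitionCoins xs (total - quantity * x))
         | quantity <- [total `div` x, total `div` x - 1 .. 0] ]
         -- the range of quantity is computed once per coin, largest first,
         -- so no recursive call has to reject an overshooting quantity

-- Establishing the precondition (sort descending, drop duplicates with nub):
partitions :: [Int] -> Int -> [[Int]]
partitions coins = partitionCoins (reverse (sort (nub coins)))

The (coin,quantity) variant mentioned above would then return lists of (Int,Int) pairs, prepending with ((x,quantity):) instead of appending replicate quantity x.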
Radu Grigore wrote:
> On 7/13/05, ChrisK wrote:
>> Sort the list of integers, highest at the front of the list. (And perhaps remove duplicates with nub)
> The first time I wrote in the comments that 'partition' takes a "decreasing list of integers..." and then I decided to drop "decreasing". Weakest precondition :)
>> When you pop the first element you can already compute the range of quantity you will need,
> Is that really faster? I wouldn't be sure without profiling..
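To make that question concrete: the shape being compared against is roughly one where every recursive call re-checks a guard instead of bounding the quantity once per coin. This is my own reconstruction for illustration (the name byGuard is mine), not any of the solutions actually posted in the thread:

-- Hypothetical guard-checking variant: the overshoot test runs on every
-- recursive call, whereas partitionCoins above pays for one `div` per coin.
-- Which of the two is faster in practice is exactly the profiling question.
byGuard :: [Int] -> Int -> [[Int]]
byGuard _      0     = [[]]
byGuard []     _     = []
byGuard (x:xs) total
  | total < 0 = []                                     -- the failing case, checked each call
  | otherwise = map (x:) (byGuard (x:xs) (total - x))  -- spend one more x ...
             ++ byGuard xs total                       -- ... or move on to the next coin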