
John Meacham wrote:
On Sat, Jul 04, 2009 at 01:08:41AM +0200, Henning Thielemann wrote:
Ross Paterson wrote:
On Tue, Jun 30, 2009 at 01:37:05PM +0200, Henning Thielemann wrote:
This sounds like a rather ad-hoc extension of the already complicated hierarchy of those category classes. Are there particular examples where the specialisation is needed and where it cannot be done by optimizer rules?

Here's one for (<$). In Data.Sequence, I could define
x <$ s = replicate (size s) x
(using Louis Wasserman's replicate), which would take O(log n) time and space, a big improvement over the O(n) version using const and fmap. Would it be reasonable to let the optimizer replace (x <$ s) by (replicate (size s) x) via RULES?
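As a rough sketch (not from the original mail), the RULES-based rewrite Ross asks about might look like the following, assuming the current Data.Sequence API (Seq.length, Seq.replicate) and a made-up module name. Whether such a rule actually fires depends on how (<$) is inlined; in current GHC, (<$) is a class method, and rules on class methods rarely match, which is part of the reliability concern raised below.

module SeqConstFill where

import Data.Functor ((<$))
import Data.Sequence (Seq)
import qualified Data.Sequence as Seq

-- Rewrite the generic (x <$ s) into the O(log n) replicate form
-- when the container is a Seq.  Illustration only; it may not fire.
{-# RULES
"<$/Seq" forall x (s :: Seq a).
    x <$ s = Seq.replicate (Seq.length s) x
  #-}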
I don't like using RULES for optimizations that actually change the computational or space complexity of code.
It was said that the new methods should equal the default definitions, that is, they cannot be "optimized too much"; e.g. the specialized definitions are not allowed to produce something defined where the default definition is undefined. That is exactly the situation RULES are made for. It is a pity that the application of RULES is so unreliable, and maybe that should be seriously improved.

However, when doing optimization via type class methods, I see the danger that, after splitting all the standard type classes down to one method per class over the past years, we will see another flood of type classes extended with specialized functions, maybe followed by new splits. Maybe such a development is a good thing, but then again, we still have no good tools to keep in sync with all these modifications.
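For comparison, here is a minimal sketch of the class-method route being weighed here (again not from the thread), with a made-up MyFunctor class and (<$.) operator so it does not clash with the real Functor: the operator gets a default definition, and the Seq instance specialises it without changing its meaning.

module ClassMethodSketch where

import Data.Sequence (Seq)
import qualified Data.Sequence as Seq

class MyFunctor f where
    myFmap :: (a -> b) -> f a -> f b

    -- default definition; an instance may override it for efficiency,
    -- but the override has to stay semantically equal to this
    (<$.) :: b -> f a -> f b
    x <$. s = myFmap (const x) s

instance MyFunctor Seq where
    myFmap = fmap
    -- O(log n): built from the length of s alone; in particular it must not
    -- be defined on inputs where the default myFmap (const x) s is undefined
    x <$. s = Seq.replicate (Seq.length s) x

Unlike a RULE, the instance definition is guaranteed to be used; the price is the growing class interface described above.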