It seems to me that lower-arity definitions could be a good thing. They'd permit the inliner to pull the definition into more contexts.

There is a minor subtlety around strictness in your example for Maybe. With your definition:

    (<*>) undefined = undefined

whereas with the normal definition

    (<*>) undefined = const undefined

But this is one of those rare cases where I don't think anyone can build a viable argument that they've ever relied on that particular bit of laziness, and in fact, the extra eta-expansion wrapper has probably been a source of subtle optimization pain for a long time.
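To make that difference concrete, here is a small self-contained sketch; etaVersion and lowArityVersion are made-up names standing in for the current arity-2 default and the proposed arity-1 definition, and the behaviour described assumes GHC's usual treatment of partial applications:

    import Control.Exception (SomeException, evaluate, try)

    -- Stand-in for the current default: pattern matching happens only
    -- once both arguments have arrived, so any partial application is
    -- already a function (in WHNF).
    etaVersion :: Maybe (a -> b) -> Maybe a -> Maybe b
    etaVersion (Just f) x = fmap f x
    etaVersion Nothing  _ = Nothing

    -- Stand-in for the proposed arity-1 definition: it must force its
    -- first argument before it can return anything at all.
    lowArityVersion :: Maybe (a -> b) -> Maybe a -> Maybe b
    lowArityVersion (Just f) = fmap f
    lowArityVersion Nothing  = const Nothing

    main :: IO ()
    main = do
      -- Fine: etaVersion undefined is a partial application, so seq
      -- finds it already evaluated.
      print (etaVersion undefined `seq` "eta version is lazy")
      -- Throws: lowArityVersion undefined is itself undefined.
      r <- try (evaluate (lowArityVersion undefined `seq` ()))
      print (r :: Either SomeException ())

Running this prints the string first and then a Left wrapping the undefined exception, which is exactly the (<*>) undefined = undefined behaviour above.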
Consider me a tentative +1 unless someone can come up with a deal breaker.

-Edward

On Sun, Nov 2, 2014 at 4:29 AM, David Feuer <david.feuer@gmail.com> wrote:

http://hackage.haskell.org/package/base-4.7.0.1/docs/Control-Applicative.html says that

    The other methods have the following default definitions, which may be overridden with equivalent specialized implementations:
        u *> v = pure (const id) <*> u <*> v
        ...

and

    If f is also a Monad, it should satisfy
    ...

        (<*>) = ap

The (potential) trouble is that these have higher arities than is always natural. For example, it would seem reasonable to say

    (<*>) (Just f) = fmap f
    (<*>) Nothing  = const Nothing

and to replace the default definition of (*>) like so:

    (*>) a1 = (<*>) (id <$ a1)

but these are not strictly equivalent, because they have arity 1 instead of arity 2. Would such definitions be better in some cases? If so, should we weaken the rules a bit?
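For concreteness, here is what those lower-arity definitions might look like in a complete instance. MyMaybe is a hypothetical wrapper type, used only so the instance can live alongside the one for Maybe in base:

    -- Hypothetical wrapper so we can give our own instances.
    newtype MyMaybe a = MyMaybe (Maybe a)
      deriving Show

    instance Functor MyMaybe where
      fmap f (MyMaybe m) = MyMaybe (fmap f m)

    instance Applicative MyMaybe where
      pure = MyMaybe . Just
      -- Arity-1 equations: the result is fully determined once the
      -- first argument is known; the second is never matched on.
      (<*>) (MyMaybe (Just f)) = fmap f
      (<*>) (MyMaybe Nothing)  = const (MyMaybe Nothing)
      -- The proposed (*>), also arity 1.
      (*>) a1 = (<*>) (id <$ a1)

With this, MyMaybe (Just (+1)) <*> MyMaybe (Just 2) evaluates to MyMaybe (Just 3), while (<*>) (MyMaybe undefined) `seq` () now diverges, which is exactly the strictness change discussed above.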
_______________________________________________
Libraries mailing list
Libraries@haskell.org
http://www.haskell.org/mailman/listinfo/libraries