It seems to me that lower-arity definitions could be a good thing.

They'd permit the inliner to pull the definition in within more contexts.

There is a minor subtlety around strictness in your example for Maybe.

There,

    (<*>) undefined = undefined

whereas with the normal arity-2 definition,

    (<*>) undefined = const undefined

But this is one of those rare cases where I don't think anyone can build a viable argument that they've ever relied on that particular bit of laziness, and in fact the extra eta-expansion wrapper has probably been a source of subtle optimization pain for a long time.
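
To make the difference concrete: here's a small sketch (apOne and apTwo are just local stand-in names for the two styles of definition, nothing from base), where the distinction is observable with seq, at least before the optimizer gets involved:

    import Control.Exception (SomeException, evaluate, try)

    -- Standard arity-2 shape, like the Maybe instance in base today
    apTwo :: Maybe (a -> b) -> Maybe a -> Maybe b
    apTwo (Just f) m = fmap f m
    apTwo Nothing  _ = Nothing

    -- Arity-1 shape, as in David's suggested definition
    apOne :: Maybe (a -> b) -> Maybe a -> Maybe b
    apOne (Just f) = fmap f
    apOne Nothing  = const Nothing

    -- Force the partial application to WHNF and report whether it is bottom.
    probe :: String -> (Maybe (Int -> Int) -> Maybe Int -> Maybe Int) -> IO ()
    probe name op = do
      r <- try (evaluate (op undefined `seq` ())) :: IO (Either SomeException ())
      putStrLn $ name ++ ": " ++ case r of
        Left _  -> "partial application is bottom"
        Right _ -> "partial application is a function value"

    main :: IO ()
    main = do
      probe "arity 2" apTwo   -- partial application is already a value (a lambda)
      probe "arity 1" apOne   -- pattern match forces the undefined argument

The arity-2 probe reports a function value, while the arity-1 probe reports bottom.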

Consider me a tentative +1 unless someone can come up with a deal breaker.

-Edward

On Sun, Nov 2, 2014 at 4:29 AM, David Feuer <david.feuer@gmail.com> wrote:
http://hackage.haskell.org/package/base-4.7.0.1/docs/Control-Applicative.html says that

    The other methods have the following default definitions,
    which may be overridden with equivalent specialized
    implementations:

    u *> v = pure (const id) <*> u <*> v


and

    If f is also a Monad, it should satisfy

    ...
    (<*>) = ap


The (potential) trouble is that these have higher arities than is always natural. For example, it would seem reasonable to say

    (<*>) (Just f) = fmap f
    (<*>) Nothing  = const Nothing

and to replace the default definition of (*>) like so:

    (*>) a1 = (<*>) (id <$ a1)

but these are not strictly equivalent because they have arity 1 instead of arity 2. Would such definitions be better in some cases? If so, should we weaken the rules a bit?
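
For concreteness, an instance written in that lower-arity style might look like this (M is just an illustrative newtype so as not to clash with the real Maybe instance):

    newtype M a = M (Maybe a)

    instance Functor M where
      fmap f (M m) = M (fmap f m)

    instance Applicative M where
      pure = M . Just
      -- arity-1 definitions: pattern match on the first argument only
      (<*>) (M (Just f)) = fmap f
      (<*>) (M Nothing)  = const (M Nothing)
      -- arity-1 replacement for the default (*>)
      (*>) a1 = (<*>) (id <$ a1)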
