The current categories package tries to let you reuse bifunctors of the form c * d -> e across base categories, so that e.g. Kleisli m can use (,) for its product (in general (,) isn't a real product for Kleisli m, but let's pretend). This means we have to adopt a 3-for-2 policy with respect to the categories involved: functional dependencies c e -> d, c d -> e, d e -> c, so that any two determine the third. But when inferring from first or second you usually know only one category, and while you only need one other, the third can't be recovered from context, since no arrow in it is supplied. It is underdetermined.
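The 3-for-2 fundep triangle can be sketched roughly like this (a hedged sketch with illustrative names, not the actual categories API; note how first only supplies an arrow in c, leaving the other categories to be found from context):

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}
module Main where

import Prelude hiding (id, (.))
import Control.Category

-- A bifunctor spanning three categories, with fundeps playing the
-- "any two determine the third" role described above.
class (Category c, Category d, Category e) => Bifunctor p c d e
    | p c d -> e, p c e -> d, p d e -> c where
  bimap :: c a b -> d x y -> e (p a x) (p b y)

-- The ordinary Hask instance: all three categories are (->).
instance Bifunctor (,) (->) (->) (->) where
  bimap f g (a, x) = (f a, g x)

-- 'first' supplies an arrow only in c; d never shows up in the body,
-- so without the fundeps it would be entirely undetermined.
first :: Bifunctor p c d e => c a b -> e (p a x) (p b x)
first f = bimap f id

main :: IO ()
main = print (first (+1) (1 :: Int, "x"))
```

Here the use site pins down p, c, and e, and the p c e -> d fundep supplies d; drop that dependency and even this trivial example fails to infer.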
Hask avoids this by presupposing a single set of categories for every Bifunctor, but it is a conceit of the package that this is enough in practice, not something I know, and we pay a measurable price for it: a lot of things become more complex there. The current hask design, as opposed to the one I gave a talk on and to the design of categories, loses a lot of what powers lens and the like in exchange for this simpler story. I didn't make that change lightly, and I'd hesitate to make it across the entire ecosystem. It hasn't yet proved itself worth the burden.
In exchange for this complexity, hask is able to find simplicity elsewhere: e.g. its Bifunctor is a derived concept, a Functor into a Functor category, which models the curried nature of a Bifunctor's arguments.
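The "functor into a functor category" idea can be sketched as follows (FunctorC, Nat, and firstC are hypothetical names for illustration, not hask's real API):

```haskell
{-# LANGUAGE RankNTypes, MultiParamTypeClasses, PolyKinds, FlexibleInstances #-}
module Main where

-- Natural transformations: the arrows of a functor category.
newtype Nat f g = Nat { runNat :: forall x. f x -> g x }

-- A functor between two arbitrary categories c and d.
class FunctorC c d f where
  fmapC :: c a b -> d (f a) (f b)

-- A bifunctor is then just a functor into the functor category:
-- fmapC sends an arrow a -> b to a natural transformation (,) a ~> (,) b,
-- modelling the curried nature of the bifunctor's arguments.
instance FunctorC (->) Nat (,) where
  fmapC f = Nat (\(a, x) -> (f a, x))

-- 'first' for any such curried bifunctor is derived, not primitive.
firstC :: FunctorC c Nat p => c a b -> p a x -> p b x
firstC f = runNat (fmapC f)

main :: IO ()
main = print (firstC (+1) (1 :: Int, True))
```

Only two categories (the source and the target of the outer Functor) appear in the class head, which is how this framing sidesteps the three-way inference problem, at the cost of the extra Nat machinery.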
My main point is that there are a lot of points in the design space, and I don't think we're equipped to find a clear winner right now.
Oh darn, really? That is so disappointing. Why can't it maintain inference? :\