
Minor correction to avoid confusing anyone. I was wrong about lifting more
than lowering for ix. But if you were implementing something that traverses
multiple (or even all) elements, that could be significant, I think.
On Apr 28, 2016 6:17 PM, "Edward Kmett" wrote:
Good catch. We might see if Coyoneda works better than Yoneda for `fusing` in lens in practice as well.
-Edward
On Thu, Apr 28, 2016 at 2:48 PM, David Feuer wrote:
AHA! I was wrong! There *is* (some) hope! I should've realized this earlier. The trick is to use Coyoneda instead of Yoneda. Coyoneda is effectively lazier than Yoneda, so nodes can become available as soon as they're passed. I suspect it may also be helpful that while liftYoneda is pricier than lowerYoneda for non-trivial functors, the opposite holds for Coyoneda. Since adjustA does a lot more lifting than lowering, Coyoneda seems like a better fit. A thoroughly straightforward Coyoneda-based implementation offers *decent* (although not wonderful) viewing performance. But as soon as it's passed something that reads *and* modifies, it does a little better than the fake version.
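Concretely, the lift/lower asymmetry comes straight from the definitions (spelled out here as in the kan-extensions package; this is just a sketch for reference, not the adjustA code itself):

    {-# LANGUAGE GADTs, RankNTypes #-}

    -- Yoneda: lowering is essentially free (apply to id);
    -- the functor's real fmap lives in liftYoneda.
    newtype Yoneda f a = Yoneda { runYoneda :: forall r. (a -> r) -> f r }

    liftYoneda :: Functor f => f a -> Yoneda f a
    liftYoneda fa = Yoneda (\k -> fmap k fa)

    lowerYoneda :: Yoneda f a -> f a
    lowerYoneda (Yoneda m) = m id

    -- Coyoneda: lifting is free, fmap just composes functions,
    -- and the one real fmap happens in lowerCoyoneda.
    data Coyoneda f a where
      Coyoneda :: (b -> a) -> f b -> Coyoneda f a

    instance Functor (Coyoneda f) where
      fmap g (Coyoneda k fb) = Coyoneda (g . k) fb

    liftCoyoneda :: f a -> Coyoneda f a
    liftCoyoneda = Coyoneda id

    lowerCoyoneda :: Functor f => Coyoneda f a -> f a
    lowerCoyoneda (Coyoneda k fb) = fmap k fb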
On Wed, Apr 27, 2016 at 10:15 PM, David Feuer wrote:
Using Yoneda greatly improved performance for Identity, and raised the performance for [] a little above that of the "fake" version lens currently uses. But the performance for simply viewing an element is awful (it takes about twice as long as `index`), and I'm beginning to doubt there's any solution that will get the (small) speed-up in some cases without getting a (large) slow-down elsewhere. Oh well.
On Wed, Apr 27, 2016 at 6:39 PM, Edward Kmett wrote:
That transformation is effectively what 'fusing' does. It CPSes (er.. Yonedas, actually) the functor `f` as forall r. (a -> r) -> f r, and accumulates the changes by composing in more functions, so that it can all be discharged with one fmap. You should be able to achieve the same result by hand with a manual worker/wrapper transformation.
-Edward
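A minimal sketch of that worker/wrapper trick, written against a generic van Laarhoven lens (the name fuseWithYoneda is made up for illustration; lens's `fusing` is essentially this):

    {-# LANGUAGE RankNTypes #-}

    import Data.Functor.Yoneda (liftYoneda, lowerYoneda)  -- from kan-extensions

    -- Run the optic with its functor wrapped in Yoneda: every fmap the optic
    -- performs becomes mere function composition, and the single real fmap on
    -- `f` happens with the fully composed function when the result is lowered.
    fuseWithYoneda
      :: Functor f
      => (forall g. Functor g => (a -> g b) -> s -> g t)
      -> (a -> f b) -> s -> f t
    fuseWithYoneda l f s = lowerYoneda (l (liftYoneda . f) s)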
On Thu, Apr 28, 2016 at 8:20 AM, David Feuer wrote:
I was thinking switching to CPS might help, passing the context down on the way to the leaf, so the tree is built "inside" the functor. This is easy for nodes and digits, but I'm struggling with the polymorphic recursion in the spine right now. I think I'll probably figure it out eventually.
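To make "passing the context down" concrete, here is the idea on a plain binary tree. This is only a toy, since the real Data.Sequence spine is polymorphically recursive and caches sizes:

    -- Toy illustration only: an ordinary binary tree, not the polymorphically
    -- recursive FingerTree spine discussed above.
    data Tree a = Leaf a | Node (Tree a) (Tree a)

    -- Thread the rebuilding context down as a continuation and perform a
    -- single fmap at the leaf, instead of fmapping a rebuilder at every Node
    -- on the way back up. Assumes the index is in range.
    adjustAt :: Functor f => (a -> f a) -> Int -> Tree a -> f (Tree a)
    adjustAt f i0 t0 = go i0 t0 id
      where
        go _ (Leaf a)   k = fmap (k . Leaf) (f a)
        go i (Node l r) k
          | i < sizeL = go i l (\l' -> k (Node l' r))
          | otherwise = go (i - sizeL) r (\r' -> k (Node l r'))
          where sizeL = size l

        size (Leaf _)   = 1
        size (Node l r) = size l + size r

In the real structure the sizes are cached in the nodes and the element type changes at each spine level, which is what makes the spine case harder than nodes and digits.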
On Apr 27, 2016 6:12 PM, "Edward Kmett" wrote:
For cases where the Functor winds up particularly complicated or won't be known at compile time so the inlining can't help, it turns out you can usually work around this with the 'fusing' combinator from the lens library. This works around the repeated stacks of `fmap` that get built up otherwise, transforming everything into one gigantic fmap all in one go.
There is a similar `confusing` combinator for working with traversals. The implementation lives up to the name.
-Edward
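For reference, using them is just a matter of wrapping an existing optic (nothing below beyond what lens exports):

    import Control.Lens (confusing, fusing, over, _2)

    -- `fusing` wraps a lens so its internal fmaps are accumulated and
    -- discharged in one go; `confusing` is the analogue for traversals.
    bumpSnd :: (Bool, Int) -> (Bool, Int)
    bumpSnd = over (fusing _2) (+ 1)

    bumpAll :: [Int] -> [Int]
    bumpAll = over (confusing traverse) (+ 1)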
On Wed, Apr 27, 2016 at 5:19 PM, wrote:

> On Tue, Apr 26, 2016, at 15:03, David Feuer wrote:
>> Way back when, Shachaf Ben-Kiki suggested an efficient implementation
>> of `at` for Data.Map in
>> https://mail.haskell.org/pipermail/libraries/2013-May/019761.html but
>> that was never approved for inclusion.
>
> Funny thing, I just tried doing that not too long ago with `at`:
>
> https://github.com/haskell/containers/pull/192
>
> In the end, the results were mixed. It was a small optimization in some
> situations, and a moderate pessimization in others (esp. when the
> Functor becomes too complex for GHC to optimize).
>
> --
> RF
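For anyone who doesn't follow the links: the idea being referred to is, roughly, a single-pass, functor-polymorphic `at`, the same shape as Data.Map's alterF. Here is a sketch of that shape on a toy unbalanced search tree; the real version also has to rebalance, and the names atF and glue are made up for illustration:

    -- Toy unbalanced search tree, not Data.Map proper (no rebalancing here).
    data Map k a = Tip | Bin k a (Map k a) (Map k a)

    -- One pass down to the key, with the insert/modify/delete decision made
    -- inside the caller's Functor.
    atF :: (Functor f, Ord k)
        => k -> (Maybe a -> f (Maybe a)) -> Map k a -> f (Map k a)
    atF k f = go
      where
        go Tip = fmap (maybe Tip (\a -> Bin k a Tip Tip)) (f Nothing)
        go (Bin kx x l r) = case compare k kx of
          LT -> fmap (\l' -> Bin kx x l' r) (go l)
          GT -> fmap (\r' -> Bin kx x l r') (go r)
          EQ -> fmap (maybe (glue l r) (\x' -> Bin kx x' l r)) (f (Just x))

        -- On deletion, stitch the subtrees back together (keys in l < keys in r).
        glue Tip r = r
        glue (Bin kx x l r') r = Bin kx x l (glue r' r)

Note the fmap at every level on the way back up: when the Functor is complicated or unknown, those stacked fmaps are exactly the cost mentioned above, and what the Yoneda/`fusing` approach collapses into a single fmap.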