
2008/11/13 Conal Elliott
As Magnus pointed out in his (very clever) paper, the Applicative interface allows for more precise/efficient tracking of dependencies, in that it eliminates accidental sequentiality imposed by the Monad interface. (Magnus didn't mention Applicative by name, as his paper preceded Idiom/Applicative.) However, I don't see an Applicative instance in the library.
I was wondering about that, actually. The specialized Applicative instance mentioned in the paper adds an additional storage cell to the computation graph, so I would expect that for simple computations the applicative instance could perform worse than "naive" monadic code. For example, given three adaptive references "ref1", "ref2", and "ref3" (so that reading each yields an Adaptive Int):

    do a <- read ref1
       b <- read ref2
       c <- read ref3
       return (a + b + c)

This computation adds three "read" nodes feeding the current "write" output. But this code:

    (\a b c -> a + b + c) <$> read ref1 <*> read ref2 <*> read ref3

with (<*>) taken to be the enhanced version of "ap" from the paper, would, I believe, allocate additional cells to hold the results of the function partially applied to the intermediate computations. Overall I think this would not be a win in this case, but it is also easy to construct cases where it *is* a win. I suppose you could allow the user to choose one or the other, but my general expectation for Applicative is that (<*>) should never perform *worse* than code that uses Control.Monad.ap.

Is this the right understanding, or am I totally missing something from the Adaptive paper?

  -- ryan
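For concreteness, here is a minimal, self-contained Haskell sketch of the two shapes of code being compared. The read operation is passed in as a parameter ("readRef") rather than taken from the Adaptive library, so the sketch does not assume the library's actual names; the comments only restate the node-allocation behaviour described above, not anything measured.

    import Control.Applicative ((<$>), (<*>))

    -- Monadic style: three sequential reads feeding one combining write.
    sumM :: Monad m => (ref Int -> m Int) -> ref Int -> ref Int -> ref Int -> m Int
    sumM readRef ref1 ref2 ref3 = do
      a <- readRef ref1          -- each read contributes one "read" node
      b <- readRef ref2
      c <- readRef ref3
      return (a + b + c)         -- a single "write" node combines the results

    -- Applicative style: the same sum via (<*>). With the paper's enhanced
    -- (<*>), each application may allocate an extra cell holding the
    -- partially applied function.
    sumA :: Applicative m => (ref Int -> m Int) -> ref Int -> ref Int -> ref Int -> m Int
    sumA readRef ref1 ref2 ref3 =
      (\a b c -> a + b + c) <$> readRef ref1 <*> readRef ref2 <*> readRef ref3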