
On Tue, Oct 17, 2006 at 02:05:25AM -0700, John Meacham wrote:
> a vital optimization involves finding common sub-parsers, for which it
> performs a normalization pass and then a hash-consing to find common
> parts. Since there is no way to figure out whether two functions are
> equal, it is forced to be pessimistic and assume they are distinct. If
> this occurs too often, it creates a blow-up in the final number of
> rules, to the point that the parser is no longer practical to use.
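(For concreteness, here is a minimal sketch of why function equality
blocks the sharing, assuming a hypothetical GADT grammar representation
-- not your library's actual internals:

    {-# LANGUAGE GADTs #-}

    -- Hypothetical grammar nodes (illustration only)
    data P a where
      Terminal :: Char -> P Char
      Seq      :: P a -> P b -> P (a, b)
      Map      :: (a -> b) -> P a -> P b

    -- Hash-consing needs an equality test on nodes.  It can compare
    -- function-free structure, but a Map node carries an opaque
    -- function, and there is no Eq instance for (a -> b), so the only
    -- safe answer is "not equal".
    sameP :: P a -> P b -> Bool
    sameP (Terminal c) (Terminal d) = c == d
    sameP (Seq p q)    (Seq r s)    = sameP p r && sameP q s
    sameP _            _            = False  -- Map nodes never compare equal

Every Map node therefore defeats sharing, which is where the rule
blow-up comes from.)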
I imagine you'd have a similar problem with pure -- perhaps you don't
use it much. It seems the problem is that the Applicative interface
encourages things like

    pure f <*> p <*> q <*> r

in which the left-associative <*> buries the function in the parser,
while you'd prefer

    fmap (\ ((x,y),z) -> f x y z) (p <> q <> r)

(where p <> q is equivalent to pure (,) <*> p <*> q), so presumably
you'd want <> as a method in Applicative too.
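A self-contained sketch of that pairing combinator, keeping the name <>
from above (note that <> clashes with the Semigroup operator in modern
Preludes, so the sketch hides it):

    import Prelude hiding ((<>))

    -- Pairing combinator: same fixity as (<*>), so p <> q <> r
    -- parses as (p <> q) <> r :: f ((a, b), c)
    infixl 4 <>
    (<>) :: Applicative f => f a -> f b -> f (a, b)
    p <> q = pure (,) <*> p <*> q

    -- Left-associative (<*>): the function f sits at the head of the
    -- chain, buried inside the parser structure.
    buried :: Applicative f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d
    buried f p q r = pure f <*> p <*> q <*> r

    -- Pairing style: (p <> q <> r) is function-free, so common
    -- sub-parsers remain structurally comparable; f is applied once
    -- on the outside.
    outside :: Applicative f => (a -> b -> c -> d) -> f a -> f b -> f c -> f d
    outside f p q r = fmap (\ ((x, y), z) -> f x y z) (p <> q <> r)

Both produce the same results for any law-abiding Applicative; the
difference is only in how much of the grammar is visible as first-order
structure.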