
The standard, naïve approach to monadic parsing is very nice, but inefficient. So *please read* some material based on the Hutton & Meijer approach, but don't stop there; read something more modern.
since we thereby seem to have left the phase of simple answers to simple questions ;-), i'd like to raise a pet issue of mine. my own first combinator parsers (inspired by Wadler's "How to replace failure by a list of successes", but adapted to a call-by-value language) were based on continuations.

ok, by now everybody has had time to chime in with "monadic parsers are based on continuations" or "continuations are just one specific monad". so let me return to the particular issue i'm interested in: unlike monadic parsers, those continuation-based parsers had *two* continuations, one for success and one for failure, and that seemed a very natural match for the problem.

for all that i like monadic programming in general, i often feel that it is biased towards handling only the success path well, by offering built-in support for a single continuation only. for instance, one can use (Either String) as a parser monad with error messages, but it isn't straightforward to express error handling in that format while preserving both success- and failure-related information (such as reporting the error corresponding to the longest partially successful parse).

also, negation does not seem to be an easy fit (succeed if a specific parser would *not* succeed at the current point); this seems to require monad-specific information, so perhaps there's a MonadNegate class missing? (i've appended two small sketches below to illustrate what i mean.)

has anyone else had similar experiences with expressive limitations of monadic programming? things that one can work around, but that don't feel as natural or simple as they should be? things that one hasn't been able to express at all (such as Swierstra & Duponcheel's static analysis of combinator parsers, which inspired Hughes's proposal to use arrows)?

claus
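
p.s. to make the comparison concrete, here is a minimal sketch of the two-continuation style i have in mind. it is not from any particular library, the names (P, item, orElse, notP) are all mine, and it aims at clarity rather than efficiency:

  {-# LANGUAGE RankNTypes #-}

  type Input = String

  -- a parser is handed a success continuation and a failure continuation
  newtype P a = P { runP :: forall r.
                       (a -> Input -> r)    -- success continuation
                    -> (Input -> r)         -- failure continuation
                    -> Input
                    -> r }

  instance Functor P where
    fmap f p = P (\ok err inp -> runP p (ok . f) err inp)

  instance Applicative P where
    pure x    = P (\ok _ inp -> ok x inp)
    pf <*> px = P (\ok err inp ->
                     runP pf (\f inp' -> runP px (ok . f) err inp') err inp)

  instance Monad P where
    p >>= f = P (\ok err inp ->
                   runP p (\x inp' -> runP (f x) ok err inp') err inp)

  -- consume any single character
  item :: P Char
  item = P (\ok err inp -> case inp of
              (c:cs) -> ok c cs
              []     -> err inp)

  -- choice: if p fails, try q from the same input
  orElse :: P a -> P a -> P a
  p `orElse` q = P (\ok err inp -> runP p ok (\_ -> runP q ok err inp) inp)

  -- negation: succeed (consuming no input) iff p would fail here;
  -- with both continuations to hand, this is just a matter of swapping them
  notP :: P a -> P ()
  notP p = P (\ok err inp -> runP p (\_ _ -> err inp) (\_ -> ok () inp) inp)

  runParser :: P a -> Input -> Maybe (a, Input)
  runParser p = runP p (\x rest -> Just (x, rest)) (const Nothing)

  -- e.g. runParser (notP item) ""              ~> Just ((), "")
  --      runParser (item `orElse` pure '?') "" ~> Just ('?', "")

with the failure continuation explicitly available, orElse and notP fall out by rearranging continuations; in a purely monadic interface the failure path stays hidden inside the representation, which is exactly the asymmetry i was grumbling about.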
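
p.p.s. and here is the kind of thing i meant about (Either String): to report the error of the longest partially successful parse, the error value itself has to carry position information, and the choice combinator has to compare positions by hand; none of this is offered by the generic monadic interface. again only a sketch, all names mine, with the Monad instance left out since the point is only about what choice needs to see:

  type Pos = Int

  -- an Either-based parser whose errors remember how far the parse got
  type EParser a = Pos -> String -> Either (Pos, String) (a, Pos, String)

  -- accept one specific character
  charE :: Char -> EParser Char
  charE c pos (x:xs) | x == c = Right (c, pos + 1, xs)
  charE c pos _               = Left (pos, "expected " ++ show c)

  -- choice keeps the error of whichever alternative got further
  orElseE :: EParser a -> EParser a -> EParser a
  orElseE p q pos inp =
    case p pos inp of
      Right r         -> Right r
      Left e1@(p1, _) -> case q pos inp of
        Right r         -> Right r
        Left e2@(p2, _) -> Left (if p1 >= p2 then e1 else e2)

a sequencing combinator would have to thread the position along in the same hand-rolled way; wrapping this up as a monad gives back the usual interface for the success path, but the "keep the deeper error" policy still lives outside anything the monadic interface can talk about.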