
First post. I'm a newbie, been using Haskell for about a week and love it. Anyway, this is something I don't understand. Parsers are monadic. I can see this if the parser is reading from an input stream but if there's just a block of text can't you just have the parser call itself recursively feeding the unparsed text down the recursion and tacking nodes onto the tree as the recursions return, finally returning the whole tree to the top level caller. Seems this meets the criteria for pure functionality, same result every time with same args. myParser :: [Char] -> ParseTree ____________________________________________________________________________________ Get the free Yahoo! toolbar and rest assured with the added security of spyware protection. http://new.toolbar.yahoo.com/toolbar/features/norton/index.php

Hi Gregory,
First post. I'm a newbie, been using Haskell for about a week and love it. Anyway, this is something I don't understand. Parsers are monadic. I can see this if the parser is reading from an input stream but if there's just a block of text can't you just have the parser call itself recursively feeding the unparsed text down the recursion and tacking nodes onto the tree as the recursions return, finally returning the whole tree to the top level caller. Seems this meets the criteria for pure functionality, same result every time with same args.
Your intuition is right - it's definitely possible to write a parser without using a monad. However, some experience has shown that the most convenient way to structure a recursive descent parser library is with monads (or maybe with arrows, a more general concept). I imagine your suspicion is the result of equating monads with effects. Monads serve many purposes besides dealing with effects elegantly. Many new haskell hackers get their first taste of monads with IO, and get the impression that this is the One True Purpose of monadic programming. You might want to have a look at the paper Monadic Parser Combinators by Graham Hutton and Erik Meijer. This paper serves as a great introduction to both parser combinators and monads, and shows how they are related: http://www.cs.nott.ac.uk/~gmh/bib.html#monparsing The first 10 or 12 pages are particularly helpful in figuring out why monads make sense in parsing. Good luck! --Chris

Big Chris writes to Gregory, who posts:
... something I don't understand. Parsers are monadic. I can see this if the parser is reading from an input stream but if there's just a block of text can't you just have the parser call itself recursively feeding the unparsed text down the recursion and tacking nodes onto the tree as the recursions return, finally returning the whole tree to the top level caller. Seems this meets the criteria for pure functionality, same result every time with same args.
Your intuition is right - it's definitely possible to write a parser without using a monad. However, some experience has shown that the most convenient way to structure a recursive descent parser library is with monads (or maybe with arrows, a more general concept).
I imagine your suspicion is the result of equating monads with effects. Monads serve many purposes besides dealing with effects elegantly. Many new haskell hackers get their first taste of monads with IO, and get the impression that this is the One True Purpose of monadic programming.
You might want to have a look at the paper Monadic Parser Combinators by Graham Hutton and Erik Meijer. /.../
To make it shorter. The first usage of monads in parsing was *not* related to IO, but to the non-determinism of top-down parsers, the usage of lazy lists to store the possible partial parses (and to signal the possible ambiguities). People who used (or taught) this strategy using before, say, Prolog, found it remarkably clear. The standard, naïve approach to monadic parsing is very nice, but inefficient. So *please read* some material based on Hutton&Meijer approach, but don't stay there, read something more modern, for example the PARSEC business http://legacy.cs.uu.nl/daan/download/papers/parsec-paper-letter.pdf http://research.microsoft.com/~emeijer/Papers/Parsec.pdf and also find out a bit about the arrow formalism, as present within the Swierstra & Duponcheel approach to parsing (they, themselves didn't use arrows, but they inspired John Hughes. Unless I am confusing things because of my age...). Anyway, concerning what Big Chris calls a "more general concept": don't be confused, the parsing strategy of S&D is *NOT* monadic. Thera are arrows which are monadic, and others which are not. http://www.cs.uu.nl/people/doaitse/Papers/1996/DetErrCorrComPars.pdf == Jerzy Karczmarczuk

The standard, naïve approach to monadic parsing is very nice, but inefficient. So *please read* some material based on Hutton&Meijer approach, but don't stay there, read something more modern,
since we thereby seem to have left the phase of simple answers to simple questions;-) i'd like to raise a pet issue of mine. my own first combinator parsers (inspired by Wadler's "How to replace failure by a list of successes", but adapted to a call-by-value language) were based on continuations. .. ok, now everybody has had time to chime in with "monadic parsers are based on continuations" or "continuations are just one specific monad". so let me return to the particular issue i'm interested in: contrary to monadic parsers, those continuation-based parsers had *two* continuations, one for success, one for failure. and that seemed to be a very natural match for the problem. for all that i like monadic programming in general, i often feel that it is biased towards handling only the success path well, by offering built-in support for a single continuation only. for instance, one can use (Either String) as a parser monad with error messages, but it isn't straightforward to express error handling into that format, preserving both success and failure- related info (such as reporting the error corresponding to the longest partially successful parse). also, negation does not seem to be an easy fit (succeed if a specific parser would not be successful at the current point; this seems to require monad-specific information, so perhaps there's a MonadNegate class missing?). has anyone else had similar experiences with expressive limitations of monadic programming? things that one might be able to work around, but that don't feel as natural or simple as they should be? things that one hasn't been able to express at all (such as Swierstra & Duponcheel's static analysis of combinator parsers which inspired Hughes's proposal to use arrows)? claus

On Sat, 30 Jun 2007, Claus Reinke wrote:
for all that i like monadic programming in general, i often feel that it is biased towards handling only the success path well, by offering built-in support for a single continuation only.
Certainly one path gets privileged over the others, I don't know I'd go so far as saying none get treated well though.
for instance, one can use (Either String) as a parser monad with error messages, but it isn't straightforward to express error handling into that format, preserving both success and failure- related info (such as reporting the error corresponding to the longest partially successful parse). also, negation does not seem to be an easy fit (succeed if a specific parser would not be successful at the current point; this seems to require monad-specific information, so perhaps there's a MonadNegate class missing?).
Have you used Parsec? The error model is not quite as general as it could be (hmm, perhaps this might be worth a day's hacking for Paolo as part of SoC if he's interested?) but it's certainly suitably compositional and I could start sketching out how to generalise it easily enough. Negation pretty much just works. The paper linked to elsethread explains the mechanisms reasonably well IMO.
has anyone else had similar experiences with expressive limitations of monadic programming? things that one might be able to work around, but that don't feel as natural or simple as they should be? things that one hasn't been able to express at all (such as Swierstra & Duponcheel's static analysis of combinator parsers which inspired Hughes's proposal to use arrows)?
The big gain with arrows is those situations where higher-order computations can't be allowed - that's exactly the restriction that makes S&D-style parsers work. You can even pull my favourite AST-and-interpreter implementation off again, with the static analysis taking a similar role to the interpreter (but also called /by/ the interpreter). -- flippa@flippac.org Performance anxiety leads to premature optimisation

Have you used Parsec?
i read about it when it came out, but i've always defined my own combinators. in case you wonder, there are two reasons for this: (a) the approximation of parsers as monads is close enough that a simple type Parser m a = StateT String m a gives us the basic combinators (sequence,success,failure,alternative) for free. (b) i like my combinator grammars to be reversible, so that a single grammar specification can be used for both parsing and unparsing/pretty-printing. that means i have to define the details myself anyway. about the only thing that spoils this nice setup is error handling. i've done some of the things that parsec does myself, such as labeling productions, keeping positions and unexpected input, and merging of error info from branches, but it just doesn't seem to fit easily: what starts out as an elegant reuse of Monad/MonadPlus/StateT soon looks rather involved (compare also the implementation description in the parsec paper). parsec is a bit more systematic about these things, so it is easier to see where it is headed: a three-valued logic. if one has successful parses, "successful" errors, and failure to produce either as the third outcome, then one can try to merge the success and failure continuations into the single continuation provided by the monadic (>>=). still not very natural/simple, as one always has to deal with both continuations at once, the same way, but perhaps workable.
Negation pretty much just works.
please keep in mind that i was asking for general monadic negation. if one introduces a bias towards the first successful parse (as parsec does, apart from longest-match), one can model negation as failure: not m = (m >> return False) `mplus` return True this only uses Monad and MonadPlus, but it only works as expected for some MonadPlus (those which commit locally to the first success, instead of collecting multiple successes, such as lists, or searching for global success, such as amb, or longest-match parsers). alternatively, one could require an overloaded 'first' method, selecting the first successful branch in an ordered collection of solutions: not' m = first $ (m >> return False) `mplus` return True but that wouldn't work for unordered collection monads. in other words, one can define monadic negation for some specific monads, but not for all monads. but coding implementation knowledge about any specific monad into the source makes the code less general than it could be. so i was wondering whether there should be a class such as MonadChoice (committed choice instead of collections; every MonadChoice gives a MonadPlus, but not vice-versa), or MonadFirst (giving access to the first success), or MonadNegate (providing monadic not by whatever means). then code depending on monadic negation could still be reasonably general (not working with arbitrary monads, but also not restricted to a specific one). btw, in an approach with two continuations, negation is as simple as switching the success and failure continuations. but the merging of success and failure branches is perhaps no less involved than what one gets when implementing the two-kinds-of-success approach. i was just hoping that someone had come up with something more elegant, and i was wondering whether there were other issues that people have to work around again and again. claus

"Claus Reinke"
(b) i like my combinator grammars to be reversible, so that a single grammar specification can be used for both parsing and unparsing/pretty-printing. that means i have to define the details myself anyway.
Oh cool - this is something I have wanted for a long time. Anything released or otherwise available?
about the only thing that spoils this nice setup is error handling. ... so it is easier to see where it is headed: a three-valued logic.
I wrote a set of monadic combinators (the polyparse library) for exactly this reason - improving error messages (in HaXml originally). As indeed, the basic error structure is a three-valued logic, with success and two separate kinds of error: type EitherE a b = Either (Bool,a) b The error categories I used were "recoverable failure", to enable backtracking, and "hard failure" to disallow backtracking when no alternative good parses were possible. The combinator 'commit' changes recoverable failures into hard failures. I guess you could easily model this setup with three continuations instead of a monad. Regards, Malcolm

(b) i like my combinator grammars to be reversible, so that a single grammar specification can be used for both parsing and unparsing/pretty-printing. that means i have to define the details myself anyway.
Oh cool - this is something I have wanted for a long time. Anything released or otherwise available?
and i thought noone had noticed!-) nothing really released - i first used that technique in haskell for a prototype of the reduction system i was modifying for my phd, many years ago. since reduction sytems had syntax oriented editors as interfaces, which i needed to model in my prototype to get the right design context for the language extensions i was working on, i needed parsing/unparsing/editing, and i didn't want three separate specifications to maintain for one and the same grammar. unfortunately, i ultimately had to implement things into the existing reduction systems (think the complexity of ghc and gdh combined, but written in .. c), so i had to put the haskell prototype aside while finishing my phd. when i finally emerged from that work, haskell had long moved on from 1.2, including substantial language changes, and as usual offering no tool support for porting large applications from one language version to the next (will haskell' finally do better in this important aspect?-). i never got around to porting that prototype, and so i had shelved any idea of writing about the technique until recently, when i used it again in a much smaller framework. but i have used the same basic technique, adapted to monadic combinators, for many years, and every time i reimplement the ideas, i tend to play with alternative ways of representing things, especially as the ways of combining the parser and unparser aspects or error handling are concerned. the latest such experiment is not necessarily the simplest variant, but i've just added some text explaining the basic ideas of grammar combinators to the project log (fathom.txt, starting from line 482, or search for 'grammar combinators'), and there's a grammar for a simple lambda-calculus (Lambda.hs, from line 210, or search for 'grammar'), so it should (might?-) be possible to work out the basics from there. the more awkward bits (basic lexing/unlexing, error handling, in Syntax.hs) are without documentation so far, but you might want to write those in a different way anyhow;-). you can get the haskell code and project log via darcs get http://www.cs.kent.ac.uk/~cr3/fathom or the project log directly at http://www.cs.kent.ac.uk/~cr3/fathom/fathom.txt the experiment itself, dubbed 'fathom', might be interesting for other reasons, as it includes a straightforward embedding of conditional rewrite systems in haskell, extended to contextual rewriting, and used to specify normal-order and call-by-need lambda-calculi via a direct embedding of their reduction rules. this gives rather inefficient executable semantics, which are however very close to the operational semantics specifications one tends to find in papers/textbooks. and since they work as monadic text transformers (parse/reduce/unparse), one gets trivial little reduction systems for these calculi (there are even some vim files, as i'm using vim as my user interface to those mini-reduction systems, or am i using haskell to extend vim?-). i doubt that everything will be obvious - there's a lot of text explanation already in the project log, but not all of the code is easily readable (Lambda.hs should be accessible, with the help of the project log), and there's no other documentation yet. please try the project log first, then feel free to ask questions! oh, and please let me know if you like what you see?-) claus

(b) i like my combinator grammars to be reversible, so that a single grammar specification can be used for both parsing and unparsing/pretty-printing. that means i have to define the details myself anyway.
the latest such experiment is not necessarily the simplest variant,
for instance, it explicitly gives semantic functions and their inverses, which i've found to be useable compromise in practice. but if you apply a bit of generic programming to associate an ast node with its constructor and children (or if you use an "untyped" representation of the ast, like 'data AST = Node ConstructorTag [AST]'), with constructor tags that can be compared for equality, you can avoid even that bit of duplication. also, i should have mentioned that the same idea, of using a single grammar specification in parse or unparse mode, opens up a variety of other application possibilities. i already touched upon syntax-directed editing (use a zipper combined with i/o to allow interactive navigation of the ast; associate each type of ast node with its grammar rule; unparse the current node with its rule to display it, let the user edit, then parse the input with the rule for the current node, and continue navigation). another interesting option is to use combinator grammars to specify dialogue protocols: if one annotates the positions in grammar rules where activity switches between dialogue partners, such as server and client communicating according to some protocol, then both server and client can run the same grammar code, in complementary modes, using the switch points to move from producing to expecting dialogue, and vice versa. i've attached a silly little example, using which: $ runhaskell Dialogue.hs server HTTP/1.1 200 Content-Length: 15 Content-Type: text/plain this is a text $ runhaskell Dialogue.hs server | runhaskell.exe Dialogue.hs client this is a text as i said, many opportunities, and i've sometimes wondered why noone else seems to use this technique. perhaps i'll get around to writing it up some day, after all these years.. claus

On Jun 30, 2007, at 6:31 AM, Claus Reinke wrote:
has anyone else had similar experiences with expressive limitations of monadic programming? things that one might be able to work around, but that don't feel as natural or simple as they should be? things that one hasn't been able to express at all (such as Swierstra & Duponcheel's static analysis of combinator parsers which inspired Hughes's proposal to use arrows)?
When you pretend you've never heard of monads or arrows, and write down the types what do you get? When I finally overcame my resistance to monads, I only had to change names in my code to use the Maybe monad, the functions already had the right type. There's an inevitability to monads and arrows, and perhaps to what you're thinking, if it's a third species in a lazy list we're evaluating of such things. -- Haskell does suffer from misrepresentation to outsiders. Even already familiar with ML and Ocaml, the "lazy, pure" approach read to me like a fetish, and monads seemed a tainted construct for if one absolutely must venture into the practical. The only reason I could see to learn Haskell was a sense that nevertheless comes through and probably puts some people off, that Haskell programmers are in the highest reading group. (Lisp programmers imagine that they are; one should learn both.) If one must suffer through the drudgery of using a programming language, shouldn't it be a window to enlightenment that Aldous Huxley would admire? Haskell delivers, but I avoided monads to get the "pure" experience, when in fact Haskell is all about supporting functional idioms like monads. The references cited in this thread are excellent. They certainly gave me more insight into the history of how Haskell evolved: Classes coming from Gofer precisely to make monads more elegant to use, and do notation a mutant form of monad comprehensions. I went chasing references but couldn't substantiate this statement on page 11 of 1996 Hutton, Meijer - Monadic parser combinators http://haskell.readscheme.org/servlets/cite.ss?pattern=hutton- parsers1996
Indeed, the algebraic properties required of the monad operations turn out to be precisely those required for the notation to make sense.
How do I reconcile this with the extension of do notation to support arrows? If monads and arrows are two instances of something akin to "group theory" then the definition of a "group" is lurking within whatever this quote should have said...

When you pretend you've never heard of monads or arrows, and write down the types what do you get?
this question made me wonder whether i could still recall how i used to write parsers before i heard of monads or arrows. it is difficult not to fall back into the pattern of state transformer monads, but -just for fun- here's an quick approximation of double-continuation-based parser combinators, where each parser takes a success and a failure continuation. the success continuation takes a parse result and the remaining text, the failure continuation takes the remaining text. the basic combinators are 'litP predicate' (parsing a literal/character), '.>' (sequence of two parsers), '.|' (alternative of two parsers), '.:' and '..:' (process and combine parse results before passing them to the success continuation). '?>' ignores its first result, '#>' pairs its two results (i'm sure i didn't use as many cute combinators at the time:-). [ to those of you writing debuggers for haskell: this kind of functional programming -programming with functions- could be a good stress test for your tool ] claus ------------------------------------------------ import Data.Char infixr .>,.|,?>,#> type Parser a t = (a->String->t) -> (String->t) -> (String->t) empty s f = \cs-> s () cs eot s f = \cs-> case cs of { "" -> s '\EOT' ""; '\EOT':_ -> s '\EOT' ""; _ -> f cs } litP p s f = \cs-> case cs of { c:cs' | p c -> s c cs'; _ -> f cs } but x s f = \cs-> x (\_ _->f cs) (\_->s undefined cs) cs (a ?> b) s f = \cs->a (\ar->b s (\_->f cs)) f cs (a #> b) s f = \cs->a (\ar->b (s . ((,)ar)) (\_->f cs)) f cs (a .> b) s f = \cs->a (\ar->b (s ar) (\_->f cs)) f cs (a .| b) s f = a s (b s f) (parse .: build) s f = parse (s . build) f (parse ..: build) s f = parse ((s .) . build) f parse p = ((p .> eot) ..: const) (const . Right) Left many p = (( p .> many p ) ..: (:) ) .| ( p .: return ) digit = litP isDigit .: digitToInt digits = many digit num = digits .: (foldl ((+) . (10*)) 0) space = litP isSpace anyChar = litP (const True) nonSpace = ( but space ?> anyChar ) sep = litP (==':') field = ( many nonSpace #> many space ?> sep ?> many space ?> many nonSpace ) nonField = but field ?> many anyChar

Hi Claus. I am sympathetic with your comments regarding monads and continuations. It's interesting to note that the original I/O system in Haskell was based on streams and continuations. The continuation version had two continuations in fact -- one for success and one for failure. For example, readFile had the type: readFile :: Name -> FailCont -> StrCont -> Behaviour Here StrCont was the success continuation, which took a string (the file contents) as argument. I rather liked the flexibility that this offered -- since I/O errors were fairly common, it made sense to give success and failure equal status. The down-side of using continuations is that you have to carry them around explicitly, so one might argue that they clutter the code a bit, and that was one of the advantages of switching to monads. On the other hand, one could argue that having them explicit makes things in some way clearer. All of this is described in fair detail in the History of Haskell paper, by the way (see http://portal.acm.org/toc.cfm?id=1238844). It's worth noting that, in comparing continuation and monadic program fragments, we comment in that paper: Although these two code fragments have a somewhat imperative feel because of the way they are laid out, it was really the advent of do-notation—not monads themselves—that made Haskell programs look more like conventional imperative programs (for better or worse). This syntax seriously blurred the line between purely functional programs and imperative programs, yet was heartily adopted by the Haskell Committee. In retrospect it is worth asking whether this same (or similar) syntactic device could have been used to make stream or continuation-based I/O look more natural. Best wishes, -Paul Claus Reinke wrote:
The standard, naïve approach to monadic parsing is very nice, but inefficient. So *please read* some material based on Hutton&Meijer approach, but don't stay there, read something more modern,
since we thereby seem to have left the phase of simple answers to simple questions;-) i'd like to raise a pet issue of mine. my own first combinator parsers (inspired by Wadler's "How to replace failure by a list of successes", but adapted to a call-by-value language) were based on continuations.
..
ok, now everybody has had time to chime in with "monadic parsers are based on continuations" or "continuations are just one specific monad". so let me return to the particular issue i'm interested in: contrary to monadic parsers, those continuation-based parsers had *two* continuations, one for success, one for failure. and that seemed to be a very natural match for the problem.
for all that i like monadic programming in general, i often feel that it is biased towards handling only the success path well, by offering built-in support for a single continuation only. for instance, one can use (Either String) as a parser monad with error messages, but it isn't straightforward to express error handling into that format, preserving both success and failure- related info (such as reporting the error corresponding to the longest partially successful parse). also, negation does not seem to be an easy fit (succeed if a specific parser would not be successful at the current point; this seems to require monad-specific information, so perhaps there's a MonadNegate class missing?).
has anyone else had similar experiences with expressive limitations of monadic programming? things that one might be able to work around, but that don't feel as natural or simple as they should be? things that one hasn't been able to express at all (such as Swierstra & Duponcheel's static analysis of combinator parsers which inspired Hughes's proposal to use arrows)?
claus

Paul Hudak wrote:
readFile :: Name -> FailCont -> StrCont -> Behaviour
Here StrCont was the success continuation, which took a string (the file contents) as argument. I rather liked the flexibility that this offered -- since I/O errors were fairly common, it made sense to give success and failure equal status.
Claus Reinke wrote:
contrary to monadic parsers, those continuation-based parsers had *two* continuations, one for success, one for failure. and that seemed to be a very natural match for the problem.
In a sense, the MonadError class class Monad m => MonadError e m | m -> e where throwError :: e -> m a catchError :: m a -> (e -> m a) -> m a defines a second monad structure, with return = throwError bind = catchError To make this apparent and symmetric, let's define a class DiMonad m where returnR :: a -> m e a bindR :: m e a -> (a -> m e b) -> m e b returnL :: e -> m e a bindL :: m e a -> (e -> m e' a) -> m e' a A dimonad m e a can be thought of as a monad m e with a variable error type e . The operations with suffixed by R are the "normal" monad operations whereas the suffix L marks the "error"-catching operations. (Question: can dimonads solve the extensible-exceptions problem?) Now, the two-continuation approach always requires to pass two continuations. But here, returnL and returnR only pass the failure- or the success-continuation. To see what's happening, let's turn type TwoCont e a = (e -> R) -> (a -> R) -> R with some fixed result type R into a dimonad (for clarity, we ignore that this is only a type-synonym): instance DiMonad TwoCont where returnR x = \e a -> a x bindR m f = \e a -> m e (\x -> (f x) e a) returnL y = \e a -> e y bindL m f = \e a -> m (\y -> (f y) e a) a In the case of success-continuation, this means that the dimonad passes one and the same error continuation, i.e. one and the same exception-handler will the thrown regardless of where the failure happens inside a chain of actions composed with bindR. Likewise, error-handler and success-chain will be resumed at with a common success-chain. Of course, this is well-known behavior for IO and exceptions, and it seems that it reduces clutter quite well compared to a bare TwoCont. A final question remains: does the dimonad abstraction cover the full power of TwoCont? I mean, it still seems like there's an operation missing that supplies new left and right continuations at once. Regards, apfelmus

class Monad m => MonadError e m | m -> e where throwError :: e -> m a catchError :: m a -> (e -> m a) -> m a .. power of TwoCont? I mean, it still seems like there's an operation missing that supplies new left and right continuations at once.
i guess, instead of one DiMonad with two sets of operations, i'd prefer to have two separate Monads with the ability to connect them, so that failure in one is success in the other, but i guess we need some way to tell which is which. using MonadError to handle error continuations on top of a Monad to handle success continuation is a lot like using a MonadReader to hold error continuations on top of the base Monad. consider class (Monad m) => MonadReader r m | m -> r where ask :: m r local :: (r -> r) -> m a -> m a where r = e -> m a. then throwError = \e->ask >>= \h->h e catchError m h = local (const h) m in other words, supplying both continuations is straightforward, although syntactic sugar only supports supplying one, in line with the bias towards one of the two continuations: m `doubleBind` (s,f) = m `catchError` f >>= s i find myself doing something like this frequently, whenever i use constructs like m >>= \x-> a x `mplus` b x m >>= maybe a b m >>= either a b btw, it might be useful if throwing an error would reverse the roles of the two continuations, instead of throwing away the success continuation. then the error handler itself could either return (abandoning the original success continuation) or throw (resuming the original continuation). that would interfere with the use of throw to pass on unhandled errors, but then we could be more specific about which direction to throw. claus

apfelmus wrote:
class DiMonad m where returnR :: a -> m e a bindR :: m e a -> (a -> m e b) -> m e b
returnL :: e -> m e a bindL :: m e a -> (e -> m e' a) -> m e' a
type TwoCont e a = (e -> R) -> (a -> R) -> R
A final question remains: does the dimonad abstraction cover the full power of TwoCont? I mean, it still seems like there's an operation missing that supplies new left and right continuations at once.
I think that this missing operation is bind2 :: m e a -> (e -> m e' a') -> (a -> m e' a') -> m e' a' It executes the second or the third argument depending on whether the first argument is a failure or a success. First, bind2 can be defined for TwoCont bind2 m fe fa = \e' a' -> m (\e -> (fe e) e' a') (\a -> (fa a) e' a') Apparently, bindL and bindR can be expressed with bind2 bindL m f = bind2 m f returnR bindR m f = bind2 m returnL f The question is whether bind2 can be expressed by bindL and bindR or whether bind2 offers more than both. It turns out that bind2 can be formulated from bindL and bindR alone fmapR f m = m `bindR` returnR . f bind2 m fe fa = ((Left `fmapR` m) `bindL` (\e -> Right `fmapR` fe e)) `bindR` (\aa' -> case aa' of Left a -> fa a Right a' -> returnR a') The definitions is rather cumbersome and we omit the proof that both definitions for bind2 are the same for TwoConts. For a general proof, we'd need an axiomatic characterization of dimonads and bind2. Recast in the light of MonadError, bind2 gives rise to a combinator bind2 :: MonadError e m => m a -> (e -> m b) -> (a -> m b) -> m b that executes either failure or success path. The important point is the inequality bind2 m fe fa ≠ (m >>= fa) `catchError` fe There's no equivalent to bind2 (with a better name, of course) in the libraries at the moment. The only function that does come near bind2 in the libraries is Control.Exception.try :: IO a -> IO (Either Exception a) which can be used rather easily to implement bind2 of course. Interestingly, the existence of try shows that there is a natural isomorphism DiMonad m => m e a ≅ m () (Either e a) so that dimonads do not add anything beyond what monads can already do. (More precisely, try gives one direction. The other direction is rather obvious.) Regards, apfelmus

On Monday 02 July 2007, apfelmus wrote:
apfelmus wrote:
class DiMonad m where returnR :: a -> m e a bindR :: m e a -> (a -> m e b) -> m e b
returnL :: e -> m e a bindL :: m e a -> (e -> m e' a) -> m e' a
type TwoCont e a = (e -> R) -> (a -> R) -> R
A final question remains: does the dimonad abstraction cover the full power of TwoCont? I mean, it still seems like there's an operation missing that supplies new left and right continuations at once.
I think that this missing operation is
bind2 :: m e a -> (e -> m e' a') -> (a -> m e' a') -> m e' a'
It executes the second or the third argument depending on whether the first argument is a failure or a success.
First, bind2 can be defined for TwoCont
bind2 m fe fa = \e' a' -> m (\e -> (fe e) e' a') (\a -> (fa a) e' a')
Apparently, bindL and bindR can be expressed with bind2
bindL m f = bind2 m f returnR bindR m f = bind2 m returnL f
The question is whether bind2 can be expressed by bindL and bindR or whether bind2 offers more than both. It turns out that bind2 can be formulated from bindL and bindR alone
fmapR f m = m `bindR` returnR . f bind2 m fe fa = ((Left `fmapR` m) `bindL` (\e -> Right `fmapR` fe e)) `bindR` (\aa' -> case aa' of Left a -> fa a Right a' -> returnR a')
Exactly.
The definitions is rather cumbersome
As you point out below, try makes it easier: try a = liftM Right a `catchE` return . Left bind2 m fe fa = try a >>= either fe fa And, of course, it's not cheating, since try and either are both independently useful in their own rights. And if you want to eliminate try, it's still a one-liner: bind2 m fe fa = (liftM Right a `catchE` return . Left) >>= either fe fa is still a one-line (71 characters), and bind2 m fe fa = join (liftM fa a `catchE` return . fe) is even shorter. (Proof: by a >>= f = join (liftM f a) and natural transformations).
and we omit the proof that both definitions for bind2 are the same for TwoConts.
try a >>= either fe fa = \ ke ka -> try a ke (\ x -> either fe fa x ke ka)) = \ ke ka -> (\ ke' ka' -> a (ka' . Left) (ka' . Right)) ke (\ x -> either fe fa x ke ka) = \ ke ka -> a ((\ x -> either fe fa x ke ka) . Left) ((\ x -> either fe fa x ke ka) . Right) = \ ke ka -> a (\ x -> fe x ke ka) (\ x -> fa x ke ka) Not hard.
For a general proof, we'd need an axiomatic characterization of dimonads and bind2.
Harder :) But I think the definition of bind2 above is good enough. (Is it easier to define MonadError axiomatically in terms of throw/try than throw/catch? catch a h = try a >>= either h return ) <snip> Jonathan Cast http://sourceforge.net/projects/fid-core http://sourceforge.net/projects/fid-emacs

On Saturday 30 June 2007, Claus Reinke wrote:
The standard, naïve approach to monadic parsing is very nice, but inefficient. So *please read* some material based on Hutton&Meijer approach, but don't stay there, read something more modern,
since we thereby seem to have left the phase of simple answers to simple questions;-) i'd like to raise a pet issue of mine. my own first combinator parsers (inspired by Wadler's "How to replace failure by a list of successes", but adapted to a call-by-value language) were based on continuations.
..
ok, now everybody has had time to chime in with "monadic parsers are based on continuations" or "continuations are just one specific monad". so let me return to the particular issue i'm interested in: contrary to monadic parsers, those continuation-based parsers had *two* continuations, one for success, one for failure. and that seemed to be a very natural match for the problem.
<snip> Two-continuations is a monad too, right? newtype ContErrorT m alpha = ContErrorT { runContErrorT :: forall beta. (alpha -> m beta) -> (Exception -> m beta) -> m beta } instance Monad m => ContErrorT m where return x = ContErrorT (\ k h -> k x) a >>= f = ContErrorT (\ k h -> runContErrorT a (\ x -> runContErrorT (f x) k h) h) instance Monad m => MonadError Exception (ContError m) where throwError e = ContErroT ( \ k h -> h e) a `catchError` f = ContErrorT (\ k h -> runContErrorT a k (\ e -> runContErrorT (f e) k h)) Am I missing something really obvious here? Jonathan Cast http://sourceforge.net/projects/fid-core http://sourceforge.net/projects/fid-emacs

contrary to monadic parsers, those continuation-based parsers had *two* continuations, one for success, one for failure. and that seemed to be a very natural match for the problem.
Two-continuations is a monad too, right?
yes, but my problem is not about giving them a monadic interface, but about getting more advantages than disadvantages out of that. the initial definitions are more complicated than using continuations directly, and usually, the monadic interface forces me to interleave handling of the two paths into a single thread of specifications. and while instances of the monadic combinators can be defined to thread information for both continuations, i still have to inject that information outside the monadic interface, and make sure that it doesn't get reset to nothing just because someone calls Monad fail (via a library function or pattern-match failure), or MonadPlus mzero (via things like guard). a second problem is that some of the advantages of monadic interfaces that i tend to rely on heavily, such as pattern-match failure handling in do-notation, are hardwired to the wrong part of the monadic interface (Monad fail). this is not something a non-monadic approach would offer any help with, though..
instance Monad m => Monad (ContErrorT m) where .. instance Monad m => MonadError Exception (ContErrorT m) where ..
Am I missing something really obvious here?
interesting. i tend to use Monad/MonadPlus for sequence/ alternative, rather than Monad/MonadError, but you're right: MonadError's method types are a more natural dual to Monad's. they don't have to be limited to Exception, either. it seems i should be using that class more. actually, i am using that class, via the ErrorT transformer, but i guess i thought of MonadError only as error handling in sequences (which is the way it is usually presented) rather than as fallthrough in alternatives. it would force me to decide in advance whether i want fallthrough branches (MonadError) or collections of alternatives (MonadPlus), whereas MonadPlus allowed me to delay that decision (using MonadPlus Maybe for fallthrough or MonadPlus [] for collections, for instance). hmm, perhaps one can abstract over that decision. now, if we could replace calls to Monad fail (especially, pattern match failure in do-notation) with calls to MonadError throwError, we might actually be getting somewhere (similarly, we'd need to replace MonadPlus-based guard with a MonadError-based variant). as a start, fail msg = throwError (PatternMatchError msg) isn't too far off, but it would be lying for fail called from library functions, not to mention that it would still be limited to Strings. but i've already used my own variant of catchError to limit information loss and preserve the most specific error message, so once again, MonadError seems a natural fit. all in all, your answer only strengthens my view that monadic programming would be more symmetric if we always had two continuations, one for failure, one for success. then do-notation translation could rely on both Monad and MonadError, using throwError instead of fail (it has been suggested to use mzero, with a separate MonadZero class, but MonadError may well be the better match). perhaps there'll come a time when the monadic aspects of haskell will need redesign, similar to the numeric aspects. thanks! this emphasized view of MonadError might help. claus

Big Chris wrote:
Hey, just to say, the first few pages of this explain monads really well. Good reference :-) It's the first introduction to monads I've seen that describes monads directly, without using analogies, and manages to be both sufficiently simple and precise as to be understandable.

Gregory Propf
First post. I'm a newbie, been using Haskell for about a week and love it. Anyway, this is something I don't understand. Parsers are monadic. I can see this if the parser is reading from an input stream but if there's just a block of text can't you just have the parser call itself recursively feeding the unparsed text down the recursion and tacking nodes onto the tree as the recursions return, finally returning the whole tree to the top level caller. Seems this meets the criteria for pure functionality, same result every time with same args. myParser :: [Char] -> ParseTree
Looks as if others may be answering questions you didn't ask. It seems to me as if your definition of "monadic" is a little off. You're right: parsers (for a well-behaved grammar) *are* purely functional. Same input always gives the same output. And you're right in your understanding that calculations that aren't purely functional are handled in Haskell by a monad, specifically the IO monad. However--there are lots of monads in Haskell other than the IO monad and many of them are purely functional. Take Maybe: using the Maybe monad is just syntactic sugar for what you'd get explicitly writing out a chain of "and if this fails return Nothing for the whole calculation, but if it succeeds, then...". "Monadic" just means a calculation using a mathematical structure called a monad. All impure calculations in Haskell are monadic, but not all monadic calculations are impure. Does this answer your question? --Eric

First post. I'm a newbie, been using Haskell for about a week and love it. Anyway, this is something I don't understand. Parsers are monadic. I can see this if the parser is reading from an input stream but if there's just a block of text can't you just have the parser call itself recursively feeding the unparsed text down the recursion and tacking nodes onto the tree as the recursions return, finally returning the whole tree to the top level caller. Seems this meets the criteria for pure functionality, same result every time with same args. myParser :: [Char] -> ParseTree
Looks as if others may be answering questions you didn't ask.
or asking and answering other questions related to the subject?-) but if the original question hasn't been answered yet, consider two of the statements in that paragraph: - "feeding the unparsed text down the recursion" - "tacking nodes onto the tree as the recursions return" that is a description in terms of modifications (taking remainders of the input, adding nodes to the output). functionally, you'd just want to define the output in terms of the input, which works fine if you can tell in advance which parts of the input determine which parts of the output: parseInt intString = foldl (\n d->d+10*n) 0 $ map parseDigit intString parseDigit char | char `elem` "0123456789" = digitToInt char things get more interesting if you want to define a complex parser AB in terms of parsers A and B for parts of the input, and the only way of finding out what "the unparsed text" is that should be fed to parser B is to run parser A. now, parser A has to return its part of the tree as well as "the unparsed text", and the combined parser has to feed the leftover text from A into B. you don't have to do that monadically, but the type 'Parser = String -> (Tree,String)' is one functional way of arranging things that happens to fit the description of a state transformer monad.. claus
participants (13)
-
apfelmus
-
Big Chris
-
Claus Reinke
-
Dave Bayer
-
Eric
-
Gregory Propf
-
Hugh Perkins
-
jerzy.karczmarczuk@info.unicaen.fr
-
Jon Cast
-
Jonathan Cast
-
Malcolm Wallace
-
Paul Hudak
-
Philippa Cowderoy