Explaining monads

(Best viewed in a fixed-width font!)

With all the recent monad discussions, I embarked on trying to clarify my own thoughts about them, and started to think about things in terms of just /where/ extra structure is 'understood'. I think I can explain why 'a->IO b' is better than 'IO a->b'. The full title of this is really 'Explaining monads by comparison with comonads and arrows', but I didn't want to scare people off without putting in the above hook!

We start with a simple single value of type 'a', and then we move into some other category, where the objects are 'thing a' instead, encapsulating some additional complexity - perhaps the possible absence or multiplicity of the value, perhaps extra state, opacity, etc. But whatever it is, the a-ness is still key for combining these objects. So let's look at how they combine. Pay special attention to the f column.

                   g        f        ???        g ??? f
  application      a        a->b     flip ($)   b
  monad bind       m a      a->m b   >>=        m b
  comonad cobind   w a      w a->b   =>>        w b
  arrow            arr a b  arr b c  >>>        arr a c

simple application: f takes one argument, returns one result.

monad: f still takes one argument, but now understands how to put things /into/ m. (Perhaps it just uses return :: a -> m a)

comonad: f has access to the entire complexity of 'w a', understands how to get thing(s) out of it (coreturn), and distills it all down to one simple b.

arrow: f has access to the entire complexity of the 'input' of arr b c, and does whatever is needed to make the complexity of the 'output' of arr b c. Input/output may even be bidirectional!

Also we can look at the job that ??? does. Remember that ??? has access to f as a function and can call it as many or as few times as it wishes.

$: feeds a to f, once.
>>=: picks zero or more a's from 'm a', feeds each one separately to f, then opens up each of the 'm b' results to combine them into one 'm b'. (A concrete sketch of this follows the list.)
=>>: feeds one or more versions of 'w a' to f, then builds a 'w b' in some way using the single b's (Perhaps inspired by 'w a')
>>>: links g to f in some way, so they can interact in as complex a way as they like. Maybe it also contributes to that complexity.
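Here is a minimal sketch (not from the original post) of the >>= row in the list monad: bind decides how often f runs and how the per-a results are combined, while f only returns directions for each single a.

  f :: Int -> [Int]
  f x = [x, x * 10]

  demo :: [Int]
  demo = [1, 2] >>= f   -- bind ran f once per element and concatenated: [1,10,2,20]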
So now perhaps we can come to some conclusion about why monads are so useful and universal as control structures. It seems that monads are somehow the /simplest/ way of adding general control structure on top of single values.

Notice how in a monad, the things that are combined together, the f's, still take just one 'a' argument. So there's no extra complexity for f in understanding its input. f can, however, put some additional complexity into its output - it can fail, or return multiple values, or whatever else is possible to encode in 'm a'. Still, f can be dumb, and just pass the problem on to some other function, maybe return.

The complexity of the monad lives in one place: bind, >>=. It is bind that has the choice of calling f multiple times if it feels like it, and has the job of combining the results. bind is in control. f can only give it directions by returning one 'm b' for a given a.

Contrast a comonad. Each f has to cope with an entire 'w a' on input. Moreover, it can't communicate any of that complexity back to cobind - it can only return one b. cobind only has the structure of 'w a' to inspire it in how to put together 'w b'; it can receive /no/ instructions from f.

Arrows are of course most general. Nothing is determined except that f talks with g in some way, and the compatibility is named 'b'.

So given the choice of the above, which would /you/ choose, when the nature of the problem means one-on-one application is too simple? Surely a monad is the pick of the bunch!

* f gives merely directions on control to >>=
* f has to understand merely one single argument. (the monad can supply extra specialized functions like get/put, if we do want some complexity in there)
* All the complex control structure is handled in one place - bind. And it has all the information available to do it well.

Pity the poor comonad, only really suitable for infinite sequences! Shudder at the thought of the total complexity possible using arrows, and use them only when /really/ necessary. monad is king!

-- Brian_Brunswick____brian@ithil.org____Wit____Disclaimer____!Shortsig_rules!

Brian Brunswick-5 wrote:
                   g        f        ???        g ??? f
  application      a        a->b     flip ($)   b
  monad bind       m a      a->m b   >>=        m b
  comonad cobind   w a      w a->b   =>>        w b
  arrow            arr a b  arr b c  >>>        arr a c
Fmap's missing: m a, a->b, flip fmap, m b. You might want to throw in Applicative there too: m a, m (a->b), flip (<*>), m b.

Snipped: Arguments for monads being the best of the M-C-A trinity.

While I can share your enthusiasm over certain insights you've discovered about monads, I think you'll find that each of those structures has its privileged place in your code. You say "monads are somehow the /simplest/ way of adding general control structure on top of single values." Well sure, for specific implicit parameters of ?simplest, ?general, and ?controlStructure. It's like saying lists are better than arrays and tuples.

Monads are undoubtedly more pervasive, and that could be because there aren't as many arrow and comonad tutorials, atomic ones or otherwise.

Brian Brunswick-5 wrote:
Pity the poor comonad, only really suitable for infinite sequences!
Comonads are no more (exclusively) about infinite structures than monads are (exclusively) about sequencing/imperative programming. What's so infinite about this (specialization of the so-called Product) comonad:

  instance Comonad ((,) Bool) where
    -- coreturn :: (Bool, r) -> r
    coreturn (b, r) = r
    -- coextend :: (Bool, r) -> ((Bool, r) -> s) -> (Bool, s)
    coextend u@(b,r) g = (b, g u)
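A self-contained sketch of the above, assuming one plausible rendering of the Comonad class (it is not in the standard libraries, so the class declaration here is an assumption):

  {-# LANGUAGE FlexibleInstances #-}

  class Comonad w where
    coreturn :: w a -> a
    coextend :: w a -> (w a -> b) -> w b

  instance Comonad ((,) Bool) where
    coreturn (_, r) = r
    coextend u@(b, _) g = (b, g u)

  demo :: (Bool, Int)
  demo = coextend (True, 41) (\(_, n) -> n + 1)   -- (True, 42): nothing infinite here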

Brian Brunswick wrote:
                   g        f        ???        g ??? f
  application      a        a->b     flip ($)   b
  monad bind       m a      a->m b   >>=        m b
  comonad cobind   w a      w a->b   =>>        w b
  arrow            arr a b  arr b c  >>>        arr a c
Kim-Ee Yeoh wrote:
Fmap's missing: m a, a->b, flip fmap, m b. You might want to throw in Applicative there too: m a, m (a->b), flip (<*>), m b.
Here's my interpretation of the table:

  ----------------------------------------------------------------------
   Structure   | Subject  | Action     | Verb       | Result
  -------------+----------+------------+------------+----------
   function    | a        | a->b       | flip ($)   | b
   Functor     | f a      | a->b       | <$>        | f b
   Applicative | f a      | f (a->b)   | flip <*>   | f b
   Monad       | m a      | a->m b     | >>=        | m b
   Comonad     | w a      | w a->b     | =>>        | w b
   Arrow       | a b c    | a c d      | >>>        | a b d
  ----------------------------------------------------------------------

Kim-Ee Yeoh wrote:
... I think you'll find that each of those structures has its privileged place in your code.
Agreed. I'm still a beginner; I'm not sure how to choose one structure over another, at least not yet. But that's because ...
Monads are undoubtedly more pervasive, and that could be because there aren't as many arrow and comonad tutorials, atomic ones or otherwise.
Moreover, Comonad isn't even in the standard libraries (Hoogle returns no results for it).

When I searched for tutorials on monads, I found lots of them. In fact, I have heard that writing (yet another) monad tutorial is part of standard Haskell initiation. In my case, I have started to formulate my own idea for what a monad tutorial should be; I might attempt to write one, too.

I read several monad tutorials, but I could not understand them at first. Monad transformers are even worse. I had to sit, *think*, draw diagrams, *think* some more, review several advanced tutorials, and *think* even more. I also looked up comonads, arrows, and category theory. Finally, monads (and monad transformers) started to make sense for me.

I still don't understand monads just yet, but that's because my self-test for understanding something is whether I feel I could explain it to someone else and have it make sense. I suppose that in order to understand monads, I need to actually *use* some monads, and see them in action on something real (like an actual algorithm) that I can relate to.

Learning feels like climbing a mountain; much time and hard work is spent on the ascent, but new understanding is achieved in the process. Once I reach the top of the mountain, the question is how to make the mountain easier to climb for the next person. My thinking is that the ultimate goal is to /invert/ the mountain, such that people can /ride/ the mountain (which still requires some work) and gain new understanding in the process. What I obtain on the way up the mountain, future people would obtain, far more efficiently, on the way /down/ the (inverted) mountain.

What I would like to see in a monad tutorial are some good real-use examples that demonstrate /just/ monads. I found it extremely helpful to see an example of monads being used to compute probabilities and drug test false-alarm rates. This application seemed to use /just/ monads, without a lot of extra programming complexity on top, and it provided a "real" example. This is the sort of thing, combined with the background of "what is a monad", that I think would make a really good tutorial. The "monadic lovers" story, in my opinion, provides an example that's too contrived and simplistic.

On the other extreme, I found an example of arrows being used in a digital circuit simulator. As a tutorial on arrows, I would find this too complex because there is too much "stuff" built up on top of the arrows. The concept of an arrow is lost in the complexity of "special" classes like the ArrowCircuit class that was actually used to simulate a circuit. (Note: The circuit used in the example was trivial; moreover, my background is electrical engineering with a focus on digital circuits.) I found an example of a comonad being used to simulate a cellular automaton; I found this example helpful, although it might be a little too complex.

I think that what would make a truly great tutorial would be a tutorial (or a series of tutorials) that starts with a "real" area of exploration and proceeds to incrementally develop programs. Each increment would incorporate a new "bite" from the area of exploration, which would cause an increase in complexity. As the programs get complicated, the monad (or comonad, or arrow) is introduced by factoring the appropriate pattern out of the code and then simplifying.
The tutorial might (1) state the monad laws and say "this is what defines a monad", (2) say something like "let's look for this pattern in the code" and find the pattern very easily because the example would be designed to make the pattern fairly obvious, and (3) define a special monad and use it to simplify the code. If I think of something, I'll attempt to write it up and see what happens.

Brian Brunswick wrote:
The full title of this is really 'Explaining monads by comparison with comonads and arrows' ...
Also, arrows are supposed to be more general than both monads and comonads. If I could just figure out what each structure (functor, monad, comonad, arrow) is doing, and compare and contrast them, then I probably will have made leaps of understanding. I have a sense that tells me that the table above and the data structures below probably start to scratch the surface of understanding.

  ----------------------------------------------------------------------
  Foo a       = Foo     { runFoo     :: a }                 -- Functor
  State s a   = State   { runState   :: s -> (a, s) }       -- Monad
  Context c a = Context { runContext :: (a, c) -> a }       -- Comonad
  Thing s a b = Thing   { runThing   :: (a, s) -> (b, s) }  -- Arrow???
  ----------------------------------------------------------------------

An exceptional tutorial (it would probably have to be a series) would use a "real" area of exploration and introduce all four structures by incrementally developing programs (in the area of exploration) at just the right level of program complexity for each structure to make sense and fit into its surroundings without being either too simple or too complicated. If (or when) I ever make that leap of understanding, I might make an attempt to write that tutorial. Until then, I guess I'm still climbing the mountain now, hoping I'll get to ride the mountain later.

-- Ron
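As a data point on the "Arrow???" guess: Thing is essentially the standard state-transformer arrow, and it can indeed be given an Arrow instance. A minimal sketch (not from the thread, and written against today's Control.Category-based Arrow class, which postdates this discussion):

  import Control.Arrow
  import Control.Category
  import Prelude hiding (id, (.))

  newtype Thing s a b = Thing { runThing :: (a, s) -> (b, s) }

  instance Category (Thing s) where
    id                = Thing id
    Thing g . Thing f = Thing (g . f)

  instance Arrow (Thing s) where
    -- lift a pure function; the state passes through untouched
    arr f           = Thing (\(a, s) -> (f a, s))
    -- run f on the first component, threading the state
    first (Thing f) = Thing (\((a, c), s) ->
                        let (b, s') = f (a, s) in ((b, c), s'))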

On 8/10/07, Ronald Guida wrote:
Kim-Ee Yeoh wrote:
Monads are undoubtedly more pervasive, and that could be because there aren't as many arrow and comonad tutorials, atomic ones or otherwise.
Moreover, Comonad isn't even in the standard libraries (Hoogle returns no results for it).
This is probably because no one has found a compelling use case for comonadic-style programming in Haskell. There have been some interesting papers, such as "Comonadic functional attribute evaluation" by Uustalu and Vene, but nothing as compelling as Wadler's "Monads for functional programming".
In my case, I have started to formulate my own idea for what a monad tutorial should be; I might attempt to write one, too.
Be sure to read sigfpe's "You could have invented monads!" and the Wadler paper.

  http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html
  http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf

Most tutorials try to explain what a monad *is*, but these focus more on why they're a useful way to organize code. In my experience, being able to use a monad is more important than understanding the theory.
Also, arrows are supposed to be more general than both monads and comonads. If I could just figure out what each structure (functor, monad, comonad, arrow) is doing, and compare and contrast them, then I probably will have made leaps of understanding.
Arrows are more general than monads and comonads in the sense that you can create an arrow corresponding to any particular monad and comonad, but there are also arrows which correspond to neither. Functors are more general than monads and comonads simply because every monad and comonad is a functor with additional operations. (This can be obscured by the fact that the Haskell Prelude doesn't define Monad as a subclass of Functor.)
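David's last point can be made concrete in two lines: every monad yields a functor, with fmap derivable from return and (>>=). (A sketch; the name fmapM is made up for illustration.)

  fmapM :: Monad m => (a -> b) -> m a -> m b
  fmapM f m = m >>= return . f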

David Menendez wrote:
Be sure to read sigfpe's "You could have invented monads!" and the Wadler paper.
http://sigfpe.blogspot.com/2006/08/you-could-have-invented-monads-and.html
http://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/baastad.pdf
Most tutorials try to explain what a monad *is*, but these focus more on why they're a useful way to organize code. In my experience, being able to use a monad is more important than understanding the theory.
Hey, now I know what a monad is! What I missed in all those tutorials is that a monad is an *abstract base class*. With this realization, which I had after reading the sigfpe tutorial, everything makes sense for me.

To review a basic OOP example, I can't illustrate what a Vehicle *is* in general terms. I can illustrate a Bicycle, a Car, a Boat, a Plane, a Submarine, a Hovercraft, a LARC-V, and many other examples of instances of /subclasses/ of Vehicle, but I can't find anything that's /just/ a Vehicle. In OOP, Vehicle is an abstract base class.

The same thing applies for a Monad. I can look at lots and lots of instances, like Maybe, Either, List, State, Reader, Writer, IO, and lots of others, but I can't produce an example that's /just/ a Monad. Monad is an abstract base class, too.

Now if I had to explain "What is a Vehicle", I would have to say "A Vehicle is a device that provides transportation." When asked for more details, I can specify the interface and provide examples of instances.

  Interface for Vehicle: load, unload, goto
  Instances of Vehicle: Bicycle, Car, Plane, Boat, etc.

Given the question "What is a Monad", I would have to say "A Monad is a device for sequencing side-effects." When asked for more details, I can specify the interface and provide examples of instances.

  Interface for Monad: return, bind
  Instances of Monad: Maybe, Either, List, State, etc.

What is particularly stupefying for me is that I missed the fact that Monad is "obviously" an abstract base class: Monad is declared as a type-class!

Now the hard part. As far as I currently know - and I could be wrong:
 (1) Monad, Comonad and Arrow are all abstract base classes.
 (2) Monad, Comonad and Arrow are all devices for sequencing side-effects.
 (3) Monad and Comonad are both specializations of Arrow.

Given the question "What is an Arrow", I would have to say "An Arrow is a device for sequencing side-effects." When asked for more details, I can specify the interface and provide examples of instances. This leads directly to the question "What makes a Monad special, compared to an Arrow?" Right now, the clues I have are:
 (1) Every monad is equivalent to a Kleisli arrow.
 (2) The ArrowApply class is equivalent to the Monad class.

I can restate the question: "What is special about ArrowApply, compared to Arrow?"

I see that an arrow accepts some inputs, performs a computation, possibly with side-effects, and generates some outputs. In particular, suppose I create instances of Arrow that accept two inputs (as a pair) and produce one output. Some of these instances (i.e. ArrowApply) are special: I can provide, as the two inputs, (1) an Arrow that accepts one input and one output, and (2) an input suitable for that Arrow. The output that I get is the result of feeding input (2) to input (1) to get a result, *and* somehow combining the side-effects of both arrows.

The only way an ArrowApply can combine the side effects of two arrows and still obey the Arrow laws is through an operation that takes an (arrow nested inside an arrow) and collapses it into a single arrow. That's exactly what "join" does for monads. So, ArrowApply is special, compared to Arrow, because ArrowApply has a join operation, but Arrow doesn't. Clear as mud!

The question remains: "What is special about Monad or ArrowApply, compared to Arrow?" or "What is more general about Arrow, compared to Monad or ArrowApply?"

-- Ron
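A sketch of the correspondence being described: given ArrowApply, bind (and hence join) is definable for "arrows out of ()". The ArrowM wrapper is invented here for illustration; base's Control.Arrow has a similar ArrowMonad type.

  import Control.Arrow

  newtype ArrowM a b = ArrowM (a () b)

  -- bind: run m, use its result to pick the next arrow, then apply it
  bindA :: ArrowApply a => ArrowM a b -> (b -> ArrowM a c) -> ArrowM a c
  bindA (ArrowM m) f =
    ArrowM (m >>> arr (\b -> let ArrowM h = f b in (h, ())) >>> app)

  -- join: collapse an arrow nested inside an arrow - the operation
  -- singled out above
  joinA :: ArrowApply a => ArrowM a (ArrowM a b) -> ArrowM a b
  joinA m = m `bindA` id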

On Sat, Aug 11, 2007 at 03:00:04PM -0400, Ronald Guida wrote:
The question remains: "What is special about Monad or ArrowApply, compared to Arrow?" or "What is more general about Arrow, compared to Monad or ArrowApply?"
If all you have is an Arrow, then you must make up your mind what you're going to do ahead of time. ArrowChoice gives you the ability to make basic "yes or no" decisions at run time. ArrowApply gives you arbitrary computed jumps.

Sometimes it's useful to restrict what you can do. Duponcheel and Swierstra wrote a parser library as an Arrow; because it forced you to decide the parsing structure ahead of time, their library had enough information to build LR-type parsing tables, and thus to do yacc-style error correction. (Nobody found it useful enough to use, but it still makes a nice demonstration of why plain Arrow is sometimes better.)

Conversely, some applications *need* the extra freedom. Imagine trying to write an interpreter for a toy language with I/O, where IO is a plain Arrow and not a Monad. You can read input and parse it, but you can't actually do IO, because the IO you need to do depends on the input you read - precisely what Arrow forbids!

Stefan
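The three levels can be sketched with the function arrow (->), which happens to be an instance of all three classes (a made-up illustration, not from the post):

  import Control.Arrow

  -- plain Arrow: the pipeline is decided ahead of time
  static :: Int -> Int
  static = arr (+ 1) >>> arr (* 2)

  -- ArrowChoice: a run-time "yes or no" between prepared branches
  branching :: Either Int Int -> Int
  branching = arr (+ 1) ||| arr (* 2)

  -- ArrowApply: run an arrow that was itself computed at run time
  jumping :: (Int -> Int, Int) -> Int
  jumping = app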

Stefan O'Rear wrote:
On Sat, Aug 11, 2007 at 03:00:04PM -0400, Ronald Guida wrote:
The question remains: "What is special about Monad or ArrowApply, compared to Arrow?" or "What is more general about Arrow, compared to Monad or ArrowApply?"
If all you have is an Arrow, then you must make up your mind what you're going to do ahead of time.
ArrowChoice gives you the ability to make basic "yes or no" decisions at run time.
ArrowApply gives you arbitrary computed jumps.
OK, so I thought this through. To summarize the classes of arrows:

1. Arrow is a device for sequencing side-effects in a fixed order.
If all you have is an Arrow, then you must make up your mind what you're going to do ahead of time.
2. ArrowChoice is a device for sequencing side-effects in a fixed order with options. Some effects can be selected at run-time, but (1) the available choices are fixed at compile-time and (2) options have to be selected before running the arrow computation.
ArrowChoice gives you the ability to make basic "yes or no" decisions at run time. BUT, you have to make these decisions before you run the arrow computation.
3. ArrowApply is a device for sequencing side-effects, such that functions can dynamically choose side-effects at run-time based on intermediate results of arrow computations.
ArrowApply gives you arbitrary computed jumps.
--- We know that ArrowApply is equivalent to Monad.
Imagine trying to write an interpreter for a toy language with I/O, and IO is a plain Arrow and not a Monad. You can read input and parse it, but you can't actually do IO because the IO you need to do, depends on the input you read - precisely what Arrow forbids!
Here's a toy language, described by a regular expression: 0(10)*110

I want to read characters, one at a time, and eventually decide to "Accept" or "Reject" a string. Let me try to understand my options.

* With a simple Arrow, I can create a fixed sequence of "read" operations, and I can't act on the results (i.e. by choosing whether or not to keep reading) at run-time.

* With ArrowChoice, I can create a set of alternative paths for "read" operations, and I can choose a path at run-time, but I have to choose a fixed path /before/ I get to see any results.

* With ArrowApply, I can "read" one character and act on that character to choose what actions to perform next. I can read characters until I can render an "Accept" or "Reject" decision.

Clearly, I need ArrowApply (or Monad) to get the job done.

In conclusion: "What is special about Monad or ArrowApply, compared to Arrow?" Arrow lets me sequence side-effects in a fixed order. Monad lets me dynamically choose side-effects at run-time based on intermediate results of previous side-effects.

-- Ron
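A sketch of this recognizer written monadically, using StateT String Maybe from mtl as a tiny character-reader monad (the construction and all names are invented for illustration): each step reads a character and chooses what to read next - exactly the run-time choice that needs ArrowApply/Monad.

  import Control.Monad.State

  type Match = StateT String Maybe

  -- read one character, failing (via Maybe) at end of input
  readC :: Match Char
  readC = get >>= \s -> case s of
    []     -> lift Nothing
    (c:cs) -> put cs >> return c

  expect :: Char -> Match ()
  expect c = readC >>= \c' -> if c == c' then return () else lift Nothing

  -- 0(10)*110
  lang :: Match ()
  lang = expect '0' >> loop
    where
      loop = do
        c1 <- readC
        c2 <- readC
        case (c1, c2) of
          ('1', '0') -> loop        -- consumed another "10"
          ('1', '1') -> expect '0'  -- consumed the final "110"
          _          -> lift Nothing

  accepts :: String -> Bool
  accepts s = runStateT lang s == Just ((), "")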

Ronald Guida wrote:
Here's a toy language, described by a regular expression: 0(10)*110
I want to read characters, one at a time, and eventually decide to "Accept" or "Reject" a string.
Let me try to understand my options.
* With a simple Arrow, I can create a fixed sequence of "read" operations, and I can't act on the results (i.e. by choosing whether or not to keep reading) at run-time.
Nothing stops your Arrow-based RegExp-library from defining suitable combinators to implement choices in RegExp's without using ArrowChoice or ArrowApply. But if your Arrow-type is abstract, the users of your library can only use the combinators you provided, so you can safely assume they do only RegExp parsing, and optimize your Arrows in the runRegExp-function for the special case of RegExp-matching. But if you decide to expose ArrowApply-functionality to the users of your RegExp-library, they are able to define arbitrary string matching on top of your RegExp library, so you can't do any optimizations, because you never know what your users decided to do.

From a software engineering point of view, the idea of Arrow-and-only-Arrow is to encode the whole computation in the internal structure of the arrow, independent of the interpreting language Haskell. This internal structure could be as expressive as Haskell. In contrast, ArrowApply and Monad use regular Haskell expressions for everything Haskell can do (like if-then-else, recursion, ...) and only encode special operations into the internal structure (like access to state, nondeterminism, ...).

This distinction is reflected in the treatment of variables in arrow-based vs. monadic code: monadic code can use normal Haskell variables, while arrow-based code has to keep the variables "inside" the arrow in some structure.

Tillmann

Tillmann Rendel wrote:
Ronald Guida wrote:
Here's a toy language, described by a regular expression: 0(10)*110
I want to read characters, one at a time, and eventually decide to "Accept" or "Reject" a string.
Let me try to understand my options.
* With a simple Arrow, I can create a fixed sequence of "read" operations, and I can't act on the results (i.e. by choosing whether or not to keep reading) at run-time.
Nothing stops your Arrow-based RegExp-library from defining suitable combinators to implement choices in RegExp's without using ArrowChoice or ArrowApply. But if your Arrow-type is abstract, the users of your library can only use the combinators you provided, so you can safely assume they do only RegExp parsing, and optimize your Arrows in the runRegExp-function for the special case of RegExp-matching.
So it seems I was thinking too narrowly of arrows... If I think of an arrow as defining a Domain Specific Embedded Language (DSEL), then with a plain arrow, users can't embed Haskell inside the DSEL.
But if you decide to expose ArrowApply-functionality to the users of your RegExp-library, they are able to define arbitrary string matching on top of your RegExp library, so you can't do any optimizations because you never know what your users decided to do.
If I think of a monad (ArrowApply) as defining a Domain Specific Embedded Language (DSEL), then users can embed anything they want within the DSEL.
From a software engineering point of view, the idea of Arrow-and-only-Arrow is to encode the whole computation in the internal structure of the arrow, independent of the interpreting language Haskell. This internal structure could be as expressive as Haskell. In contrast, ArrowApply and Monad use regular Haskell expressions for everything Haskell can do (like if-then-else, recursion, ...) and only encode special operations into the internal structure (like access to state, nondeterminism, ...).
This distinction is reflected in the treatment of variables in arrow-based vs. monadic code: monadic code can use normal Haskell variables, while arrow-based code has to keep the variables "inside" the arrow in some structure.
So if I want to explain arrows and monads as concisely as possible: Arrows and monads are abstract data types used to construct Domain Specific Embedded Languages (DSELs) within Haskell. A simple arrow provides a closed DSEL. A monad is a special type of arrow that creates an open DSEL by allowing users to embed arbitrary Haskell within it.

-- Ron

Ronald Guida wrote:
Given the question "What is a Monad", I would have to say "A Monad is a device for sequencing side-effects."
There are side-effects and there are side-effects. If the only monad you use is Maybe, the only side-effect you get is a slight warming of the CPU.

Dave Menendez pointed to that fine Wadler link earlier. Please read it. To wit, in Section 2: "Explaining Monads" the "essence of an algorithm can become buried under the plumbing required to carry data from its point of creation to its point of use." Monads can help keep the clarity of your code untrammelled by providing implicit plumbing, "side-channels" if you prefer, when data is moved around.

In fact if you follow Wadler all the way to his monadic expression evaluator, you see that you could modularize your code in awesomely cool ways. You get to see how the kernel of the expression evaluator could be written for a generic monad and compiled once-and-for-all. Any additional feature (the "variations") is coded by enriching the monad. Monads are powerful devices for modularizing code. Available for free. Only in Haskell (thanks to type classes!). Get them today.

"Side-effects" is a piece of linguistic cruft played fast-and-loose by too many people in this game. "Sequencing" suffers the same disease.

When I read "side-effects", I understand it as "unwanted effects", like "aliasing", and "effects depending on the order of execution". I'm not sure if my understanding here is correct. I hope Haskell does not allow "side-effects" but only "effects", meaning the monads do not allow you to write the typical ill-behaving code you get when doing real imperative programming, enforcing a single wiring of execution and not allowing the capture of the RealWorld object. In Concurrent Clean, special compiler support is present to enforce "uniqueness typing", and in Haskell special compiler support is available to make the RealWorld object not available at runtime (so e.g. you can't put the RealWorld object in a list). Is this correct?

BTW: What is the correct word in Haskell for "object"? I mean the (lazy) value you get when evaluating a data constructor?

On Mon, 2007-08-13 at 19:31 +0200, peterv wrote:
When I read "side-effects", I understand it as "unwanted effects", like "aliasing", and "effects depending on the order of execution". I'm not sure if my understanding here is correct. I hope Haskell does not allow "side-effects" but only "effects", meaning the monads do not allow you to write the typical ill-behaving code you get when doing real imperative programming, enforcing a single wiring of execution, not allowing the capture of the RealWorld object. In Concurrent Clean special compiler support is present to enforce "uniqueness typing", and in Haskell special compiler support is available to make the RealWorld object not available at runtime (so e.g. you can't put the RealWorld object in a list). Is this correct?
Monads certainly do allow "side-effects" in that way. Heck, without allowing aliasing there wouldn't be much to compel the use of mutability. Aliasing is the great boon and great curse of imperative programming.

Order of execution issues are tricky. On the one hand, you must always (directly or indirectly) specify the order of execution, so technically there's no uncertainty. On the other hand, there are two distinct definitions of liftM2:

  liftM2'1 f ma mb = do a <- ma; b <- mb; return (f a b)
  -- and
  liftM2'2 f ma mb = do b <- mb; a <- ma; return (f a b)

Note that order of -evaluation- is moot (at least as much as it is for any term). Laziness does not come into it (unless you use lazy IO, in which case you get what you deserve).
BTW: What is the correct word in Haskell for "object"? I mean the (lazy) value you get when evaluating a data constructor?
What you said doesn't really make much sense, but I think the word you want is a "thunk".

David Menendez wrote:
This is probably because no one has found a compelling use case for comonadic-style programming in Haskell. There have been some interesting papers, such as "Comonadic functional attribute evaluation" by Uustalu and Vene, but nothing as compelling as Wadler's "Monads for functional programming".
That same "Comonadic" paper describes how every zipper is a comonad. I bet if we found more examples of zippers, comonads would be in business. Much as I respect Huet 1997, more zipper tutorials wouldn't harm either.
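A sketch of the zipper-as-comonad idea for a list zipper (all names invented; the Comonad class is again an assumption, since it is not in the standard libraries): coextend relabels every position with a function of the whole zipper focused there.

  data Zipper a = Zipper [a] a [a]   -- left context, focus, right context

  class Comonad w where
    coreturn :: w a -> a
    coextend :: w a -> (w a -> b) -> w b

  -- all zippers obtained by shifting the focus left resp. right
  lefts, rights :: Zipper a -> [Zipper a]
  lefts  (Zipper (l:ls) x rs) = let z = Zipper ls l (x:rs) in z : lefts z
  lefts  _                    = []
  rights (Zipper ls x (r:rs)) = let z = Zipper (x:ls) r rs in z : rights z
  rights _                    = []

  instance Comonad Zipper where
    coreturn (Zipper _ x _) = x
    coextend z g = Zipper (map g (lefts z)) (g z) (map g (rights z))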

ronguida:
Monads are undoubtedly more pervasive, and that could be because there aren't as many arrow and comonad tutorials, atomic ones or otherwise.
Moreover, Comonad isn't even in the standard libraries (Hoogle returns no results for it).
When I searched for tutorials on monads, I found lots of them. In fact, I have heard that writing (yet another) monad tutorial is part of standard Haskell initiation.
Regarding the available tutorials, I've collected them under each flavour here:

  haskell.org/haskellwiki/Blog_articles/Monads

Including 42 monad tutorials, 6 arrow tutorials, 3 comonad tutorials, and 0 on applicative functors. The source for the `standard' (only) comonad library is available from one of the sigfpe tutorials, but really should be on Hackage.

-- Don

Ronald Guida wrote:
Here's my interpretation of the table:
  ----------------------------------------------------------------------
   Structure   | Subject  | Action     | Verb       | Result
  -------------+----------+------------+------------+----------
   function    | a        | a->b       | flip ($)   | b
   Functor     | f a      | a->b       | <$>        | f b
   Applicative | f a      | f (a->b)   | flip <*>   | f b
   Monad       | m a      | a->m b     | >>=        | m b
   Comonad     | w a      | w a->b     | =>>        | w b
   Arrow       | a b c    | a c d      | >>>        | a b d
  ----------------------------------------------------------------------
This is the first time I've seen <$>. What a pleasing synonym for fmap. To maintain a similar order of parameters, I think you'd want "flip <$>" up there.

Ronald Guida wrote:
  ----------------------------------------------------------------------
  Foo a       = Foo     { runFoo     :: a }                 -- Functor
  State s a   = State   { runState   :: s -> (a, s) }       -- Monad
  Context c a = Context { runContext :: (a, c) -> a }       -- Comonad
  Thing s a b = Thing   { runThing   :: (a, s) -> (b, s) }  -- Arrow???
  ----------------------------------------------------------------------
I believe there is a way to see all of the above as forms of "generalized application", although not in the way presented here. (I thank Brian for pointing to the possibility of a unified treatment in the first place.)

A point in common among your run* functions is that you can always observe the instance of type (a) in (m a :: Monad a). But monads are equipped with (return :: a -> m a), not (coreturn :: m a -> a). Your 2nd table is more about comonads, at least the coreturn aspect, than about whatever unity you're hoping to achieve. There's more about coreturn here:

  http://www.haskell.org/pipermail/haskell-cafe/2007-August/030153.html

So we need to explore "generalized application" without being too explicit about "unwrapping" monads nor "wrapping" comonads. Why? Because being too explicit invariably locks us into one particular instantiation, and the generalization goes poof! Think of all the otherwise rich and creative papers written on the IO monad and its innards.

The instances of monads I've seen are in themselves so sexy (probabilistic modelling! digital circuit simulation! IO! concurrency!) that they end up overwhelming the essence of monadism pregnant within. Better to start with something boring. Like the Maybe monad.

Feck heh, "The essence of monadism pregnant within." Another fine title for another monad tutorial.
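For concreteness, here is one made-up instantiation per row of the table (the Comonad row is omitted, since =>> is not in the standard libraries):

  import Control.Applicative
  import Control.Arrow ((>>>))

  r1 :: Int
  r1 = flip ($) 3 (+ 1)                  -- function:    4

  r2 :: Maybe Int
  r2 = (+ 1) <$> Just 3                  -- Functor:     Just 4

  r3 :: Maybe Int
  r3 = flip (<*>) (Just 3) (Just (+ 1))  -- Applicative: Just 4

  r4 :: Maybe Int
  r4 = Just 3 >>= \x -> Just (x + 1)     -- Monad:       Just 4

  r5 :: Int
  r5 = ((+ 1) >>> (* 2)) 3               -- Arrow (->):  8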

On 11/08/07, Ronald Guida wrote:
Here's my interpretation of the table:
  ----------------------------------------------------------------------
   Structure   | Subject  | Action     | Verb       | Result
  -------------+----------+------------+------------+----------
   function    | a        | a->b       | flip ($)   | b
   Functor     | f a      | a->b       | <$>        | f b
   Applicative | f a      | f (a->b)   | flip <*>   | f b
   Monad       | m a      | a->m b     | >>=        | m b
   Comonad     | w a      | w a->b     | =>>        | w b
   Arrow       | a b c    | a c d      | >>>        | a b d
  ----------------------------------------------------------------------
Nice.

Kim-Ee Yeoh wrote:
... I think you'll find that each of those structures has its privileged place in your code.
Agreed. I'm still a beginner; I'm not sure how to choose one structure over another, at least not yet. But that's because ...
Monads are undoubtedly more pervasive, and that could be because there aren't as many arrow and comonad tutorials, atomic ones or otherwise.
And I'm trying to say that these shouldn't be separate tutorials at all - it's much more instructive to compare and contrast.

Moreover, Comonad isn't even in the standard libraries (Hoogle returns no results for it).
When I searched for tutorials on monads, I found lots of them. In fact, I have heard that writing (yet another) monad tutorial is part of standard Haskell initiation.
That's what I was doing above :-) ....

One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture. /All/ of the above structures are about combining compatible things together in a row. /None/ of them force any particular order of evaluation - that all comes from the particular instance. So it's only a particular feature of IO that it sequences the side effects. Others don't - we can have a lazy State monad that just builds up big thunks. IO could be implemented as any of the above structures, and still be perfectly able to keep things in order. Indeed, uniqueness types as in Clean are arguably just the first one - function composition.

functor IO would be really boring - we could just perform a sequence of actions with no choices at all. (But the whole sequence could be repeated, and I guess the Structure could be nested for fixed loops.)

The key to the choice of IO as a Monad comes back to the argument about 'simplicity' or whatever we want to call it - I agree it's rather context dependent, and indeed I was rather flippant at the end of my first message.

But let's look at the nature of the actual things being sequenced, the actions above. In only 3 cases are the actions simple enough to take a single /a/ argument: function a->b; Functor a->b; Monad a->m b. In function and functor, the action takes no part in the complexity, doesn't know about it. In function application the action gets used (possibly) once, in functor and monad possibly many times. Only in Monad does the action have a say in the complexity, by returning a possibly non-trivial m b.

So that's the reason Monads are so handy - the combined actions are simple, but they can at least participate - influence the flow of control, if we are talking about an IO-like Monad.

Also, arrows are supposed to be more general than both monads and comonads. If I could just figure out what each structure (functor, monad, comonad, arrow) is doing, and compare and contrast them, then I probably will have made leaps of understanding. I have a sense that tells me that the table above and the data structures below probably start to scratch the surface of understanding.
Arrows are most general because they have full access to the complexity going on in the structure. Each arrow can do arbitrarily complex (possibly bidirectional) negotiation with its input, and possibly asynchronously arbitrarily complex negotiation with its output. Any sort of data can flow any way at any time, the only restriction is that for an 'Arrow a b' object, the input side uses a's and the output b's. Compare a monad - the input must be single, simple a's. All that can happen is that the function gets called multiple times. -- Brian_Brunswick____brian@ithil.org____Wit____Disclaimer____!Shortsig_rules!

Brian Brunswick wrote:
One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture.
/All/ of the above structures are about combining compatible things together in a row. /None/ of them force any particular order of evaluation - that all comes from the particular instance. So it's only a particular feature of IO that it sequences the side effects. Others don't - we can have a lazy State monad that just builds up big thunks.
I am a bit astonished.

Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects; that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here. Similarly with the Error Monad (i.e. Either Err, for some Err type). I won't talk about List Monad because I always had difficulty understanding the List Monad.

What about State? The effect is reading/writing the state. Again, the State Monad takes care that these effects get sequenced, and again that's what you expect it to do for you. And so on...

This is -- of course -- not a proof, so maybe there /are/ Monads that don't sequence (their) effects. I'd be most interested to see an example, if there is one, to bring myself nearer to the -- unattainable -- goal of full enlightenment wrt Monads.

Cheers Ben
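The Maybe point as a tiny self-contained example (made up for illustration):

  chain :: Maybe Int
  chain = do
    x <- Just 1
    _ <- (Nothing :: Maybe ())
    return (x + 1)
  -- chain == Nothing: the first Nothing aborts the rest of the chain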

On Aug 13, 2007, at 16:29 , Benjamin Franksen wrote:
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
Clearly it does, but not as a side effect of the *monad*. It's ordinary Haskell data dependencies at work here, not some mystical behavior of a monad.
What about State? The effect is reading/writing the state. Again, the State Monad takes care that these effects get sequenced, and again that's what you expect it to do for you.
No, I expect it to carry a value around for me. If I carry that value around myself instead of relying on the monad to do it for me, *the calculation still gets sequenced by the data dependencies*.

-- 
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university    KF8NH
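The by-hand version of that claim, sketched (a made-up example): threading a counter manually, the data dependency s0 -> s1 -> s2 already fixes the order of the two "effects"; State merely packages this plumbing.

  tick :: Int -> (Int, Int)
  tick s = (s, s + 1)

  twoTicks :: Int -> ((Int, Int), Int)
  twoTicks s0 =
    let (a, s1) = tick s0   -- first "effect"
        (b, s2) = tick s1   -- second "effect", forced after the first by s1
    in  ((a, b), s2)
  -- twoTicks 0 == ((0, 1), 2)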

On Mon, Aug 13, 2007 at 05:13:01PM -0400, Brandon S. Allbery KF8NH wrote:
On Aug 13, 2007, at 16:29 , Benjamin Franksen wrote:
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
Clearly it does, but not as a side effect of the *monad*. It's ordinary Haskell data dependencies at work here, not some mystical behavior of a monad.
It's the *effect* of a monad, not the *side* effect. The type of >>= defines this dependency. And when you have a chain of dependencies, that is sometimes referred to as a sequence. True, it's not mystical, but it's still sequenced.

Try executing:

  do { x <- return 2; undefined; return (x*x); }

in any monad you like, and you'll find that regardless of the *data* dependencies (the return value of this monadic action is unambiguous), the undefined is evaluated *before* the value 4 is returned.

-- 
David Roundy
Department of Physics, Oregon State University

David Roundy wrote:
It's the *effect* of a monad, not the *side* effect. The type of >>= defines this dependency. And when you have a chain of dependencies, that is sometimes referred to as a sequence. True, it's not mystical, but it's still sequenced.
How can a Haskell type define a data dependency? Haskell functions are non-strict, so they may choose to ignore arguments, and ignored arguments are not evaluated at all.
Try executing:
do { x <- return 2; undefined; return (x*x); }
in any monad you like, and you'll find that regardless of the *data* dependencies (the return value of this monadic action is unambiguous), the undefined is evaluated *before* the value 4 is returned.
  runIdentity $ do { x <- return 2; undefined; return (x * x) }
  4
Tillmann

On 13/08/07, David Roundy wrote:
Try executing:
do { x <- return 2; undefined; return (x*x); }
in any monad you like, and you'll find that regardless of the *data* dependencies (the return value of this monadic action is unambiguous), the undefined is evaluated *before* the value 4 is returned. --
  Prelude> :m + Control.Monad.Identity
  Prelude Control.Monad.Identity> runIdentity $ do { x <- return 2; undefined; return (x*x); }
  Loading package mtl-1.0 ... linking ... done.
  4

-- Brian_Brunswick____brian@ithil.org____Wit____Disclaimer____!Shortsig_rules!

On 8/13/07, David Roundy

Brandon S. Allbery KF8NH wrote:
On Aug 13, 2007, at 16:29 , Benjamin Franksen wrote:
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
Clearly it does, but not as a side effect of the *monad*. It's ordinary Haskell data dependencies at work here, not some mystical behavior of a monad.
I can't remember claiming that Monads have any mystic abilities. In fact, these Monads are implemented in such a way that their definition /employs/ data dependencies to enforce a certain sequencing of effects. I think that is exactly the point, isn't it?
What about State? The effect is reading/writing the state. Again, the State Monad takes care that these effects get sequenced, and again that's what you expect it to do for you.
No, I expect it to carry a value around for me. If I carry that value around myself instead of relying on the monad to do it for me, *the calculation still gets sequenced by the data dependencies*.
Of course, you can unfold (i.e. inline) bind and return (or never use them in the first place). Again, nobody claimed Monads do the sequencing by employing Mystic Force (tm); almost all Monads can be implemented in plain Haskell, nevertheless they sequence certain effects -- You Could Have Invented Them Monads Yourself ;-)

The Monad merely captures the idiom, abstracts it, and ideally implements it in a library for your convenience and for the benefit of those trying to understand what your code is supposed to achieve. She reads 'StateM ...' and immediately sees 'ah, there he wants to use some data threaded along for reading and writing'.

Cheers Ben

On Mon, 2007-08-13 at 22:29 +0200, Benjamin Franksen wrote:
Brian Brunswick wrote:
One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture.
/All/ of the above structures are about combining compatible things together in a row. /None/ of them force any particular order of evaluation - that all comes from the particular instance. So it's only a particular feature of IO that it sequences the side effects. Others don't - we can have a lazy State monad that just builds up big thunks.
I am a bit astonished.
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
You are wrong. Proof: Let's take a simpler example: Identity. QED

This also disproves David Roundy's statement that

  do x <- return 2; undefined; return (x*x)

will hit bottom. Reader also does not sequence its "actions". Writer is a kind of funny example.

Certainly, any monad instance where (>>=) needs its first argument to determine whether to continue, e.g. Maybe, Either, IO, Parser, Cont, List, Tree, will clearly need to force its first argument before continuing, but that's just the nature of the situation.

On Mon, 2007-08-13 at 22:29 +0200, Benjamin Franksen wrote:
Brian Brunswick wrote:
One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture.
/All/ of the above structures are about combining compatible things together in a row. /None/ of them force any particular order of evaluation - that all comes from the particular instance. So it's only a particular feature of IO that it sequences the side effects. Others don't - we can have a lazy State monad that just builds up big thunks.
I am a bit astonished.
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
Derek Elkins wrote:

You are wrong. Proof: Let's take a simpler example: Identity. QED
I don't accept this proof. Note the wording: 'Monads sequence (certain, monad specific) effects'. Identity has no effects, ergo no sequencing has to happen. I didn't claim that /all/ monadic actions are (necessarily) sequenced.
This also disproves David Roundy's statement that do x <- return 2; undefined; return (x*x) will hit bottom.
Reader also does not sequence it's "actions".
Ok, I admit defeat now ;-) Monads in general /allow/ sequencing of (certain) effects, but it is not necessary for a monad to do so.
Writer is a kind of funny example.
In which way?
Certainly, any monad instance where (>>=) needs it's first argument to determine whether to continue, e.g. Maybe, Either, IO, Parser, Cont, List, Tree will clearly need to force it's first argument before continuing, but that's just the nature of the situation.
Don't forget State: clearly it sequences actions, even though it always continues; but maybe 'sequencing' is too strong a word. Just as with Reader, a sequence of reads (with no writes in between) may actually happen in any order; State imposes strict order on groups of adjacent reads and all (single) writes, correct?

Ok, I think I understand where I was misled: I took the means for the end. There are many monads that impose a certain order on (some) effects; but this is done only as far as necessary to maintain X, whatever X may be - maybe X is just the monad laws?

What about: The imperative way always imposes a sequencing of actions, period. Monads in Haskell (except IO, which is just imperative programming) allow us to impose ordering on effects only partially, ideally only where necessary.

Thanks for further enlightening me. Hey, I just said that sequencing is a means, not an end, but maybe even this is not necessarily true. I imagine a 'Sequence Monad' whose only effect would be to order evaluation... would that be possible? Or useful?

Cheers Ben

On 8/13/07, Benjamin Franksen wrote:
Ok, I admit defeat now ;-) Monads in general /allow/ sequencing of (certain) effects, but it is not necessary for a monad to do so.
I'm sure I said that 300 or so posts ago :-)

Here's an example that might help make some of this explicit. Suppose I have a function f :: a -> b -> c. Now suppose I have some values "in a monad":

  x :: m a
  y :: m b

How do I lift f so I can apply it to x and y and get a value of type m c? Obviously I can't just write f x y. There's an obvious solution that comes to mind:

  do x' <- x
     y' <- y
     return $ f x' y'

But there's another solution:

  do y' <- y
     x' <- x
     return $ f x' y'

So in order to lift f I was forced to make an explicit decision about the order in which I wanted to process x and y. (I think of >>= as a kind of bottleneck that forces you to compute the arguments to f one at a time in a particular order - but there are already too many metaphors flying around, so maybe you should ignore that.) Information about that decision now becomes available to the monad's implementation. If you want, you can use this information to implement 'sequencing' (whatever that means). But you're also free to ignore it. If you do ignore it then you have a commutative monad, with Reader being the best known example.

It's just like ordinary monoids. The very act of writing x*y imposes an order on x and y. You can use this to define binary operators that can make use of this ordering. But the most familiar monoid operators (addition and multiplication of course) are commutative and order doesn't matter. It'd be cool to have a notation that didn't force you to put one argument or the other first. (Well, I know of one, but it's a bit wacky: http://www.lawsofform.org/interpretations.html)

-- Dan
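A quick made-up check of the commutativity point, using Reader from mtl: the two argument orders agree, so the forced ordering carries no information.

  import Control.Monad.Reader

  liftM2L, liftM2R :: Monad m => (a -> b -> c) -> m a -> m b -> m c
  liftM2L f ma mb = do { a <- ma; b <- mb; return (f a b) }
  liftM2R f ma mb = do { b <- mb; a <- ma; return (f a b) }

  check :: (Int, Int)
  check = ( runReader (liftM2L (+) ask (asks (* 2))) 10
          , runReader (liftM2R (+) ask (asks (* 2))) 10 )
  -- check == (30, 30): both orderings give the same answer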

Benjamin Franksen wrote:
Brian Brunswick wrote:
One thing that I keep seeing people say (not you), is that monads /sequence/ side effects. This is wrong, or at least a limited picture.
/All/ of the above structures are about combining compatible things together in a row.
I am a bit astonished.
Let's take the simplest example: Maybe. The effect in question is the premature abortion of a computation (when Nothing is returned). And of course Maybe sequences these effects, that's what you use it for: the _first_ action to be encountered that returns Nothing aborts the computation. Clearly sequencing goes on here.
"sequencing" is an overloaded term. Dan Weston said in an earlier thread that monads sequence denotationally, but not necessarily operationally. Brian makes the same point. I also believe that this is an important distinction to make.
I won't talk about List Monad because I always had difficulty understanding the List Monad.
Maybe that's because the distinction between denotational and operational sequencing actually matters for the list monad. I'll try to explain.

Consider

  a >>= b >>= c

This is equivalent to

  [()] >> a >>= b >>= c

We can think of this as defining a tree with three levels:
- () at the root.
- 'a' produces the children of the root.
- 'b' and 'c' each add another level to the tree - given a node from the previous level, they produce the children of that node.

In other words, you *specify* a breadth first traversal of that tree, and you *sequence* the subsequent levels of that traversal. The catch here is lazy evaluation - each intermediate list of the breadth first traversal is produced incrementally, so what you get at run time is actually a *depth first* traversal. As a result, there's no nice sequencing anymore, operationally.

HTH,
Bertram

P.S. The explanation I gave above is deliberately simplified - it's actually only an explanation of the list *arrow*. The list monad allows the structure of the traversal to be different for various subtrees of a node. Consider

  [()] >> [1,2,3] >>= \n -> if odd n then [n]
                            else [1..n] >>= \m -> [n+m]

which produces the following tree:

  ()
  +- n=1
  |  `- 1
  +- n=2
  |  +- m=1
  |  |  `- 3
  |  `- m=2
  |     `- 4
  `- n=3
     `- 3

This tree no longer has uniform levels. In my opinion the best way to think of this is operationally, as a depth first traversal.
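One made-up way to see the depth-first production operationally: give the computation an infinite first level. A strict level-by-level sweep could never finish, yet elements still stream out lazily.

  firstFew :: [Int]
  firstFew = take 5 $
    [1 ..] >>= \n -> if odd n then [n] else [1 .. n] >>= \m -> [n + m]
  -- firstFew == [1,3,4,3,5], despite the infinite list of n's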
participants (16)
- Arie Peterson
- Benjamin Franksen
- Bertram Felgenhauer
- Brandon S. Allbery KF8NH
- Brian Brunswick
- Brian Brunswick
- Dan Piponi
- David Menendez
- David Roundy
- Derek Elkins
- dons@cse.unsw.edu.au
- Kim-Ee Yeoh
- peterv
- Ronald Guida
- Stefan O'Rear
- Tillmann Rendel