
Dougal Stanton wrote:
I think I will have to, sooner or later, become more versed in the subtle ways of non-IO monads. They seem to be capable of some seriously tricksy shenanigans. Keep trying. At some point you will achieve enlightenment in a blinding flash of light. Then you will write a monad tutorial. Everybody learning Haskell goes through this.
In the hope of helping, here is *my* brief tutorial.

Monads capture the pattern of "do X then Y in context C". All other programming languages have a single fixed context built in, and almost all of them use the same one. This context is the reason that the order of X and Y matters: side effects are recorded in C, so stuff that happens in X is going to affect Y. Pure functions have no context, which means that X cannot affect Y (there is no context to transmit the information). Which is great, unless you actually want to say "do X and then do Y". In order to do that you need to define a context for the side effects.

Like I said earlier, most languages have exactly one context built in, with no way to change it. Haskell has that context too: it's called IO (as you may have guessed, these contexts are called "monads" in Haskell). The thing about Haskell is that you can substitute other contexts instead. That's what's so great about monads. But since you have only ever programmed in languages that have the IO monad built in, the idea of replacing it with something else seems very strange.

Ever programmed in Prolog? The backtracking that Prolog does is a different way of propagating side effects from one step to the next. Prolog provides a different context than most languages, and in Haskell you would describe it using a different monad. However, because Prolog doesn't have monads it wasn't able to handle IO properly: a backtracking computation with side effects in Prolog will repeat each side effect every time it gets reached. You could actually describe this in Haskell by defining a new monad with a backtracking rule but based on IO. Or you could have a different monad that only executes the side effects once the computation has succeeded.

Each monad has two functions: "return" and ">>=" (known as "bind"). "return" says what it means to introduce some value into the context. It's really just there for the types, and in most cases it's pretty boring. ">>=" describes how side effects propagate from one step to the next, so it's the interesting one.

The final (and really cool) thing you can do is glue a new monad together using "monad transformers" as a kit of parts. A monad transformer takes an "inner" monad as a type argument, and its bind operation describes how effects in the inner monad are propagated in the outer monad. That's a bit more advanced, but when you come to create your own monads it's generally easier than building from scratch.

Now I suggest going to http://en.wikibooks.org/wiki/Haskell/Understanding_monads and seeing if it makes any more sense. Ignore the nuclear waste metaphor if it doesn't work for you. The State monad is described about halfway down, so you might think about that. State takes a type argument for the state variable, so side effects are restricted to changes in that variable. Hence "State Integer" is a monad with a state consisting of one integer. You might like to consider what could be done with random values in "State StdGen".

(I've appended a few small code sketches at the end of this message, in case they make some of the above more concrete.)

Hope this helps.

Paul.
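P.S. The promised sketches follow. First, "return" and ">>=" in the Maybe monad, where the context is "this step may have failed": once one step produces Nothing, bind skips everything after it. The function name halveInput is just my own illustration; readMaybe comes from Text.Read in the standard library.

    import Text.Read (readMaybe)

    -- Parse a string to an Int, then halve it if it is even.
    -- Each ">>=" threads the "might have failed" context to the next step.
    halveInput :: String -> Maybe Int
    halveInput s = readMaybe s >>= \n ->
                   if even n then return (n `div` 2) else Nothing

    -- halveInput "10" == Just 5
    -- halveInput "7"  == Nothing
    -- halveInput "x"  == Nothing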
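Second, the simplest "backtracking" context is the plain list monad: each step may produce several candidate results, and bind tries each of them in turn, rather like Prolog without the cut. Again the names are just mine.

    import Control.Monad (guard)

    -- Find pairs (x, y) with x < y and x + y == 7, Prolog-style:
    -- each "<-" picks a candidate, and "guard" fails (backtracks)
    -- whenever the candidate doesn't fit.
    pairsSummingTo7 :: [(Int, Int)]
    pairsSummingTo7 = do
      x <- [1 .. 6]
      y <- [x + 1 .. 6]
      guard (x + y == 7)
      return (x, y)

    -- pairsSummingTo7 == [(1,6),(2,5),(3,4)]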
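Third, the "State StdGen" idea: the random generator is the hidden state that bind threads from one step to the next, so you never pass it around by hand. This sketch assumes the usual random and mtl packages.

    import Control.Monad (replicateM)
    import Control.Monad.State (State, state, evalState)
    import System.Random (StdGen, mkStdGen, randomR)

    -- A die roll as a stateful computation: the generator is the hidden state.
    rollDie :: State StdGen Int
    rollDie = state (randomR (1, 6))

    -- Roll three dice; the updated generator is threaded between rolls by ">>=".
    threeRolls :: [Int]
    threeRolls = evalState (replicateM 3 rollDie) (mkStdGen 42)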
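Finally, a small transformer sketch (again assuming mtl): StateT Int IO stacks a counter on top of IO, so the outer bind threads the counter from step to step while liftIO still lets you run ordinary IO actions inside.

    import Control.Monad.State (StateT, get, put, evalStateT)
    import Control.Monad.IO.Class (liftIO)

    -- Count how many lines the user enters before typing "quit".
    -- StateT adds a counter on top of IO; liftIO runs the inner IO actions.
    countLines :: StateT Int IO ()
    countLines = do
      line <- liftIO getLine
      if line == "quit"
        then do n <- get
                liftIO (putStrLn ("You typed " ++ show n ++ " lines."))
        else do n <- get
                put (n + 1)
                countLines

    main :: IO ()
    main = evalStateT countLines 0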