Thinking about monads

I'm getting a better grasp on monads, I think. My original problem, I think, was that I was still thinking imperatively. So when I saw this:

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b

I didn't understand the "big deal". I thought, okay so you "do" something with the function (a -> m b) and you "arrive" at m b.

Now I realize that chaining a sequence of Monads via that function, a -> m b, means that a is available to any function further down the line, because it is an argument to a series of nested functions. So,

doSomething = thing >>= \x ->
              thing2 >>= \y ->
              return (x,y)

produces a series of nested functions in which all the arguments of earlier functions are available to later functions: x and y are available to "return" because they are arguments of functions further up in the chain. This resembles imperative code in which any variable, once set, is available further down.

Any clarifications welcome.

-Mike
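A minimal, self-contained sketch of that nesting (thing and thing2 are given dummy Maybe values here purely so the example runs; the point is the explicit parenthesization in the comment):

thing, thing2 :: Maybe Int
thing  = Just 1
thing2 = Just 2

doSomething :: Maybe (Int, Int)
doSomething = thing >>= \x ->
              thing2 >>= \y ->
              return (x, y)

-- Fully parenthesized, this is
--   thing >>= (\x -> (thing2 >>= (\y -> return (x, y))))
-- so the lambda taking y is nested inside the lambda taking x,
-- which is why both x and y are in scope at the final return.

main :: IO ()
main = print doSomething   -- prints Just (1,2)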

Michael Mossey wrote:
I'm getting a better grasp on monads, I think. My original problem, I think, was that I was still thinking imperatively. So when I saw this:
class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
I didn't understand the "big deal". I thought, okay so you "do" something with the function (a -> m b) and you "arrive" at m b. [...]
Think of f being a computation, in which something is missing. It takes this something through a parameter. So f is actually a function, which takes a value and results in a computation. Another intuition: f is a parametric computation. Now if c0 is a computation, then

c0 >>= f

is another computation built by feeding the result of c0 to f. More graphically you've plugged the result cable of c0 to the input port of f.

As a real world example, consider a computation, which prints a value x:

print x

That x has to come from somewhere, so this should actually be a function of some value x:

\x -> print x

If that x should come from another computation, then (>>=) comes into play. You can pass the result of one computation to the above one. For example, if x comes from the result of getChar, you can write:

getChar >>= \x -> print x

or simply:

getChar >>= print

Now to the less specific case. Look at the type of (>>=):

(>>=) :: m a -> (a -> m b) -> m b

The left parameter of (>>=) is the source computation, which results in something of type a. The right parameter is a parametric computation, which results in something of type b, but depends on something of type a. The result of (>>=) is a computation constructed by feeding the result of the left computation as the parameter to the right computation.

I hope, this helped.

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/
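Wrapped into a complete program, that last line might look like this (a minimal sketch, nothing beyond the Prelude):

-- Read one character from stdin and print its Show representation,
-- e.g. typing 'a' prints 'a'.
main :: IO ()
main = getChar >>= print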

Ertugrul Soeylemez wrote:
Michael Mossey wrote:
I'm getting a better grasp on monads, I think. My original problem, I think, was that I was still thinking imperatively. So when I saw this:
class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
I didn't understand the "big deal". I thought, okay so you "do" something with the function (a -> m b) and you "arrive" at m b. [...]
Think of f being a computation, in which something is missing. It takes this something through a parameter. So f is actually a function, which takes a value and results in a computation. Another intuition: f is a parametric computation. Now if c0 is a computation, then
c0 >>= f
is another computation built by feeding the result of c0 to f. More graphically you've plugged the result cable of c0 to the input port of f.
As a real world example, consider a computation, which prints a value x:
print x
That x has to come from somewhere, so this should actually be a function of some value x:
\x -> print x
If that x should come from another computation, then (>>=) comes into play. You can pass the result of one computation to the above one. For example, if x comes from the result of getChar, you can write:
getChar >>= \x -> print x
or simply:
getChar >>= print
Well, here are my thoughts. I know what you write is the way monads are introduced in most of the texts I've seen, but to the eyes of an imperative programmer, nothing "special" is going on. Let's give an example (but replace getChar by something deterministic). When I see

thing1 >>= thing2

I think to myself, this is basically the same as:

(Example A)
f input = result'
    where result  = thing1 input
          result' = thing2 result

But it's not the same, because certain problems arise. What's special about monads is the way they are used and the particular problem they are trying to solve. For example, here are some problems we need to solve:

(1) how do you pass state from one function to the next in the most elegant way (avoiding the need to make complicated data types and having the ability to hide implementation details)

(2) how do you deal with errors? how do you "return early" from a set of computations that have hit a wall?

I confess I have not read any chapters on monads themselves, but I have finished Chapter 10 of Real World Haskell, which is mostly about motivating monads and implementing something very close to them. They use an operator they call ==>, which is nearly identical to >>=.

I see one answer to (1). Something like

(Example B)
f input = thing1 input >>= \result ->
          thing2 result >>= \result' ->
          return (result, result')

separates the idea of the state we are passing "down the chain" from the results we get. I'll rewrite example (A) above, to be more explicit about what we are trying to do:

(Example C)
f state = (result, result')
    where (result,  state')  = thing1 state
          (result', state'') = thing2 result state'

In example (B), the results are naturally available because they are arguments to functions, and all functions further down the chain are nested within them.

Now, what about problem (2)? The way the >>= operator is defined, it allows "short-circuiting" any remaining functions after we get a bad result. If the state is Maybe or Either, we can define >>= such that a result of Nothing or Left causes all remaining functions to be skipped. We could do this without monads, but it would look very ugly.

As a beginner, I'm not trying to lecture anyone, but putting down my thoughts so I can get feedback. I feel there's no way to "understand" monads without understanding the motivation of the problem we are trying to solve, or without seeing specific implementations. Chapter 10 of Real World Haskell provides a lot of motivation by showing early awkward attempts to solve these problems.

Regards, Mike
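To make problem (2) concrete, here is a minimal sketch of the short-circuiting using Maybe's standard >>= (safeDiv and calc are illustrative names, not functions from Real World Haskell):

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)

-- Because Nothing >>= f is defined to be Nothing, a failure at any
-- step skips every remaining step in the chain.
calc :: Int -> Maybe Int
calc n = safeDiv 100 n >>= \a ->
         safeDiv a 2   >>= \b ->
         return (a + b)

-- calc 5 == Just 30
-- calc 0 == Nothing   (the first step fails; the later steps never run)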

Hello Michael,
you're basically trying to give monads a better name. Don't forget that
they are an abstract structure, which means that trying to find
intuitions or motivations is the same as trying to find real world
examples.
Monads are not a way to pass the result of some function to another
function. We've got function composition for that. Short-circuiting is
a feature of particular monads like Maybe, and as you said, you could
well do without them. In Haskell, monads are an abstract combinator
class, which brings you mainly two advantages: Generalization of
functionality and impure operations encapsulated in pure objects.
Those objects are the monadic values, which can be interpreted in a
number of ways. I like to interpret them as computations, and this is
probably the most common interpretation. Have a look at this:
x :: Maybe Integer
x = Just 3
As you already know, Maybe is a monad. Just 3 is not a value right
away. It is a computation, which results in 3. The result of
computations may have a certain structure. Maybe allows lack of a
result (the Nothing computation), which adds structure to the result.
The list type is also a monad. It adds structure in that it allows
arbitrarily many results. Monads also allow you to combine computations
(pass the result of one computation to another) in a structure-specific
manner. In the Maybe monad, this is the short-circuiting you mentioned,
if there is no result. In the list monad, each of the results is passed
and all individual results are collected in a larger result list
(non-determinism).
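A small sketch of that structure-specific combining, using nothing beyond the standard Maybe and list instances:
-- Maybe: combining stops as soon as there is no result.
a, b :: Maybe Int
a = Just 3  >>= \x -> Just (x + 1)   -- Just 4
b = Nothing >>= \x -> Just (x + 1)   -- Nothing; the function is never applied
-- Lists: every result of the first computation is fed to the function,
-- and the individual result lists are concatenated.
c :: [Int]
c = [1, 2] >>= \x -> [x, x * 10]     -- [1,10,2,20]
main :: IO ()
main = print (a, b, c)               -- (Just 4,Nothing,[1,10,2,20])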
The big feature is that you can write code, which disregards this
structure. This is what I referred to as generalization of
functionality. Have a look at the sequence function from Control.Monad:
sequence :: [m a] -> m [a]
Its type already suggests, what it does. It takes a list of
computations and gives a computation, which results in the list of the
corresponding results. Example:
sequence [Just 3, Just 4, Just 5]
= Just [3, 4, 5]
sequence [[1,2], [3,4,5]]
= [[1,3], [1,4], [1,5], [2,3], [2,4], [2,5]]
To understand the second example, view the list monad as a way to encode
non-determinism. I've written more about this in section 11 of my
monads tutorial [1].
As said, the big advantage here is that you can write the sequence
function in a way, which completely disregards the underlying structure
implemented through the actual monad, such that you don't need to
rewrite it for each monad:
sequence []     = return []
sequence (c:cs) =
    c >>= \r ->
    sequence cs >>= \rs ->
    return (r:rs)
or by using the mapM function:
sequence = mapM id
So Haskell monads help you to generalize functionality in the same way
as group theory and category theory help you to generalize proofs.
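For instance, a helper like the following (pairUp is just an illustrative name, not a library function) is written once against the Monad interface and then works for Maybe, lists, IO and every other monad:
pairUp :: Monad m => m a -> m b -> m (a, b)
pairUp ma mb = ma >>= \a ->
               mb >>= \b ->
               return (a, b)
-- pairUp (Just 1) (Just 'x')  == Just (1,'x')
-- pairUp [1,2] "ab"           == [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
-- pairUp getLine getLine      reads two lines and pairs them, in IO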
I hope, this helps.
Greets,
Ertugrul.
[1] http://ertes.de/articles/monads.html#section-11

Ertugrul Soeylemez wrote:
Hello Michael,
you're basically trying to give monads a better name. Don't forget that they are an abstract structure, which means that trying to find intuitions or motivations is the same as trying to find real world examples.
Monads are not a way to pass the result of some function to another function. We've got function composition for that. Short-circuiting is a feature of particular monads like Maybe, and as you said, you could well do without them. In Haskell, monads are an abstract combinator class, which brings you mainly two advantages: Generalization of functionality and impure operations encapsulated in pure objects.
I think I'm starting to get this. Conventional programming languages rarely, if ever, deal with abstractions like the functor or monad. In C++ you would need templates to represent them, and templates are a lot of trouble. OO classes are the main form of abstraction I deal with.
Those objects are the monadic values, which can be interpreted in a number of ways. I like to interpret them as computations, and this is probably the most common interpretation. Have a look at this:
x :: Maybe Integer
x = Just 3
I know Maybe is both a functor and a monad, and I was thinking: what's the difference? They are both wrappers on types. Then I realized, the difference is: they have different class definitions.

class Functor f where
    fmap :: (a -> b) -> f a -> f b

(Note how fussy this definition would be in C++. It would be a kind of template, but would probably look a lot more complex and would require lengthy declarations.)

class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b

Like functor, there are types m/f, a, and b, but they have a different organization.

What about my point that the definition of the >>= operator makes it useful to preserve access to the argument variables of earlier functions in the chain, as follows?

nodin = jacy >>= \finnibar ->
        keary >>= \cody ->
        return (finnibar, cody)

I suppose this is a property of structuring the code this way, and this is not the only way to structure it.

So I think what you are saying is that the essence of the abstraction is in the definition. I think I need more experience to really understand why computer scientists were motivated to make that definition. After many years of OO programming, I immediately analyze a problem in terms of classes and objects... while monads and functors have barely entered my vocabulary.

I took a look at your tutorial, and I'll go through it in more depth.

Thanks, Mike
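A small sketch of that "different organization", using only the standard Maybe instances: the function given to fmap returns a plain value, while the function given to >>= returns another monadic value, which is what lets each step itself succeed or fail:

half :: Int -> Maybe Int
half n | even n    = Just (n `div` 2)
       | otherwise = Nothing

-- With fmap the failure-signalling result gets nested:
nested :: Maybe (Maybe Int)
nested = fmap half (Just 4)       -- Just (Just 2)

-- With (>>=) the layers stay flat, so steps chain naturally:
flat :: Maybe Int
flat = Just 4 >>= half >>= half   -- Just 1; Just 3 >>= half would be Nothing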

On Mon, Apr 13, 2009 at 03:11:42PM -0700, Michael Mossey wrote:
I know Maybe is both a functor and a monad, and I was thinking: what's the difference? They are both wrappers on types. Then I realized, the difference is: they have different class definitions.
In fact, every monad should be a functor, but not every functor is a monad. Being a monad is a much stronger condition than being a functor.
class Functor f where
    fmap :: (a -> b) -> f a -> f b
(Note how fussy this definition would be in C++. It would be a kind of template, but would probably look a lot more complex and would require lengthy declarations.)
class Monad m where
    (>>=) :: m a -> (a -> m b) -> m b
Don't forget return :: a -> m a ! That's the other key method in the Monad class. (There are also >> and 'fail' but those are unimportant---the first is just a specialization of >>=, and fail is a hack). -Brent
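A minimal sketch of Brent's first point, that every monad should be a functor: fmap can be recovered from >>= and return (liftViaBind is an illustrative name):

-- For a lawful monad this behaves exactly like fmap.
liftViaBind :: Monad m => (a -> b) -> m a -> m b
liftViaBind f ma = ma >>= \a -> return (f a)

-- liftViaBind (+1) (Just 3) == Just 4
-- liftViaBind (+1) [1,2,3]  == [2,3,4]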
participants (3):
- Brent Yorgey
- Ertugrul Soeylemez
- Michael Mossey