
What is the practical meaning of monad laws?

(M, return, >>=) does not qualify as a category-theoretical monad if the following laws are not satisfied:

  1. (return x) >>= f  ==  f x
  2. m >>= return      ==  m
  3. (m >>= f) >>= g   ==  m >> (\x -> f x >>= g)

But what practical problems can failing to satisfy them cause? In other words, I wonder whether declaring an instance of the Monad class but not checking it against the monad laws may cause any problems, other than not qualifying as a theoretical monad?

Cheers,

--
Deokhwan Kim
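One practical way to check a candidate instance against the laws is to test them on a concrete type. A minimal sketch, assuming the QuickCheck library (not mentioned in the thread) and specializing the laws to Maybe and Int so that both sides can be compared with (==):

    import Test.QuickCheck

    -- Left identity:  return x >>= f  ==  f x
    prop_leftIdentity :: Int -> Fun Int (Maybe Int) -> Bool
    prop_leftIdentity x f = (return x >>= applyFun f) == applyFun f x

    -- Right identity:  m >>= return  ==  m
    prop_rightIdentity :: Maybe Int -> Bool
    prop_rightIdentity m = (m >>= return) == m

    -- Associativity:  (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
    prop_assoc :: Maybe Int -> Fun Int (Maybe Int) -> Fun Int (Maybe Int) -> Bool
    prop_assoc m f g =
        ((m >>= applyFun f) >>= applyFun g)
            == (m >>= \x -> applyFun f x >>= applyFun g)

    main :: IO ()
    main = do
        quickCheck prop_leftIdentity
        quickCheck prop_rightIdentity
        quickCheck prop_assoc

Such tests cannot prove the laws, but they catch most accidental violations for a user-defined monad with decidable equality.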

Deokhwan Kim wrote:
What is the practical meaning of monad laws?
(M, return, >>=) does not qualify as a category-theoretical monad if the following laws are not satisfied:

  1. (return x) >>= f  ==  f x
  2. m >>= return      ==  m
  3. (m >>= f) >>= g   ==  m >> (\x -> f x >>= g)

But what practical problems can failing to satisfy them cause? In other words, I wonder whether declaring an instance of the Monad class but not checking it against the monad laws may cause any problems, other than not qualifying as a theoretical monad?
Afaiu the monad laws are needed so the compiler can do various optimizations, especially in regard to the "do" notation. Consider:

    g c = do
            if c then p else return ()
            q

Intuitively, the "else" branch of the "if" statement does nothing interesting, but we need to put something there because we can't just leave the branch empty, hence the use of (return ()). Thankfully, because of the monad laws, the compiler can apply transformations to get rid of it when it desugars the "do" notation. The above is equivalent to:

    g c = (if c then p else return ()) >>= (\_ -> q)

which could be re-written as:

    g c = if c then (p >>= (\_ -> q)) else (return () >>= (\_ -> q))

which can be optimized using monad law 1) to:

    g c = if c then (p >>= (\_ -> q)) else (\_ -> q) ()

which can further be optimized to:

    g c = if c then (p >>= (\_ -> q)) else q

so when the condition (c) is False we don't waste time doing the (return ()) action, but go straight to (q).

However, if your monad didn't satisfy the laws, the compiler would still assume that it did, leading to a flawed "optimization": the compiler would throw your program away and substitute a different, unrelated program...

Regards,
Brian.

--
Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us.

http://www.metamilk.com
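To see the before/after of that rewrite side by side, here is a small compilable sketch (gOrig and gOpt are illustrative names; p and q stand for arbitrary actions). For any law-abiding monad the two definitions are interchangeable:

    -- gOrig is the do-block above; gOpt is the hand-optimized form.
    gOrig, gOpt :: Monad m => Bool -> m () -> m a -> m a
    gOrig c p q = do
        if c then p else return ()
        q
    gOpt c p q = if c then p >>= \_ -> q else q

    -- e.g. both  gOrig False (putStrLn "side effect") (return 42)
    --      and   gOpt  False (putStrLn "side effect") (return 42)
    -- perform no output and yield 42 in IO.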

Brian,

Are you really sure Haskell compilers do that optimization? I would regard a compiler that does optimizations that are justified by laws that the compiler cannot check as broken.

-- Lennart

On Sep 7, 2006, at 08:50, Brian Hulley wrote:
Afaiu the monad laws are needed so the compiler can do various optimizations, especially in regard to the "do" notation. [...] so when the condition (c) is False we don't waste time doing the (return ()) action, but go straight to (q).

On Sep 7, 2006, at 9:04 AM, Lennart Augustsson wrote:
Brian,
Are you really sure Haskell compilers do that optimization? I would regard a compiler that does optimizations that are justified by laws that the compiler cannot check as broken.
What, like list fusion? ;-)

Although, more seriously, there are a number of "monads" in the standard libs that don't follow the monad laws (including IO: http://article.gmane.org/gmane.comp.lang.haskell.general/5273 !!). I can't imagine that any Haskell compilers rely on these laws to do program transformations.
Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
-- TMBG

Lennart Augustsson wrote:
On Sep 7, 2006, at 08:50, Brian Hulley wrote:

Deokhwan Kim wrote:

What is the practical meaning of monad laws?

Afaiu the monad laws are needed so the compiler can do various optimizations, especially in regard to the "do" notation. Consider:

    g c = do
            if c then p else return ()
            q

which can further be optimized to:

    g c = if c then (p >>= (\_ -> q)) else q
Brian,
Are you really sure Haskell compilers do that optimization? I would regard a compiler that does optimizations that are justified by laws that the compiler cannot check as broken.
I think at least GHC does, if I understand the -ddump-simpl output below properly:

    -- in Monad.hs
    module Main where

    import Control.Monad

    test :: Bool -> IO ()
    test c = do
        if c then putStr "True" else return ()
        putStrLn "Finish"

    main = test False

ghc --make -O2 -ddump-simpl monad

gives:

    ==================== Tidy Core ====================
    Main.s :: GHC.Base.String
    [GlobalId]
    [Str: DmdType]
    Main.s = GHC.Base.unpackCString# "Finish"

    Main.main :: GHC.IOBase.IO ()
    [GlobalId]
    [Arity 1 Str: DmdType L]
    Main.main =
      \ (eta_a26L :: GHC.Prim.State# GHC.Prim.RealWorld) ->
        case (# GHC.Prim.State# GHC.Prim.RealWorld, () #)
             GHC.IO.hPutStr GHC.Handle.stdout Main.s eta_a26L
        of wild_a26O { (# new_s_a26M, a85_a26N #) ->
          System.IO.lvl1 new_s_a26M
        }

    Main.s1 :: GHC.Base.String
    [GlobalId]
    [Str: DmdType]
    Main.s1 = GHC.Base.unpackCString# "True"

    Main.test :: GHC.Base.Bool -> GHC.IOBase.IO ()
    [GlobalId]
    [Arity 2 Str: DmdType SL]
    Main.test =
      \ (c_a19p :: GHC.Base.Bool)
        (eta_s27s :: GHC.Prim.State# GHC.Prim.RealWorld) ->
        case (# GHC.Prim.State# GHC.Prim.RealWorld, () #) c_a19p
        of wild_B1 {
          GHC.Base.False -> Main.main eta_s27s;
          GHC.Base.True ->
            case (# GHC.Prim.State# GHC.Prim.RealWorld, () #)
                 GHC.IO.hPutStr GHC.Handle.stdout Main.s1 eta_s27s
            of wild1_a27o { (# new_s_a27m, a85_a27n #) ->
              Main.main new_s_a27m
            }
        }

So when the condition in Main.test is False, the compiler immediately executes Main.main eta_s27s, which does the putStr "Finish" directly, so the return () has been optimized out. Whether this is because GHC has used the monad laws, or has applied some different optimization due to its knowledge of the built-in IO monad etc, I don't know.

Even if the compiler is not itself making use of the laws, other parts of standard library code might be, and their correctness would therefore also depend on something which the compiler can't verify.

It seems a pity that having gone to the trouble of ensuring a monad obeys the laws, the compiler can't make use of them. What then *is* the point of the monad laws?

Regards,
Brian.

--
Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us.

http://www.metamilk.com

On 9/6/06, Deokhwan Kim wrote:
What is the practical meaning of monad laws?
A compiler may re-write any monadic computations, for efficiency or other reasons, under the assumption that the monad laws are valid. If the monad laws aren't valid, this re-writing produces a different function than the programmer wrote.

Even though compilers aren't required to be "aware" of the monad laws, other programmers are. So, even if your compiler doesn't do any re-writing, another programmer who works on your code may essentially do the same thing.

Hope this helps!

--Tom Phoenix
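For example, a programmer tidying up code will routinely collapse a redundant return. A minimal sketch (fetchLine is just an illustrative name):

    -- Original version:
    fetchLine :: IO String
    fetchLine = do
        line <- getLine
        return line

    -- Refactored version -- only equivalent because of law 2 (m >>= return == m):
    fetchLine' :: IO String
    fetchLine' = getLine

If the instance violated law 2, this apparently harmless rewrite would silently change the program's behaviour.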

G'day all.
Quoting Deokhwan Kim:

What is the practical meaning of monad laws?

Interesting philosophical question. There will be an article on this topic in the next issue of The Monad.Reader, so watch this space.

But what practical problems can failing to satisfy them cause?

Pretty much the same as any practical problems that occur when you break invariants. What problems do you cause if you don't maintain your binary search trees in sorted order, for example?

Basically, you break anything which depends on the laws. For example, breaking any of the monad laws implies breaking the functor laws (exercise: prove this), so fmap breaks. Monad transformers may depend on the underlying monad satisfying the laws, so you break stacked transformers. Using non-conforming monads as Kleisli arrows breaks the arrow laws. Any and all of this may result in programs which misbehave.

Cheers,
Andrew Bromage
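To make the functor-law remark concrete, here is a sketch of one direction (fmapViaBind is an illustrative name; the definition is the same one the standard liftM uses):

    -- The Functor instance a monad induces:
    fmapViaBind :: Monad m => (a -> b) -> m a -> m b
    fmapViaBind f m = m >>= (return . f)

    -- Functor identity law:  fmapViaBind id m
    --                     == m >>= (return . id)
    --                     == m >>= return
    --                     == m                    (by monad law 2)
    --
    -- So an instance that violates monad law 2 also violates the functor
    -- identity law for this fmap.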

Deokhwan Kim writes:

What is the practical meaning of monad laws?

  1. (return x) >>= f  ==  f x
  2. m >>= return      ==  m
  3. (m >>= f) >>= g   ==  m >> (\x -> f x >>= g)
I offer to re-write the laws in do-notation. (Please view with a fixed-width (non-proportional) font.)

    1. do { x' <- return x          do { f x
          ; f x'             ==        }
          }

    2. do { x <- m           ==    do { m
          ; return x                   }
          }

    3. do { y <- do { x <- m        do { x <- m
                    ; f x              ; do { y <- f x
                    }          ==          ; g y
          ; g y                            }
          }                            }

                                    do { x <- m
                      using 3.14       ; y <- f x
                              ==       ; g y
                                       }

I think in this notation everyone sees the laws as plain common sense. If you do write a monad that doesn't follow some common sense, the dire consequence (practical or theoretical) is obvious.

Just in case it is still not obvious to somebody... When we see a program written in a form on the LHS, we expect it to do the same thing as the corresponding RHS; and vice versa. And in practice, people do write like the lengthier LHS once in a while.

First example: beginners tend to write

    skip_and_get = do { unused <- getLine
                      ; line <- getLine
                      ; return line
                      }

and it would really throw off both beginners and veterans if that did not act like (by law #2)

    skip_and_get = do { unused <- getLine
                      ; getLine
                      }

Second example: Next, you go ahead to use skip_and_get:

    main = do { answer <- skip_and_get
              ; putStrLn answer
              }

The most popular way of comprehending this program is by inlining (whether the compiler does or not is an orthogonal issue):

    main = do { answer <- do { unused <- getLine
                             ; getLine
                             }
              ; putStrLn answer
              }

and applying law #3 so you can pretend it is

    main = do { unused <- getLine
              ; answer <- getLine
              ; putStrLn answer
              }

Law #3 is amazingly pervasive: you have always assumed it, and you have never noticed it. (To put it into perspective, you hardly notice yourself breathing, but this only makes the practical meaning of breathing more profound, not less.)

Whether compilers exploit the laws or not, you still want the laws for your own sake, just so you can avoid pulling your hair out over counter-intuitive program behaviour that brittlely depends on how many redundant "return"s you insert or how you nest your do-blocks.
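As a compilable companion to the do-notation above, here is a small sketch in which the two sides of laws 2 and 3 can actually be compared with (==), using Maybe (the names lhs2, rhs2, etc. are illustrative):

    lhs2, rhs2 :: Maybe Int
    lhs2 = do { x <- Just 3; return x }     -- the lengthier LHS form
    rhs2 = Just 3                           -- what law 2 says it equals

    lhs3, rhs3 :: Maybe Int
    lhs3 = do { y <- do { x <- Just 3; Just (x + 1) }; Just (y * 2) }
    rhs3 = do { x <- Just 3; y <- Just (x + 1); Just (y * 2) }

    lawsHold :: Bool
    lawsHold = lhs2 == rhs2 && lhs3 == rhs3   -- True for Maybe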

On Fri, 2006-09-08 at 01:43 -0400, Albert Lai wrote:
Deokhwan Kim writes:

What is the practical meaning of monad laws?

  1. (return x) >>= f  ==  f x
  2. m >>= return      ==  m
  3. (m >>= f) >>= g   ==  m >> (\x -> f x >>= g)
I offer to re-write the laws in do-notation. (Please view with a fixed-width (non-proportional) font.)
I think this is an excellent explanation and quite intuitive for beginners. I've turned your post into a wiki page here:

http://haskell.org/haskellwiki/Monad_Laws

I hope that's ok with you (I'll remove it if it's not ok). Feel free to edit it.

Duncan

Deokhwan Kim wrote:
What is the practical meaning of monad laws?
(M, return, >>=) does not qualify as a category-theoretical monad if the following laws are not satisfied:

  1. (return x) >>= f  ==  f x
  2. m >>= return      ==  m
  3. (m >>= f) >>= g   ==  m >> (\x -> f x >>= g)

The third law contains a typo and must be

  3. (m >>= f) >>= g   ==  m >>= (\x -> f x >>= g)

But what practical problems can failing to satisfy them cause? In other words, I wonder whether declaring an instance of the Monad class but not checking it against the monad laws may cause any problems, other than not qualifying as a theoretical monad?
This question is likely to be the result of an unlucky introduction to monads, where they are introduced top down: "Hear ye, a monad, this is some mystic thing obeying the spiritual laws 1., 2. and 3.", isn't it? It is this way that monads get the attribute "theoretical".

Asking what the practical meaning of the monad laws might be is like asking what the practical meaning of the laws for natural number addition could be: what does

  i)   a + (b + c) == (a + b) + c

mean? How can I understand

  ii)  a + 0 == a ?

What does

  iii) a + b == b + a

signify? These questions are unlikely to arise because you have an intuition of what a natural number is: a number of bullets in a sack, coins in your pocket, people on the mailing list, etc. With this knowledge, you will most likely have no problem explaining the laws i), ii), iii) to somebody else, and most likely you will have no doubt about *why* they must be true.

For monads, my intuition is as follows: a value of type (M a) is an action, something producing a value of type a and (or by) executing a side-effect like drawing on the screen or screwing up the hard drive. With the operator >>=, I can execute such actions in a specific sequence. For the sequence, it is of course unimportant how I group my actions: I can group actions act1 and act2 first and then postpend act3, or I can group act2 and act3 first and then prepend act1.

To simplify writing down a formula corresponding to this fact, we introduce the operator >> defined by

  act1 >> act2 = act1 >>= \x -> act2

which sequences actions but for simplicity discards the computed value x of type a. It is only the side-effect of act1 we are interested in. Now, the thought about grouping, written down as a formula, is just

  (act1 >> act2) >> act3 == act1 >> (act2 >> act3)

and this is the simplified version of law 3. Of course, we know that this is coined "associativity". The actual law 3 is just a formulation for >>= that takes proper care of the intermediate calculation result x.

With (return x), we can create an action which computes the value x but has absolutely no side-effects. This can also be stated in formulas, as Mr "return" explains:

1. "If I am prepended to guys doing side-effects, I give them the value x but do not take any responsibility for side-effects happening":

  (return x) >>= (\y -> f y) == f x

2. "If I am appended to an action which computes a value x, I don't do any additional side-effects but just return the value I have been given":

  m >>= (\x -> return x) == m

which is of course equivalent to

  m >>= return == m

So to answer your question:
In other words, I wonder whether declaring an instance of the Monad class but not checking it against the monad laws may cause any problems, other than not qualifying as a theoretical monad?

A thing you declare to be an instance of the Monad class, but one that does not fulfill the three laws above, simply does not match the intuition behind a monad. I.e. your definitions of (>>=) and (return) are most likely void of the intended meaning.
Regards, apfelmus
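To illustrate the point with a counter-example, here is a hypothetical instance (the Tally type and its fields are invented for this sketch; Functor and Applicative instances are included so it compiles with a modern GHC) whose return is not side-effect-free, so laws 1 and 2 fail and adding a "redundant" return changes observable behaviour:

    -- A Writer-like "action" that also counts how many steps were run.
    newtype Tally a = Tally { runTally :: (a, Int) }

    instance Functor Tally where
        fmap f (Tally (x, n)) = Tally (f x, n)

    instance Applicative Tally where
        pure x = Tally (x, 1)          -- wrong: return should not "count"
        Tally (f, m) <*> Tally (x, n) = Tally (f x, m + n)

    instance Monad Tally where
        return = pure
        Tally (x, n) >>= f = let Tally (y, n') = f x in Tally (y, n + n')

    -- Law 2 demands (m >>= return) == m, but here:
    --   runTally (Tally (42, 0) >>= return)  ==  (42, 1)
    --   runTally (Tally (42, 0))             ==  (42, 0)
    -- so inserting a redundant return changes the result: exactly the kind of
    -- brittle, counter-intuitive behaviour the thread warns about.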
participants (10)

- ajb@spamcop.net
- Albert Lai
- apfelmus@quantentunnel.de
- Brian Hulley
- Deokhwan Kim
- Duncan Coutts
- Lennart Augustsson
- Robert Dockins
- Stefan Monnier
- Tom Phoenix