Martin Odersky on "What's wrong with Monads"

Cafe,

I was watching a panel on languages[0] recently and Martin Odersky (the creator of Scala) said something about Monads:

"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. They're not polymorphic so it's really the old days of Pascal. A monomorphic type system that says 'well that's all I do' ... there's no way to abstract over things." [0, 53:45]

Thoughts?

--J Arthur

[0] - http://css.dzone.com/articles/you-can-write-large-programs

False!

You only have to change the parts of the program that need the effect that the monad provides. A well-designed program will likely have much of its code in pure libraries. Think of the monadic code as a "scripting language" with which you bind your libraries together to make the program.
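For illustration, a minimal sketch of that layering (the function names here are made up):

    -- all of the real logic lives in a pure, testable function ...
    summarize :: String -> String
    summarize = unlines . map (takeWhile (/= '.')) . lines

    -- ... and the monadic "script" merely wires it to the outside world
    main :: IO ()
    main = interact summarize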
On 24 June 2012 06:31, Jonathan Geddes
Cafe, [SNIP]
"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. [SNIP]
Thoughts?

"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. They're not polymorphic so it's really the old days of Pascal. A monomorphic type system that says 'well that's all I do' ... there's no way to abstract over things. " [0, 53:45] [0] - http://css.dzone.com/articles/you-can-write-large-programs I think the context of the question is important here. Odersky is asked why provide all this elegant machinery for doing functional things while avoiding the difficult/critical parts -- the parts that deal with the effects. Odersky seems to be making three claims. * To move between functional and monadic code you have to completely rewrite the code procedurally -- its true and (IMHO) regrettable. * Monadic code is monomorphic: this appears to be seriously mistaken. Monadic functions can be as polymorphic as any non-monadic functions. (I have never wished the lambda-bound variables introduced by 'do' statements were somehow polymorphic.) * There is no way to abstract over monadic code: this also appears to be mistaken as there are plenty of ways of abstracting while writing monadic code (using the very same techniques you would use for non-monadic code). Monads allow procedural code to be expressed procedurally and functional code to be expressed functionally and the type system ensures there are no mix ups. Expressing procedural code functionally is as unnatural and error prone as expressing functional code procedurally in my experience -- that Haskell avoids compelling the programmer to do either within its strongly-typed functional framework is (IMHO) its great invention(*) and enduring strength. Maybe someday someone will devise a way writing 'effects' code in a strongly-typed functional framework that doesn't force the programmer to commit each function to being either procedural or functional -- or better yet, do away with the need to write any effects code. Perhaps it has been done already. (I don't doubt folks have claimed to have done it.) But until there is a proven language and tools that can do this, monads look (to me, at least) like the best method for attacking 'effects' within a strong functional framework. Chris (*) Monads were invented within the Haskell framework of course, after the publication of the early reports and tools.

On Sun, Jun 24, 2012 at 12:27 AM, Chris Dornan
* To move between functional and monadic code you have to completely rewrite the code procedurally -- it's true and (IMHO) regrettable.
It's false. do-notation is completely optional. It merely makes it easier to extract multiple values from monadic actions, instead of the basic "one value per step" that bind provides. Using join and (>>=) is just as easy as do-notation, once you understand the idiom.

A monad is, first and foremost, a functor. You can get at the underlying algebra while statically ensuring relevant properties, using its functorial or applicative or monadic interfaces. For example, consider a function that parses a string into a Foo. It will have a type like

    parseFoo :: String -> Maybe Foo

Should useFoo have the type

    useFoo :: Maybe Foo -> Maybe Bar

or

    useFoo :: Foo -> Bar?

The Maybe functor/monad encapsulates the behavior of propagating possible undefinedness, so we are entirely justified in using its functor/monad interface to simplify our types. There is no reason to redundantly pattern match on (Just foo) or Nothing when we can abstract the control structure away and do

    useFoo <$> parseFoo "Foo 1"    -- (<$>) == fmap

or

    parseFoo "Foo 1" >>= return . useFoo

Expressing procedural code functionally is as unnatural and error-prone as expressing functional code procedurally in my experience -- that Haskell avoids compelling the programmer to do either within its strongly-typed functional framework is (IMHO) its great invention(*) and enduring strength.
Haskell didn't invent monads. They have been in use since Grothendieck's work in algebraic topology. At least! It is not hard to make a plausible case that the Pythagoreans had a theory of monads in mind 2500 years ago.

To move between functional and monadic code you have to completely rewrite the code procedurally
It's false. do-notation is completely optional. It merely makes it easier to extract multiple values from monadic actions, instead of the basic "one value per step" that bind provides. Using join and (>>=) is just as easy as do-notation, once you understand the idiom.
Odersky's point (and mine) was about moving between monadic and functional code, not eliminating do-notation (which is indeed a fairly trivial syntactic device). To take a fake and absurd example, there is a world of difference between

    add :: Double -> Double -> IO Double

and the stock addition operator. (Perhaps you need to be very careful about exceptions.) If you structure your program so that certain kinds of arithmetic have to be done monadically, then everything that uses these operations must be written quite differently from how it would be with simple arithmetic operations.

You can argue that, well, it's just in the types -- you pays your money and you takes your choice. (Viz., if your function works with effects then that should be expressed in its type.) But then this becomes the price of doing everything in a strong functional framework. To take one counter-example, Standard ML combines functional programming and effects without forcing this reification on the programmer.

I much prefer the Haskell way. But when it gets criticized by non-Haskellers I try to understand the criticism in the context it was phrased in.
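To make the ripple concrete, here is a small hedged sketch (addIO and totalIO are made-up names): once addition moves into IO, every consumer has to change shape with it.

    import Control.Monad (foldM)

    -- hypothetical monadic addition, as in the example above
    addIO :: Double -> Double -> IO Double
    addIO x y = return (x + y)

    -- the pure consumer is a one-liner ...
    total :: [Double] -> Double
    total = sum

    -- ... but against addIO it must be rewritten monadically
    totalIO :: [Double] -> IO Double
    totalIO = foldM addIO 0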
Haskell didn't invent monads.
Indeed, that was sloppy phrasing on my part. I meant that Haskell invented 'monads' as they have come to be understood in the Haskell context (a general programming device for encapsulating effects-based code in a functional programming context), not the original algebraic construct or its mathematical applications.

Chris

On Sun, Jun 24, 2012 at 2:48 AM, Chris Dornan
To move between functional and monadic code you have to completely rewrite the code procedurally
It's false. do-notation is completely optional. It merely makes it easier to extract multiple values from monadic actions, instead of the basic "one value per step" that bind provides. Using join and (>>=) is just as easy as do-notation, once you understand the idiom.
Odersky's point (and mine) was about moving between monadic and functional code, not eliminating do-notation (which is indeed a fairly trivial syntactic device). To take a fake and absurd example, there is a world of difference between

    add :: Double -> Double -> IO Double

and the stock addition operator. (Perhaps you need to be very careful about exceptions.) If you structure your program so that certain kinds of arithmetic have to be done monadically, then everything that uses these operations must be written quite differently from how it would be with simple arithmetic operations.
I sort of see where you're coming from. But I'm having a hard time seeing how this "complaint" would work with respect to Maybe and the other pure monads. In other words, I suspect the problem you're describing is particular to IO and IO-like monads.

If a function isn't total, it is entirely natural to use Maybe to encapsulate the potential for undefinedness. I would be extremely hesitant to call

    mAdd :: Int -> Int -> Maybe Int

"monadic" unless it was actually using a monadic interface to Maybe. Indeed, I would call it "pure" and "functional". Similarly for:

    (++) :: [a] -> [a] -> [a]

The result types are monadic insofar as they have a free type variable, but they are also entirely pure and functional.
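A small sketch of that point (mSub is a made-up example of a partial operation): the function is ordinary and pure, and the monadic interface is just one convenient way to chain it.

    -- subtraction restricted to non-negative results
    mSub :: Int -> Int -> Maybe Int
    mSub x y
      | y > x     = Nothing        -- undefined here: result would be negative
      | otherwise = Just (x - y)

    -- chaining via the Maybe monad; still entirely pure
    example :: Maybe Int
    example = do
      a <- mSub 10 3               -- Just 7
      b <- mSub a 2                -- Just 5
      return (a + b)               -- Just 12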
You can argue that, well, it's just in the types -- you pays your money and you takes your choice. (Viz., if your function works with effects then that should be expressed in its type.)
Yes, this. If you need a list, use a list. If you need simple undefinedness, use Maybe, etc.
But then this becomes the price of doing everything in a strong functional framework. To take one counter-example, Standard ML combines functional programming and effects without forcing this reification on the programmer.
I don't know SML. How is our list "monadic" and theirs not? In particular, how is Haskell "forcing" the reification while SML does not?

On 24 June 2012 18:46, Alexander Solla
I sort of see where you're coming from. But I'm having a hard time seeing how this "complaint" would work with respect to Maybe and the other pure monads. In other words, I suspect the problem you're describing is particular to IO and IO-like monads.
Yes, this problem is specific to IO-based functions. If you didn't know anything about monads and would have written a Maybe/Either function anyway, then the types are identical to the monadic formulation, and the monadic framework in this case is just helping you to structure everything. Unless this structure is obscuring or confusing matters (and I don't see that it is), it's difficult to imagine any objection here.
I don't know SML. How is our list "monadic" and theirs not? In particular, how is Haskell "forcing" the reification while SML does not?
In SML you can put side-effecting computations in 'pure' functions -- functions whose type doesn't reveal that there are side effects. In Haskell terms, every function is actually in the IO monad -- or every function is given carte blanche to use unsafePerformIO, depending upon how you look at it. In semantic terms it is really a case of the former; from a programming perspective it is more like the latter. Standard ML is strict, and I am pretty sure this is only practical in a strict language.

It is (IMHO) deeply horrible, and possibly justifiable before monadic I/O was invented (but not for me). I am not advocating doing this (at all) but using it to illustrate a point: in Standard ML you can start doing effect-based things inside a function without having to alter its type and the type of everything that uses it, and so on.

Chris
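For readers who want to see what that SML situation looks like transplanted into Haskell, here is a hedged sketch using unsafePerformIO (emphatically not a recommendation): the type claims purity while the body performs I/O.

    import System.IO.Unsafe (unsafePerformIO)

    -- the type says "pure", but the body does I/O;
    -- the logging may run once, many times, or never
    sneakyAdd :: Int -> Int -> Int
    sneakyAdd x y = unsafePerformIO $ do
      putStrLn "side effect!"
      return (x + y)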

In Standard ML you can start doing effect-based things inside a function without having to alter its type and the type of everything that uses it, and so on.

This in turn causes a breakdown in the type system, where weak type variables are introduced. We can see the pathological case for that here: http://www.cs.washington.edu/education/courses/cse505/00au/lectures/13-refs....

This makes regular things like curried function compositions unusable, because their polymorphism goes away! Such is the result of not encapsulating side effects. —Arlen

To move between functional and monadic code you have to completely rewrite the code procedurally
I don't like the way people segregate "pure" from "monadic". Monadic code *is* pure: it's written with completely pure Haskell and completely pure combinators. As Alexander said, it's really important to say that 'do' is just syntax over abstract combinators, and that it's *only for IO/ST* that those combinators do something special.

This is why I do disagree with tutorials that begin by showing the 'do' notation and saying "do IO with that". If the beginner reads that and then puts his learning aside (because he was interested in getting some input about Haskell but has no time to continue for the moment), he walks away with the false impression that you have do blocks to do imperative stuff, and for the rest you just use normal functions.
People are there because they've heard about "the almighty *purely
functional* Haskell", I say let's give 'em what they came for:
getLine >>= \x -> putStrLn ("Hello " ++ x ++ "!")
It's not complicated to understand the concept of an operator retrieving
the result of one operation and injecting it in the next.
Twist the syntax as you want if you think lambdas are not readable for beginners, but still: if we begin with that, develop a little with a few more examples, and then say something like:

"But it would be very cumbersome to have to write everything like that. Don't worry: with great purity does not come great fundamentalism! Haskell people have been intelligent and brought you a special syntax to write it more legibly, *but it's actually exactly the same thing you're doing!*"
then the intuition that "monads are just like pure code" comes by itself.
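For reference, the "exactly the same" claim made concrete (greet and greet' are made-up names):

    -- explicit bind, as above
    greet :: IO ()
    greet = getLine >>= \x -> putStrLn ("Hello " ++ x ++ "!")

    -- the do block is only sugar; it desugars to the bind chain above
    greet' :: IO ()
    greet' = do
      x <- getLine
      putStrLn ("Hello " ++ x ++ "!")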
This might be what leads people like Martin Odersky to false conclusions. Amongst Haskellers I understand that we use this shortcut (pure vs monadic), but as I said, towards non-Haskellers it's IMHO a bad idea to start right away by making the difference between the two.

Now, concerning Odersky's sentence specifically: "To move between functional and monadic code you have to completely rewrite the code procedurally". I don't see how that differs from any other language: monads are just patterns (*); if you start with one method and then switch to another, you'll have to refactor.
I'd say monads are actually better in that respect, since they can abstract
a lot due to their inherent EDSL nature (provide just one type and its set
of operations, and then you can change everything under the hood).
(*) And because of their "pattern" nature, they have the same drawbacks as patterns in OO: people may (and will) try to fit triangles into round holes, i.e. to use a monad where another abstraction (or no abstraction at all) would be better.
2012/6/24 Chris Dornan
"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. They're not polymorphic so it's really the old days of Pascal. A monomorphic type system that says 'well that's all I do' ... there's no way to abstract over things. " [0, 53:45]
[0] - http://css.dzone.com/articles/you-can-write-large-programs
I think the context of the question is important here. Odersky is asked why provide all this elegant machinery for doing functional things while avoiding the difficult/critical parts -- the parts that deal with the effects.
Odersky seems to be making three claims.
* To move between functional and monadic code you have to completely rewrite the code procedurally -- it's true and (IMHO) regrettable.

* Monadic code is monomorphic: this appears to be seriously mistaken. Monadic functions can be as polymorphic as any non-monadic functions. (I have never wished the lambda-bound variables introduced by 'do' statements were somehow polymorphic.)

* There is no way to abstract over monadic code: this also appears to be mistaken, as there are plenty of ways of abstracting while writing monadic code (using the very same techniques you would use for non-monadic code).

Monads allow procedural code to be expressed procedurally and functional code to be expressed functionally, and the type system ensures there are no mix-ups. Expressing procedural code functionally is as unnatural and error-prone as expressing functional code procedurally, in my experience -- that Haskell avoids compelling the programmer to do either within its strongly-typed functional framework is (IMHO) its great invention(*) and enduring strength.

Maybe someday someone will devise a way of writing 'effects' code in a strongly-typed functional framework that doesn't force the programmer to commit each function to being either procedural or functional -- or, better yet, do away with the need to write any effects code. Perhaps it has been done already. (I don't doubt folks have claimed to have done it.) But until there is a proven language and tools that can do this, monads look (to me, at least) like the best method for attacking 'effects' within a strong functional framework.
Chris
(*) Monads were invented within the Haskell framework of course, after the publication of the early reports and tools.

do-notation is just syntax in theory, but in practice it's difficult to avoid. Try to write any OpenGL program in terms of `>>=`, `>>` and `return`. I don't like to use `do` knowing that it's just syntactic sugar, but sometimes it helps a lot.
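A small sketch of why: with several intermediate results in scope at once, raw (>>=) forces nested lambdas, and do-notation flattens them (pairUp is a made-up example):

    pairUp :: IO String
    pairUp =
      getLine >>= \a ->
      getLine >>= \b ->
      return (a ++ ", " ++ b)

    pairUp' :: IO String
    pairUp' = do
      a <- getLine
      b <- getLine
      return (a ++ ", " ++ b)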

Here is a half-baked idea for how to make monads more functional. It's too wild to be implemented in Haskell, but maybe you are interested more in ideas than implementations, so let's start with the monad class:

    class Monad m where
        return :: a -> m a
        (>>=)  :: m a -> (a -> m b) -> m b

I think the monad's methods are misleading, so let's rename them:

    class Monad m where
        idM  :: a -> m a
        (*$) :: (a -> m b) -> m a -> m b

We can see that `return` is a monadic identity and `bind` is application in disguise. So now we have two applications: the standard `($)` and the monadic `(*$)`. But they are both application. Well, isn't that something like `plusInt` and `plusDouble`? Maybe we can devise a single class for application. Let's imagine a special class `App`:

    class App ?? where
        ($) :: ???

As you can see, it's defined so that we can fit monads and plain functions into this framework. Moreover, if we redefine this class then whitespace is redefined automatically! So `($)` really means *white space* in Haskell.

`idM` is interesting too. In the standard world we can safely put `id` in any expression. So when we write

    f = a + b

we can write

    f = id (a + b)

or even

    f = id ((id a) + (id b))

and the meaning doesn't change. So suppose we have a special class `Id`:

    class Id f where
        id :: ???

Again you can see that monads fit nicely into the type. Why do we need this class? Whenever the compiler gets a type mismatch, it tries to apply a method from the `Id` class, if one is defined of course.

But we have a class called `Category`, and `id` belongs to it:

    class Category (~>) where
        id   :: a ~> a
        (>>) :: (a ~> b) -> (b ~> c) -> (a ~> c)

Let's pretend that `(>>)` is reversed composition `(.)`. It's interesting to note that there is another formulation of the `Monad` class. It's called the Kleisli category:

    class Kleisli m where
        idK  :: a -> m a
        (>>) :: (a -> m b) -> (b -> m c) -> (a -> m c)

Here again let's forget about the monad's `(>>)` for a moment; here it's composition. `Kleisli` is equivalent to `Monad`. If we can define a `Category` instance for `Kleisli`, so that somehow these classes become unified at the type level, we can define application in terms of composition like this:

    f $ a = (const a >> f) ()

And we get application for monads (or Kleislis :) ).

Implications: maybe someday you wrote a function like this:

    foo :: Boo -> Maybe Foo
    foo x = case x of
        1 -> Just ...
        2 -> Just ...
        3 -> Just ...
        4 -> Just ...
        5 -> Just ...
        6 -> Just ...
        7 -> Just ...
        _ -> Nothing

With the `idM` rule you can skip all the Just's. You can use white space as monadic bind, so functional application can become monadic on demand. Just switch the types.

Implementation: I've tried to unify `Category` and `Kleisli` with no luck. Here are the closest sketches. The simplest sketch requires type functions :(

    instance Monad m => Category (\a b -> a -> m b) where
        ...

The other one too :(

    class Category (~>) where
        type Dom (~>) :: * -> *
        type Cod (~>) :: * -> *
        id   :: Dom (~>) a -> Cod (~>) a
        (>>) :: (Dom (~>) a ~> Cod (~>) b) -> (Dom (~>) b ~> Cod (~>) c) -> ...

and the instances:

    type Id a = a  -- :(
    instance Monad m => Category (a -> m b) where
        type Dom (a -> m b) = Id
        type Cod (a -> m b) = m
        ...
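For what it's worth, base does relate these two formulations, not by unifying them at the type level but by wrapping the function type in a newtype: Control.Arrow's Kleisli carries a Category instance for any monad. A minimal sketch (safeRecip is a made-up example):

    import Control.Arrow (Kleisli (..), (>>>))

    -- a function that is effectful in Maybe
    safeRecip :: Double -> Maybe Double
    safeRecip 0 = Nothing
    safeRecip x = Just (1 / x)

    -- Kleisli arrows composed through the generic Category interface
    twice :: Kleisli Maybe Double Double
    twice = Kleisli safeRecip >>> Kleisli safeRecip

    main :: IO ()
    main = print (runKleisli twice 4)   -- Just 4.0

Control.Monad's (>=>) gives the same composition directly on functions, without the newtype.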

On 6/24/12 5:41 AM, Anton Kholomiov wrote:
I think the monad's methods are misleading, so let's rename them:

    class Monad m where
        idM  :: a -> m a
        (*$) :: (a -> m b) -> m a -> m b

We can see that `return` is a monadic identity and `bind` is application in disguise. So now we have two applications: the standard `($)` and the monadic `(*$)`. But they are both application. Well, isn't that something like `plusInt` and `plusDouble`? Maybe we can devise a single class for application. Let's imagine a special class `App`:

    class App ?? where
        ($) :: ???

As you can see, it's defined so that we can fit monads and plain functions into this framework. Moreover, if we redefine this class then whitespace is redefined automatically!
The class you're looking for is Applicative. The (<*>) operator handles application of "effectful" things to "effectful" things, whereas (<$>) handles the application of non-"effectful" things to "effectful" things. This situation is interesting because it highlights the fact that there is a distinction between the meaning of whitespace between function and argument vs the meaning of whitespace between argument and argument.

The desire to invoke (<$>) and (<*>) implicitly is known as idiom brackets. SHE provides this as syntactic sugar, and of course you can define your own version using iI and Ii as your "brackets".
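Concretely, the two operators side by side (the values are made up):

    import Control.Applicative ((<$>), (<*>))

    -- a pure function applied to "effectful" arguments
    example :: Maybe Int
    example = (+) <$> Just 3 <*> Just 4   -- Just 7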
So `($)` really means *white space* in Haskell.
Not entirely so, as noted above. Though yes, ($) denotes application in just the same way as function--argument whitespace does. However, the presence of ($) as an operator is helpful because it allows us to capture the syntax of our language, as in:

    fmap ($ x) fs
    zipWith ($) fs xs
    ...

-- Live well, ~wren

The class you're looking for is Applicative. The (<*>) operator handles application of "effectful" things to "effectful" things, whereas (<$>) handles the application of non-"effectful" things to "effectful" things. This situation is interesting because it highlights the fact that there is a distinction between the meaning of whitespace between function and argument vs the meaning of whitespace between argument and argument.

`Applicative` is not enough for monads. `Applicative` is like a functor, only for functions with many arguments. It's good for patterns:

    (a -> b -> c -> d) -> (m a -> m b -> m c -> m d)

Monads are good for patterns:

    (a -> b -> c -> m d) -> (m a -> m b -> m c -> m d)

So I cannot express it with `Applicative`. My analogy really breaks down on functions with several arguments, since as you have pointed out there are two white spaces. But I like the idea of using one sign for normal and monadic (and maybe applicative) applications.

Anton
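A sketch of that gap (f is a made-up example): applying an effectful-result function through the applicative interface leaves one layer too many, and it takes join -- that is, Monad -- to flatten it.

    import Control.Applicative ((<$>), (<*>))
    import Control.Monad (join)

    f :: Int -> Int -> Maybe Int
    f x y = if y == 0 then Nothing else Just (x `div` y)

    -- Applicative alone yields m (m d) ...
    nested :: Maybe (Maybe Int)
    nested = f <$> Just 10 <*> Just 2

    -- ... and join flattens it to m d
    flat :: Maybe Int
    flat = join (f <$> Just 10 <*> Just 2)   -- Just 5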

My pocket explanation:

While a function gives only one value of the codomain for each element of the domain set (and thus it can be evaluated a single time), a category is a generalization that accepts many graphs going from each element of the domain to the codomain. For that matter, getChar can be understood mathematically only using category theory. To discover where the chain of graphs goes each time, it is necessary to execute the chain of sentences.

Imperative languages work "categorically" all the time, so they don't need a special syntax for doing so. Haskell permits the programmer to discriminate functional code from "normal" "categorical/imperative" code, so the programmer and the compiler can make use of the mathematical properties of functions -- for example, graph reduction, thanks to the uniqueness of the paths.

Besides that, everything, functional or monadic, is equally beautiful and polymorphic. I don't think that monadic code is less fine. It is unavoidable and no less mathematical.
2012/6/25 Anton Kholomiov
The class you're looking for is Applicative. The (<*>) operator handles application of "effectful" things to "effectful" things, whereas (<$>) handles the application of non-"effectful" things to "effectful" things. This situation is interesting because it highlights the fact that there is a distinction between the meaning of whitespace between function and argument vs the meaning of whitespace between argument and argument.
`Applicative` is not enough for monads. `Applicative` is like a functor, only for functions with many arguments. It's good for patterns:

    (a -> b -> c -> d) -> (m a -> m b -> m c -> m d)

Monads are good for patterns:

    (a -> b -> c -> m d) -> (m a -> m b -> m c -> m d)

So I cannot express it with `Applicative`. My analogy really breaks down on functions with several arguments, since as you have pointed out there are two white spaces. But I like the idea of using one sign for normal and monadic (and maybe applicative) applications.
Anton

Well,

Monads are optional in the end. Even the IO monad is an optional pattern (witness unsafePerformIO), but we use it because one of the reasons we love Haskell is its ability to differentiate pure and impure functions. Sadly, this is one of the traits we love about Haskell but others dislike about it.

Cheers,

Ernesto Rodriguez
On Mon, Jun 25, 2012 at 2:23 PM, Alberto G. Corona
My pocket explanation:
While a function gives only one value of the codomain for each element of the domain set (and thus it can be evaluated a single time), a category is a generalization that accepts many graphs going from each element of the domain to the codomain. For that matter, getChar can be understood mathematically only using category theory. To discover where the chain of graphs goes each time, it is necessary to execute the chain of sentences.

Imperative languages work "categorically" all the time, so they don't need a special syntax for doing so. Haskell permits the programmer to discriminate functional code from "normal" "categorical/imperative" code, so the programmer and the compiler can make use of the mathematical properties of functions -- for example, graph reduction, thanks to the uniqueness of the paths.

Besides that, everything, functional or monadic, is equally beautiful and polymorphic. I don't think that monadic code is less fine. It is unavoidable and no less mathematical.

Odersky is repeatedly wrong on this subject, and specifically for the claim that you quote, the only response is simply "not true."

On 24/06/12 15:31, Jonathan Geddes wrote:
Cafe,
I was watching a panel on languages[0] recently and Martin Odersky (the creator of Scala) said something about Monads:
"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. They're not polymorphic so it's really the old days of Pascal. A monomorphic type system that says 'well that's all I do' ... there's no way to abstract over things. " [0, 53:45]
Thoughts?
--J Arthur
[0] - http://css.dzone.com/articles/you-can-write-large-programs
-- Tony Morris http://tmorris.net/

On 24 June 2012 22:38, Tony Morris
Odersky is repeatedly wrong on this subject, and specifically for the claim that you quote, the only response is simply "not true."
My point is this.

1. The monadic approach to effects reifies functions into those that are 'pure' and those that perform I/O -- you can tell which is which from the type.

2. If you discover deep inside a function that you need after all to perform some I/O, then the type of the function changes, and the type of everything that uses it changes, all the way back to the I/O trunk. The way these changed parts fit together changes radically. There is an in-built instability here that is not *in itself* desirable.

3. To compare: if you suddenly find you need to use a sin function deep inside a package providing pure trigonometric functions, you don't have to rebuild everything. Likewise, if I discover I need to copy a file in an I/O system then this is not a big deal. Discovering (in Haskell) that you need to perform I/O somewhere that you thought didn't need to perform I/O is not like this.

4. This instability is in itself regrettable, I think. It is regrettable in the way that having to debug code is regrettable, or having to write code at all is regrettable (why doesn't it write itself?). It's the cost of doing business.

Of course Martin Odersky may have meant something else, but this is the only way I can make sense of it. Making sense of part of what is being said and agreeing with it are quite different -- never mind agreeing with the wider point.

Chris

Cafe,
I was watching a panel on languages[0] recently and Martin Odersky (the creator of Scala) said something about Monads:
"What's wrong with Monads is that if you go into a Monad you have to change your whole syntax from scratch. Every single line of your program changes if you get it in or out of a Monad. They're not polymorphic so it's really the old days of Pascal. A monomorphic type system that says 'well that's all I do' ...
Jonathan Geddes
Thoughts?
In my mind, his first sentence could be paraphrased as follows: "What's wrong with having a strongly typed mechanism for separating pure and effectful code is that pure code and effectful code are actually separate."

Of course every line of your program that uses a Foo will change if you switch to IO Foo instead. If the two were indistinguishable, then you'd have an impure language instead of a pure one. Furthermore, this isn't a characteristic specific to monads; it's the same for any type constructor, whether it's a monad or not.

To Martin's credit, earlier in the talk (around 45:15) he makes the following statement: "What you need in the language is the ability to essentially reduce state. And potentially what Scala doesn't do, but for instance Haskell does, is to control state and purity, and I think that might be the next frontier." Then at 50:15 he says, "So right now, I think...in the world of effect checking we're about where we were with Pascal in terms of static typing." It seems that his heart is in the right place, but he isn't aware of where we actually are right now, and probably doesn't fully appreciate the consequences of strongly typed purity.

Hi, MightyByte wrote:
Of course every line of your program that uses a Foo will change if you switch to IO Foo instead.
But we often have to also change lines that don't use Foo at all. For example, here is the type of binary trees of integers:

    data Tree = Leaf Integer | Branch Tree Tree

A function to add up all integers in a tree:

    amount :: Tree -> Integer
    amount (Leaf x) = x
    amount (Branch t1 t2) = amount t1 + amount t2

All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."

In a language with implicit side effects, it is easy to implement this. We just change the third line of the amount function to check whether to call (+) or (*). In particular, we would not touch the other two lines.

How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"? (I actually see three ways of doing this in Haskell, but all have serious drawbacks and do not fully solve the problem.)

Here it seems not so bad just to change all three lines of the amount function, even if they are not strictly related to the semantic change we want to make. But in a real program, this situation can translate to changing thousands of lines of code in many functions just to implement a minor change to a single requirement.

Tillmann
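For concreteness, the most direct of those encodings is probably to pass the operation in explicitly, and it illustrates Tillmann's point: even the Leaf equation has to change to thread the new parameter (amountWith is a made-up name, reusing the Tree type above):

    amountWith :: (Integer -> Integer -> Integer) -> Tree -> Integer
    amountWith _  (Leaf x)       = x
    amountWith op (Branch t1 t2) = amountWith op t1 `op` amountWith op t2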

We could debate this endlessly (as is common), but I would argue that a "clean" design would make the option and alternative of multiplying explicit in its design, instead of including calls to fetch command-line arguments in an ad-hoc fashion everywhere.

The Haskell way of encoding this would be to define an app-configuration data type (say AppConfig), parse the command-line arguments into it up front in IO, and then run your application either in a monad that's an instance of (MonadReader AppConfig) or by explicitly passing the option in where a function needs it. (A minimal sketch follows the quoted message below.) If you've designed your application this way, adding a new command-line option causes very little, if any, refactoring. If not, in my experience it is usually a 30-minute intense refactoring campaign. I suspect there might be a way to use implicit parameters here as well, but that's something I've never felt compelled to use.

This kind of separation of concerns and "pure" application design is one of the things that (I think) many people really like about Haskell.

Cheers,
Oz

On Tuesday, June 26, 2012 at 6:19 PM, Tillmann Rendel wrote:
Hi,
MightyByte wrote:
Of course every line of your program that uses a Foo will change if you switch to IO Foo instead.
But we often have to also change lines that don't use Foo at all. For example, here is the type of binary trees of integers:
data Tree = Leaf Integer | Branch Tree Tree
A function to add up all integers in a tree:
amount :: Tree -> Integer
amount (Leaf x) = x
amount (Branch t1 t2) = amount t1 + amount t2
All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."
In a language with implicit side effects, it is easy to implement this. We just change the third line of the amount function to check whether to call (+) or (*). In particular, we would not touch the other two lines.
How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?
(I actually see three ways of doing this in Haskell, but all have serious drawbacks and do not fully solve the problem).
Here it seems not so bad just to change all three lines of the amount function, even if they are not strictly related to the semantic change we want to make. But in a real program, this situation can translate to changing thousands of lines of code in many functions just to implement a minor change to a single requirement.
Tillmann
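Here is a minimal sketch of the AppConfig approach Ozgun describes, applied to Tillmann's amount (the type and field names are made up; assumes the mtl Reader monad). Note that the Leaf equation still changes, exactly as Tillmann predicts:

    import Control.Applicative ((<$>), (<*>))
    import Control.Monad.Reader

    data Tree = Leaf Integer | Branch Tree Tree

    newtype AppConfig = AppConfig { useMultiply :: Bool }

    -- the flag is parsed once in main and threaded implicitly by Reader
    amount :: Tree -> Reader AppConfig Integer
    amount (Leaf x)       = return x
    amount (Branch t1 t2) = do
      op <- asks (\c -> if useMultiply c then (*) else (+))
      op <$> amount t1 <*> amount t2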

I'd also say that reading command-line flags inside a simple function like
amount is a pretty large code smell. The only case in which it isn't would
be when the codebase is so small that redesigning the Haskell to be in IO
(or switching between amountPlus and amountTimes) is negligible anyway.
On Jun 26, 2012, at 18:59, Ozgun Ataman

On Tue, Jun 26, 2012 at 3:19 PM, Tillmann Rendel
A function to add up all integers in a tree:
amount :: Tree -> Integer
amount (Leaf x) = x
amount (Branch t1 t2) = amount t1 + amount t2
All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."
How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?
One option is to encode the desired behavior at the type level. By extending the data type slightly and adding a Functor instance, selecting between a product and a sum can be done using their Monoid newtypes:

    import Data.Monoid
    import System.Environment

    data Tree a = Leaf a | Branch (Tree a) (Tree a)

    instance Functor Tree where
      f `fmap` Leaf x     = Leaf (f x)
      f `fmap` Branch x y = Branch (fmap f x) (fmap f y)

    amount :: Monoid a => Tree a -> a
    amount (Leaf x)       = x
    amount (Branch t1 t2) = amount t1 <> amount t2

    main :: IO ()
    main = do
      args <- getArgs
      let val :: Tree Int
          val = Branch (Leaf 8) (Leaf 18)
      let getResult :: Tree Int -> Int
          getResult = case args of
            ["--multiply"] -> getProduct . amount . fmap Product
            _              -> getSum . amount . fmap Sum
      print . getResult $ val

2012/6/27 Tillmann Rendel
MightyByte wrote:
Of course every line of your program that uses a Foo will change if you switch to IO Foo instead.
But we often have to also change lines that don't use Foo at all. For example, here is the type of binary trees of integers:
data Tree = Leaf Integer | Branch Tree Tree
A function to add up all integers in a tree:
amount :: Tree -> Integer
amount (Leaf x) = x
amount (Branch t1 t2) = amount t1 + amount t2
All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."
In a language with implicit side effects, it is easy to implement this. We just change the third line of the amount function to check whether to call (+) or (*). In particular, we would not touch the other two lines.
How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?
I may be missing the point here, but having worked on large code bases with a wide variety of contributors before, I find it very advantageous that programmers are prevented from writing an amount function whose behaviour depends on command-line arguments without at least an indication in the type. The fact that the function cannot perform stuff like that is precisely the guarantee that the Haskell type gives me...

Dominique

Any idea whether Martin Odersky has read this discussion?

Thanks and regards,
-Damodar Kulkarni

On Fri, Jun 29, 2012 at 12:23 AM, Dominique Devriese <dominique.devriese@cs.kuleuven.be> wrote:
2012/6/27 Tillmann Rendel:

MightyByte wrote:

Of course every line of your program that uses a Foo will change if you switch to IO Foo instead.

But we often have to also change lines that don't use Foo at all. For example, here is the type of binary trees of integers:

    data Tree = Leaf Integer | Branch Tree Tree

A function to add up all integers in a tree:

    amount :: Tree -> Integer
    amount (Leaf x) = x
    amount (Branch t1 t2) = amount t1 + amount t2

All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."

In a language with implicit side effects, it is easy to implement this. We just change the third line of the amount function to check whether to call (+) or (*). In particular, we would not touch the other two lines.

How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?

I may be missing the point here, but having worked on large code bases with a wide variety of contributors before, I find it very advantageous that programmers are prevented from writing an amount function whose behaviour depends on command-line arguments without at least an indication in the type. The fact that the function cannot perform stuff like that is precisely the guarantee that the Haskell type gives me...
Dominique

On Thu, Jun 28, 2012 at 2:53 PM, Dominique Devriese
2012/6/27 Tillmann Rendel:

How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?
I may be missing the point here, but having worked on large code bases with a wide variety contributors before, I find it very advantageous that programmers are prevented from writing an amount function whose behaviour depends on command line arguments without at least an indication in the type. The fact that the function can not perform stuff like that is precisely the guarantee that the Haskell type gives me...
I don't think there's an answer that's uniformly right; it depends on whether you think of the input to the program, e.g. the environment, command-line arguments, etc., as 'constant' and, in some sense, pure. The latter are constant in the sense that they never change, but they are not fixed at compile time. Other languages effectively treat them as pure (by passing them directly to main), whereas Haskell chooses not to, which is probably the reason why getArgs has IO in its type (something that seems unintuitive at first).

That precedent supports the view that e.g. a command-line flag shouldn't affect behavior without the type reflecting it, e.g. by doing IO, but the de facto use of the unsafe IO trick means not everyone agrees.

In practice, the amount of time you have to spend testing each function, to make sure its IO doesn't trip up in some corner case, is usually greater than the amount of time a rewrite-for-IO would take.

Tom

On Jun 28, 2012 2:54 PM, "Dominique Devriese" <dominique.devriese@cs.kuleuven.be> wrote:
2012/6/27 Tillmann Rendel:

MightyByte wrote:

Of course every line of your program that uses a Foo will change if you switch to IO Foo instead.

But we often have to also change lines that don't use Foo at all. For example, here is the type of binary trees of integers:

    data Tree = Leaf Integer | Branch Tree Tree

A function to add up all integers in a tree:

    amount :: Tree -> Integer
    amount (Leaf x) = x
    amount (Branch t1 t2) = amount t1 + amount t2

All fine so far. Now, consider the following additional requirement: "If the command-line flag --multiply is set, the function amount computes the product instead of the sum."

In a language with implicit side effects, it is easy to implement this. We just change the third line of the amount function to check whether to call (+) or (*). In particular, we would not touch the other two lines.

How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?

I may be missing the point here, but having worked on large code bases with a wide variety of contributors before, I find it very advantageous that programmers are prevented from writing an amount function whose behaviour depends on command-line arguments without at least an indication in the type. The fact that the function cannot perform stuff like that is precisely the guarantee that the Haskell type gives me...
Dominique

On Tue, Jun 26, 2012 at 6:19 PM, Tillmann Rendel
How would you implement this requirement in Haskell without changing the line "amount (Leaf x) = x"?
The hflags library [http://hackage.haskell.org/package/hflags] seems to do that, however...
(I actually see three ways of doing this in Haskell, but all have serious drawbacks and do not fully solve the problem).
...it uses the unsafe IO trick [http://www.haskell.org/haskellwiki/Top_level_mutable_state], which may be one of those three ways you aren't fond of.
participants (20):

- Alberto G. Corona
- Alexander Solla
- Alvaro Gutierrez
- Anton Kholomiov
- Arlen Cuss
- Chris Dornan
- damodar kulkarni
- Dominique Devriese
- Ernesto Rodriguez
- Jonathan Geddes
- MightyByte
- Nathan Howell
- Ozgun Ataman
- Patrick Hurst
- Stephen Tetley
- Tillmann Rendel
- Tom Murphy
- Tony Morris
- wren ng thornton
- Yves Parès