
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts. It seems to me that the relative performance of monadic implementations must be equivalent to imperative ones to provide a strong case for functional programming. For example, in the exception-handling monad it seems that the "bind" function must be called continuously to pass the Error value back to an error-handling function. However, in imperative languages we can make an immediate non-local transfer of control. Similar is the way State is handled. Do compilers for Haskell do any sort of optimization on these monadic operations, or is it all as inefficient as it looks? Eric Wohlstadter UCDavis Software Tools Lab

Eric Allen Wohlstadter
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts. It seems to me that the relative performance of monadic implementations must be equivalent to imperative ones to provide a strong case for functional programming. For example, in the exception-handling monad it seems that the "bind" function must be called continuously to pass the Error value back to an error-handling function. However, in imperative languages we can make an immediate non-local transfer of control. Similar is the way State is handled. Do compilers for Haskell do any sort of optimization on these monadic operations, or is it all as inefficient as it looks?
Monads such as the IO monad are entirely compiled away by an optimising Haskell compiler. So, for example, it would use techniques similar to those of a compiler for an imperative language to actually implement exceptions. You can regard the monad as a functional interface to the imperative functionality and the "simulation-based" implementation as a specification of its semantics, but the generated code is free to use whatever optimisations are possible as long as the same semantics is realised. Cheers, Manuel PS: These optimisations will usually only apply to monads that are defined as part of the system libraries of a Haskell system, not to user-defined ones (unless a user uses non-standard system features to implement the monad).
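To spell out the pattern Eric describes, here is a minimal sketch of an Either-style exception monad written as plain functions (the names Exc, bindE, throwE and safeDiv are made up for illustration; this is not the actual library definition). Read it as the kind of semantic specification Manuel mentions: the definition threads the error value through every bind, but a compiler that preserves these semantics is free to implement the chain as a direct non-local jump.

-- A computation either fails with an error e or succeeds with a value a.
data Exc e a = Err e | Ok a
  deriving Show

-- Bind: on success, continue; on failure, skip the rest of the pipeline.
-- This is the "passing the Error value back" the question refers to.
bindE :: Exc e a -> (a -> Exc e b) -> Exc e b
bindE (Err e) _ = Err e
bindE (Ok x)  k = k x

throwE :: e -> Exc e a
throwE = Err

safeDiv :: Int -> Int -> Exc String Int
safeDiv _ 0 = throwE "division by zero"
safeDiv x y = Ok (x `div` y)

-- Once a step fails, every remaining bindE is a single pattern match on Err,
-- which an optimising compiler can collapse into a jump.
example :: Exc String Int
example =
  safeDiv 10 2 `bindE` \a ->
  safeDiv a 0  `bindE` \b ->
  Ok (a + b)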

Eric Allen Wohlstadter
non-local transfer of control. Similar is the way State is handled. Do compilers for Haskell do any sort of optimization on these monadic operations, or is it all as inefficient as it looks?
On Wed, Jan 16, 2002 at 08:59:14PM +1100, Manuel M. T. Chakravarty wrote:
PS: These optimisations will usually only apply to monads that are defined as part of the system libraries of a Haskell system, not to user-defined ones (unless a user uses non-standard system features to implement the monad).
This piques my interest. It would be my hope that interprocedural analysis could unravel the control transfers implied by the monadic code, at least up to the point where the runtime system calls are introduced, with the natural proviso that some sort of time/space tradeoff threshold is used to throttle code bloat. Is it the case that this is possible in general, or are there fundamental barriers to doing this with e.g. dumps of appropriate IR bits during separate compilation? Cheers, Bill

I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts.
I think that's just bad wording. To take a rather trite point of view, in a language such as C /everything/ is done within a monad, and all types, even int, are really IO something (IO Int). In Haskell you may see costs associated with using Monads, but they are mainly to do with dealing with the difficulties that arise from having laziness everywhere.

Jón

--
Jón Fairbairn        Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road     jf@cl.cam.ac.uk
Cambridge CB1 3SZ    +44 1223 570179 (after 14:00 only, please!)
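A small illustration of Jón's point, with names of my own choosing: read through this lens, C's int getchar(void) really has the Haskell type IO Int, whereas a function of type Int -> Int is guaranteed to perform no effects at all.

import Data.Char (ord)

-- What C's `int getchar(void)` looks like once the hidden monad is written down.
getcharAsInt :: IO Int
getcharAsInt = fmap ord getChar

-- By contrast, a plain Int -> Int function cannot do any I/O.
square :: Int -> Int
square x = x * x

main :: IO ()
main = do
  c <- getcharAsInt      -- effects live inside IO ...
  print (square c)       -- ... pure computation stays outside it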

Let me offer a differing view of Monads.

Monads are a way to write type-safe imperative programs within a functional framework. It's just an advanced version of the PROGN kludge in LISP. Since they are based on a linear flow of "commands", they seem to necessitate sequential programming. For this reason I prefer not to use any monads whenever possible. It seems to eliminate the gist of functional programming.

OTOH, the nice thing about Monads is that they, I think, lift the semantics of an imperative language to the language level. When I look at the analysis of Algol-like languages in Tennent's "Semantics of Programming Languages" book I see the same thing as Monads: a way to sequence commands and pipe the effects.

Because of these reasons, I think the use of Monads is highly exaggerated. They are sort of like the over-use of "template" code in C++. In my experience, it doesn't work too well. You will end up writing monadic versions of everything, etc.

Of course, they are a nice way to abstract I/O. How else could you do it properly in a functional language? Monads effectively model the operation of a sequential machine (one I/O channel!) in a mathematical way. It's similar to the problem of trying to retrofit a parallel system into a sequential system. Consider a sequential input source, and a parallel algorithm to compute it. You will need an interface between the two.

In that respect, whenever I use monads, I feel like going back to an imperative language, because it will make my program much less concurrent. Wasn't the most important benefit of using a functional PL architecture independence? I think it has turned into "ease of programming" now, which isn't any more real. :)

Thanks,

On Wednesday 16 January 2002 17:05, you wrote:
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts.
I think that's just bad wording. To take a rather trite point of view, in a language such as C /everything/ is done within a monad, and all types, even int, are really IO something (IO Int).
In Haskell you may see costs associated with using Monads, but they are mainly to do with dealing with the difficulties that arise from having laziness everywhere.
Jón
--
Eray Ozkural (exa)

"Eray Ozkural (exa)" wrote:
Let me offer a differing view of Monads.
Monads are a way to write type-safe imperative programs within a functional framework. It's just an advanced version of PROGN kludge in LISP.
Since they are based on a linear flow of "commands", they seem to necessitate sequential programming. Because of this reason I prefer not to use any monads whenever possible. It seems to eliminate the gist of functional programming.
Right. When you need an explicitly ordered thread of control, you use a monad to `drive' that thread. The rest of the program can be expressed in purely functional terms.
OTOH, the nice thing about Monads is that they, I think, lift the semantics of an imperative language, to the language level. When I look at the analysis of Algol-like languages in Tennent's "Semantics of Programming Languages" book I see the same thing as Monads: a way to sequence commands, and pipe the effects.
Because of these reasons, I think the use of Monads are highly exaggerated. They are sort of like the over-use of "template" code in C++. In my experience, it doesn't work too well. You will end up writing monadic versions of everything, etc.
Naturally, that's a danger. Anything can be used to excess. The old saying goes `you can write FORTRAN in any language'. The point is to use monads to `abstract out' the imperativeness. One way to think of it is to look at a program as a partially ordered set of calculations; some calculations need to occur before others, other groups can occur in any order. In an imperative language you specify a total ordering (which is overkill). In a pure functional language (without monads), the only order you can supply is through function application (and the evaluation model) -- possible, but syntactically unwieldy. Monads merely provide a syntactic convenience.
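A small sketch of the contrast Artie describes, using a hand-rolled counter rather than any particular library: the first version sequences steps by threading the counter through every application by hand; the second gets the same ordering from a bind-like combinator, which is exactly the syntactic convenience a monad packages up.

-- Ordering by function application alone: the counter is passed and
-- returned explicitly at every step.
labelTwice :: Int -> ((Int, Int), Int)
labelTwice n0 =
  let (a, n1) = (n0, n0 + 1)
      (b, n2) = (n1, n1 + 1)
  in  ((a, b), n2)

-- The same program with a tiny state monad written as plain functions.
type Counter a = Int -> (a, Int)

fresh :: Counter Int
fresh n = (n, n + 1)

returnC :: a -> Counter a
returnC x n = (x, n)

bindC :: Counter a -> (a -> Counter b) -> Counter b
bindC m k n = let (x, n') = m n in k x n'

-- The counter is threaded by bindC, so it never appears in the program text.
labelTwice' :: Counter (Int, Int)
labelTwice' =
  fresh `bindC` \a ->
  fresh `bindC` \b ->
  returnC (a, b)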
Of course, they are a nice way to abstact I/O. How else could you do it properly in a functional language? Monads effectively model the operation of a sequential machine (one I/O channel!) in a mathematical way.
It's similar to the problem of trying to retrofit a parallel system in a sequential system. Consider a sequential input source, and a parallel algorithm to compute it. You will need an interface between the two.
Actually not; the monad itself _is_ the interface. In the absence of side-effects, you do not need to be concerned with synchronization, for example. Values are generated when needed.
In that respect, whenever I use monads, I feel like going back to an imperative language because it will make my program much less concurrent. Wasn't the most important benefit of using a functional PL architecture independence? I think it has turned to "ease of programming" now, which isn't any more real. :)
Indeed, some problems are best solved by imperative means. Some algorithms cannot be expressed in a purely functional framework as efficiently as in a side-effecting one. Still, the ability to easily reason about code (or prove the correctness of code) is quite compelling. One must choose the tool appropriate for the task. Of course, not being an expert (nor really playing one on the mailing list), YMMV.
Thanks,
On Wednesday 16 January 2002 17:05, you wrote:
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts.
I think that's just bad wording. To take a rather trite point of view, in a language such as C /everything/ is done within a monad, and all types, even int, are really IO something (IO Int).
In Haskell you may see costs associated with using Monads, but they are mainly to do with dealing with the difficulties that arise from having laziness everywhere.
Jón
-- Eray Ozkural (exa)
Comp. Sci. Dept., Bilkent University, Ankara
www: http://www.cs.bilkent.edu.tr/~erayo
GPG public key fingerprint: 360C 852F 88B0 A745 F31B EA0F 7C07 AE16 874D 539C
Cheers, --ag -- Artie Gold -- Austin, TX agold@bga.com "May you keep turning the pages. And may the book never end."

At 12:47 PM -0600 1/16/02, Eray Ozkural (exa) wrote:
Let me offer a differing view of Monads.
Monads are a way to write type-safe imperative programs within a functional framework. It's just an advanced version of PROGN kludge in LISP.
Since they are based on a linear flow of "commands", they seem to necessitate sequential programming. Because of this reason I prefer not to use any monads whenever possible. It seems to eliminate the gist of functional programming.
That seems a bit too strong. If you'll have a look at Hutton and Meijer's monadic parsing library (ParseLib.hs, which is distributed with Hugs and described in http://www.cs.nott.ac.uk/~gmh/pearl.pdf), you'll see an example of monadic programming which is as purely functional as you could ask for. What the monad contributes to this parsing library is a more powerful form of composition than the standard function-composition operator (.). It's ideal for combining parsers sequentially, and my undergraduate classes find it far easier to understand than the combination of (>*>) and `build` that's presented in Simon Thompson's http://www.cs.ukc.ac.uk/people/staff/sjt/craft2e/. It's true that the parsing monad does involve "sequential programming", but the sequentiality is inherent in the problem. It's bound to appear, one way or another, whenever parsers are combined sequentially. The monadic formulation's advantage is that the details of how sequentially combined parsers communicate with one another are hidden.
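For readers without ParseLib.hs to hand, here is a stripped-down sketch in the spirit of the Hutton/Meijer combinators (these are my own simplified definitions, not the library's; the parser is a plain function type to keep the example short). The bind operator does the sequencing and hides the plumbing of leftover input between parsers, and orP is choice in the MonadPlus style that comes up below.

-- A parser consumes a prefix of its input and returns every possible
-- (result, remaining input) pair; the empty list means failure.
type Parser a = String -> [(a, String)]

-- Consume exactly one character.
item :: Parser Char
item []     = []
item (c:cs) = [(c, cs)]

-- Monadic-style sequencing: run p, then feed its result and the leftover
-- input to f.  The inter-parser communication is hidden in here.
bindP :: Parser a -> (a -> Parser b) -> Parser b
bindP p f inp = concat [ f x rest | (x, rest) <- p inp ]

returnP :: a -> Parser a
returnP x inp = [(x, inp)]

-- Choice in the style of MonadPlus's mplus: offer both alternatives.
orP :: Parser a -> Parser a -> Parser a
orP p q inp = p inp ++ q inp

-- Two characters in sequence, paired up, with no input threading in sight.
twoChars :: Parser (Char, Char)
twoChars =
  item `bindP` \x ->
  item `bindP` \y ->
  returnP (x, y)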
... whenever I use monads, I feel like going back to an imperative language because it will make my program much less concurrent.
Not necessarily. In the case of two parsers combined in sequence, there's no concurrency to begin with, but Parser also belongs to MonadPlus, which provides for nondeterministic --i.e., concurrent-- choice.

--Ham
------------------------------------------------------------------
Hamilton Richards        Department of Computer Sciences
Senior Lecturer          Mail Code C0500
512-471-9525             The University of Texas at Austin
Taylor Hall 5.138        Austin, Texas 78712-1188
ham@cs.utexas.edu        hrichrds@swbell.net
------------------------------------------------------------------

On Monday 21 January 2002 17:34, you wrote:
At 12:47 PM -0600 1/16/02, Eray Ozkural (exa) wrote:
Let me offer a differing view of Monads.
Monads are a way to write type-safe imperative programs within a functional framework. It's just an advanced version of PROGN kludge in LISP.
Since they are based on a linear flow of "commands", they seem to necessitate sequential programming. Because of this reason I prefer not to use any monads whenever possible. It seems to eliminate the gist of functional programming.
That seems a bit too strong.
If you'll have a look at Hutton and Meijer's monadic parsing library (ParseLib.hs, which is distributed with Hugs and described in http://www.cs.nott.ac.uk/~gmh/pearl.pdf), you'll see an example of monadic programming which is as purely functional as you could ask for.
I don't think Eray was saying that using monads resulted in programs which aren't purely functional. He was just referring to the difference between the traditional purely functional programming style vs. the pseudo-imperative monadic 'do' programming style. I'm inclined to agree. For IO or dealing with mutable data in general I can understand the necessity of monads, but I haven't found much use for them otherwise.
What the monad contributes to this parsing library is a more powerful form of composition than the standard function-composition operator (.). It's ideal for combining parsers sequentially, and my undergraduate classes find it far easier to understand than the combination of (>*>) and `build` that's presented in Simon Thompson's http://www.cs.ukc.ac.uk/people/staff/sjt/craft2e/. It's true that the parsing monad does involve "sequential programming", but the sequentiality is inherent in the problem. It's bound to appear, one way or another, whenever parsers are combined sequentially. The monadic formulation's advantage is that the details of how sequentially combined parsers communicate with one another are hidden.
I don't know about the book you're talking about, but I can't see any reason why you need to make your parser combinators monadic to get a more powerful (for parsers) form of composition than you get with (.). IIRC, the principal justification for the arrows proposal was that some sophisticated functional parsing techniques can't be made monadic? (I forget why now :) Regards -- Adrian Hey
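For comparison, here is a rough sketch of non-monadic sequencing in the general style of the (>*>) and `build` combinators mentioned above (these are approximations, not Thompson's exact definitions): the composition is richer than (.), yet no bind is involved.

type Parse a b = [a] -> [(b, [a])]

-- Sequence two parsers, pairing their results.
(>*>) :: Parse a b -> Parse a c -> Parse a (b, c)
(p >*> q) inp = [ ((x, y), rest') | (x, rest) <- p inp, (y, rest') <- q rest ]

-- Post-process a parser's result with a pure function.
build :: Parse a b -> (b -> c) -> Parse a c
build p f inp = [ (f x, rest) | (x, rest) <- p inp ]

-- Recognise one given token.
token :: Eq a => a -> Parse a a
token t (x:xs) | x == t = [(x, xs)]
token _ _               = []

-- "ab" recognised by pure composition alone.
ab :: Parse Char String
ab = (token 'a' >*> token 'b') `build` \(x, y) -> [x, y]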

Eric Allen Wohlstadter wrote:
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts. It seems to me that the relative performance of monadic implementations must be equivalent to imperative ones to provide a strong case for functional programming. For example, in the exception-handling monad it seems that the "bind" function must be called continuously to pass the Error value back to an error-handling function. However, in imperative languages we can make an immediate non-local transfer of control. Similar is the way State is handled. Do compilers for Haskell do any sort of optimization on these monadic operations, or is it all as inefficient as it looks?
I agree with others who mentioned that viewing monads as simply providing a way to sequentialize things or to program imperatively is the wrong way to look at them. Remember that an equally satisfactory syntax for the "do" notation is "monad comprehensions", which have less of a sequential feel. And there are many instances of monads that have nothing to do with conventional imperative programming.

That said, the EFFICIENCY of monads is often poorly understood. To state the obvious, just because you program something in a monad doesn't make it efficient. In particular, a state monad does not guarantee that the state will actually be updated "in place". The monad only hides the state, and creates an OPPORTUNITY for it to be updated in place. But, for example, if you add an operation that returns the entire state as a value, then the "single-threadedness" property is in fact lost, and in-place update becomes as problematical as it always is.

This is an issue that I and my student (Chih-Ping Chen) studied a few years ago, culminating in his PhD thesis and the paper:

Chih-Ping Chen and Paul Hudak, "Rolling Your Own Mutable ADT -- A Connection between Linear Types and Monads", Proceedings of 24th ACM Symposium on Principles of Programming Languages, ACM Press, 1997, pp. 54-66.

the abstract of which is:

A methodology is described whereby a linear ADT may be rigorously encapsulated within a state monad. A CPS-like translation from the original ADT axioms into monadic ones is also described and proven correct, so that reasoning can be accomplished at the monadic level without exposing the state. The ADT axioms are suitably constrained by a linear type system to make this translation possible. This constraint also allows the state to be "updated in place," a notion made precise via a graph-rewrite operational semantics.

Best regards,

-Paul Hudak
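To make the point about single-threadedness concrete, here is a bare-bones state monad written as plain functions (illustrative only, not a library definition). Note that get hands the entire state out as an ordinary value; the moment a program can hold on to an old copy of the state and keep running, the property that would justify updating it in place is gone.

-- The state is only hidden by the abstraction, not protected by it:
-- underneath, it is an ordinary function argument.
type State s a = s -> (a, s)

returnS :: a -> State s a
returnS x s = (x, s)

bindS :: State s a -> (a -> State s b) -> State s b
bindS m k s = let (x, s') = m s in k x s'

-- put replaces the state; used alone, the state stays single-threaded.
put :: s -> State s ()
put s' _ = ((), s')

-- get returns the whole state as a value.
get :: State s s
get s = (s, s)

-- A copy of the old state escapes into `old`, so the update performed by
-- put cannot, in general, be done destructively.
example :: State [Int] Int
example =
  get           `bindS` \old ->
  put (0 : old) `bindS` \_   ->
  returnS (length old)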

I agree with others who mentioned that viewing monads as simply providing a way to sequentialize things or to program imperatively is the wrong way to look at them. <snip>

Yes, Lists are the classical example.
That said, the EFFICIENCY of monads is often poorly understood. To state the obvious, just because you program something in a monad doesn't make it efficient. In particular, a state monad does not guarantee that the state will actually be updated "in place". The monad only hides the state, and creates an OPPORTUNITY for it to be updated in place. But, for example, if you add an operation that returns the entire state as a value, then the "single-threadedness" property is in fact lost, and in-place update becomes as problematical as it always is.

Well... I'm having *lots* of problems here. :-)
I've felt the need to use State monads in two distinct situations:

(A) Keeping track of data. Examples:
- Genetic algorithms: how many crossovers, mutations, etc.; best (or average) individual fitness in each generation; some schemata evolution; etc.
- Neural networks: weights evolution, error evolution, etc.

(B) Accessing state. There are some situations where you'll probably rather hide some implementation details in some state monad instead of passing around lots of values. Example:
- Genetic algorithms: getting random numbers (instead of passing around all the StdGens you need), etc.

And I've seen two distinct approaches: using a State monad like the one provided with GHC (1), or a state monad like the one defined in the paper "Monads for the Working Haskell Programmer" (2).

In (1), I can see how I can take advantage of the monad and make updates in place using references. But then you'll have to either:
- pass the references as a parameter to the functions where you use them (needless to say that this defeats the (my) main purpose of using a state monad, which is, in case A, to keep track of data with minimal signature change), or
- encapsulate all the functions in the monad. I don't really like this solution... you won't even be able to test the functions in the interpreter... or
- use implicit references (?). Haven't thought that much about this one yet.

In (2), I can't really see how to make updates in place... I still didn't figure out how to make my own State monad and use updates in place. What I did was:
- define my state as some record (datatype with field labels), and
- create functions to get and update state...

This is doomed to fail! Even if I only want to update the state (like adding values to lists of values or incrementing some counters), I'll have to access that same state. So I'll have to write my own getStateField functions and updateStateField functions for every single field. Using datatypes with field labels makes it easy to define functions to get state fields (you get the whole state and then use the projection function)... but updating some field of the state gets more complicated: if you've got 6 counters, you'll have to write similar code 6 times (one workaround is sketched below). Damn, I seem to be wasting more time and writing more lines of code than if I were doing it in Pascal!! All just to keep track of some data; it is not even really 'part of the algorithm', I just want to evaluate how it is behaving....

Efficiency-wise it seems pretty bad: no updates in place, and I'm expecting lots of stack overflows unless I take special care dealing with laziness.

My problem with this is that I'm trying to do something trivial here... Forget about (B) (accessing state); I just want to keep track of data. Any newbie can do that in the imperative world, and it seems like some pretty advanced stuff here!! Programmers need to do this all the time; there should be some simple, well-documented way to do it. I don't want to have to be a monads expert to be able to do this kind of thing... (and I'm the kind of guy that wants to, eventually, become a monad expert... just not now... imagine how some other guy that has no interest in monads feels when he has the same problem.)

IMO (and maybe I'm wrong... this is just the way I feel right now) either we are missing some simple approach here, or at least we do not have adequate documentation about it at all.

Another state issue: sometimes I have some values that I want to keep constant through the whole algorithm.
Example (some very simple NNs, for instance):
- learning rate
- activation function
- maximum number of steps
- minimum error required

So I'll just declare them as 'constants'. But what if I decide I want the user to be able to choose? Now I've got two options:
- pass them around as values to all the functions, and signatures get HUGE, or
- just pass them to a higher-level function that will encapsulate the functions that use them... which is ugly and complicates everything, because you can't test the lower-level functions in the interpreter.

Am I missing something here, or is this really the best you can do?
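One way to cut down on both kinds of plumbing is to keep the counters in a single record and write updates with record-update syntax through one generic modify-style operation, instead of a hand-written updateStateField function per field, and to bundle the constants into a second record so signatures mention one value rather than six. Below is a minimal sketch under those assumptions (Stats, Config and all the field names are made up); it does not buy in-place update, it only tames the boilerplate and the signatures.

-- Bookkeeping data, kept in one record.
data Stats = Stats
  { crossovers  :: Int
  , mutations   :: Int
  , bestFitness :: Double
  } deriving Show

-- Read-only parameters, bundled so they travel as a single argument.
data Config = Config
  { learningRate :: Double
  , maxSteps     :: Int
  } deriving Show

-- A state-threading step over Stats, written as a plain function type.
type Step a = Stats -> (a, Stats)

returnStep :: a -> Step a
returnStep x s = (x, s)

bindStep :: Step a -> (a -> Step b) -> Step b
bindStep m k s = let (x, s') = m s in k x s'

-- One generic update operation: callers pass a record-update function,
-- so no per-field updateStateField definitions are needed.
modifyStats :: (Stats -> Stats) -> Step ()
modifyStats f s = ((), f s)

-- Reading a field is just projection from the whole state.
getStats :: Step Stats
getStats s = (s, s)

-- One record-update expression per use site.
oneGeneration :: Config -> Double -> Step ()
oneGeneration cfg fitness =
  modifyStats (\s -> s { crossovers = crossovers s + 1 }) `bindStep` \_ ->
  modifyStats (\s -> s { bestFitness = max (bestFitness s)
                                           (fitness * learningRate cfg) })

example :: ((), Stats)
example = oneGeneration (Config 0.1 100) 0.9 (Stats 0 0 0)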
This is an issue that I and my student (Chih-Ping Chen) studied a few years ago, culminating in his PhD thesis and the paper:
Chih-Ping Chen and Paul Hudak, "Rolling Your Own Mutable ADT -- A Connection between Linear Types and Monads", Proceedings of 24th ACM Symposium on Principles of Programming Languages, ACM Press, 1997, pp. 54-66.
Interesting, got to check it out... my main problem is that *I don't have the time* right now... sometimes you just want to get the job done. J.A.

Artie Gold writes:
One way to think of it is to look at a program as a partially ordered set of calculations; some calculations need to occur before others, other groups can occur in any order. In an imperative language you specify a total ordering (which is overkill).
This is a weak argument.

First of all, it is not the case that imperative coders always specify a total ordering: multitasking, threading and interrupts (and their projections into software, as in Unix signals and asynchronous exceptions) are ways of specifying a partial ordering when a total ordering would lose.

The potential execution speedups of the partial ordering are probably quite real, emphasis on potential, and people have been trying to exploit it for decades now, yet the fast languages are still the imperative ones and the impure functional ones, because they have the best fit to modern hardware designs. There is so much wealth invested in these modern cpu designs that it would cost 100s of billions of dollars to create competitors best fitted to pure functional languages. Hint: you need to write a new OS design to go with your new hardware design, then attract users and developers to the OS. And nobody's going to invest that dough there because no single organization stands to gain anywhere near what it would cost.

Note that modern cpu designs use "out-of-order" execution strategies; so on micro-second timescales they ignore the total ordering when doing so suits them and when it preserves semantics.

There's a case for FP, but parallel execution opportunities are not it. The case for FP is superior abstracting ability. See John Hughes, Why Functional Programming Matters. I guess you could view not having to specify instruction ordering when you do not need to as a sort of abstracting, but I have yet to see code that profits from that kind of abstraction. If that's your argument, show me code examples; that's what Hughes did in his paper.

No grudge against you, Artie Gold. This is pent-up frustration at many posters who've made the same argument.
"May you keep turning the pages. And may the book never end."
You, too. -- Richard Uhtenwoldt

Hi Richard,

On Tuesday 19 February 2002 06:57, Richard Uhtenwoldt wrote:
This is a weak argument.
First of all it is not the case that imperative coders always specify a total ordering: multitasking, threading and interrupts (and their projections into software as in Unix signals and asynchronous exceptions) are ways of specifying partial ordering when a total ordering would lose.
Note that modern cpu designs use "out-of-order" execution strategies; so on micro-second timescales they ignore the total ordering when doing so suits them and when it preserves semantics.
I might add to the above points the obvious: imperative programming languages in general do _not_ specify a total ordering. Each statement _can_ have a side effect[*], and the compiler is free to rearrange code such that it runs faster and preserves semantics, as written in any compiler textbook from the 1970s.

Sincerely,

[*] The compiler will know when this is possible, and when the statement is free of side effects. You can specify that no side effects occur in C++, for instance. And compilers also know that certain expressions are always free of side effects, etc.
--
Eray Ozkural (exa)
participants (12)
- Adrian Hey
- Artie Gold
- Eray Ozkural
- Eray Ozkural (exa)
- Eric Allen Wohlstadter
- Hamilton Richards
- Jon Fairbairn
- Jorge Adriano
- Manuel M. T. Chakravarty
- Paul Hudak
- Richard Uhtenwoldt
- William Lee Irwin III