
Eric Allen Wohlstadter
I see a lot of literature that says that monads "simulate" the effects of imperative programming concepts. It seems to me that the relative performance of monadic implementations must be equivalent to imperative ones to provide a strong case for functional programming. For example, in the exception-handling monad it seems that the "bind" function must be called continuously to pass the Error value back to an error-handling function. However, in imperative languages we can make an immediate non-local transfer of control. Similar is the way State is handled. Do compilers for Haskell do any sort of optimization on these monadic operations, or is it all as inefficient as it looks?
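(To make the concern concrete, here is a minimal sketch of the kind of error-handling monad the question describes. The names `Result`, `bindR`, and `safeDiv` are illustrative, not from any library: every `bindR` pattern-matches on the previous result and re-wraps the error, rather than jumping non-locally as an imperative `throw` would.)

```haskell
-- A hand-rolled error monad: each bind inspects the result and
-- short-circuits on failure by passing the error value along.
data Result e a = Err e | Ok a
  deriving Show

bindR :: Result e a -> (a -> Result e b) -> Result e b
bindR (Err e) _ = Err e   -- error is threaded through every single bind
bindR (Ok a)  f = f a

safeDiv :: Int -> Int -> Result String Int
safeDiv _ 0 = Err "division by zero"
safeDiv x y = Ok (x `div` y)

-- The error from the second division is re-examined and re-emitted
-- by each remaining bindR on the way out, instead of one direct jump.
example :: Result String Int
example =
  safeDiv 10 2 `bindR` \q ->
  safeDiv q 0  `bindR` \r ->
  Ok (r + 1)
```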
Monads such as the IO monad are entirely compiled away by an optimising Haskell compiler. So, for example, it would use techniques similar to those of a compiler for an imperative language to actually implement exceptions. You can regard the monad as a functional interface to the imperative functionality and the "simulation-based" implementation as a specification of its semantics, but the generated code is free to use whatever optimisations are possible as long as the same semantics is realised.

Cheers,
Manuel

PS: These optimisations will usually only apply to monads that are defined as part of the system libraries of a Haskell system, not to user-defined ones (unless a user uses non-standard system features to implement the monad).
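(Even for user-defined monads, GHC's ordinary inlining can often remove much of the bind plumbing. Below is a hedged sketch of a hand-rolled state monad of the kind the question mentions; once `>>=` and `tick` are inlined, the closures fuse into straight-line state threading. This is a general observation about inlining, not the guaranteed special treatment Manuel describes for built-in monads like IO.)

```haskell
-- A minimal user-defined state monad, threading the state explicitly.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s1) = f s
        (a, s2) = g s1
    in (h a, s2)

instance Monad (State s) where
  -- After inlining, a chain of binds reduces to sequential
  -- let-bindings that pass the state along, much like mutation.
  State g >>= k = State $ \s ->
    let (a, s') = g s in runState (k a) s'

-- Return the current counter and increment it.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

twoTicks :: State Int Int
twoTicks = tick >>= \a -> tick >>= \b -> pure (a + b)
```

Running `runState twoTicks 0` yields `(1, 2)`: the two ticks observe 0 and 1, and the final state is 2.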