
On Tue, Oct 12, 2010 at 8:56 AM, Uwe Schmidt wrote:
No, but there is no point in using a formalism that adds complexity without adding functionality. Arrows are more awkward to use than monads because they were intentionally designed to be less powerful than monads, in order to cover situations in which one could not use a monad. When your problem is solved by a monad, there is no point in using arrows, since arrows require you to jump through extra hoops to accomplish the same goal.
As I understood it, John Hughes invented arrows as a generalisation of monads, yet you say they are a less powerful concept. I'm a bit puzzled by that. Could you explain these different views?
These are the same thing, the difference is whether you're talking about how many different concepts are compatible with an abstract structure as opposed to what can be done universally with such a structure. Adding the ability to do more things with a structure necessarily reduces the number of concepts that structure applies to. Perhaps a more familiar example is the relationship Functor > Applicative > Monad. Going left to right adds power, making generic code more expressive but reducing the number of concepts that can be represented as instances; going right to left adds generality, limiting what generic code can do but enabling more instances. That said, I dislike calling arrows a generalization of monads--it's not incorrect as such, but I don't think it aids understanding. It really is much better to think of them as generalized functions, which they explicitly are if you look at the borrowed category theory terminology being used. They're generalized monads only in the sense that functions (a -> m b) form arrows in a category, as far as I can tell.
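To make that trade-off concrete, here is a minimal sketch (branch and branchA are illustrative names, not standard library functions): with Monad, a later action can be chosen based on an earlier result, while with only Applicative the shape of the computation is fixed up front.

    -- A Monad lets the second action depend on the first result.
    branch :: Monad m => m Bool -> m a -> m a -> m a
    branch mb mt mf = mb >>= \b -> if b then mt else mf

    -- The nearest Applicative analogue must run *both* branch effects
    -- and can only select between their pure results, because (<*>)
    -- cannot skip an action based on another action's outcome.
    branchA :: Applicative f => f Bool -> f a -> f a -> f a
    branchA fb ft ff = (\b t f -> if b then t else f) <$> fb <*> ft <*> ff

Every Monad supports branch, but no general branchA can skip the effect of the untaken branch.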
No, that is not at all the problem with arrows. The problem with arrows is that they are more restrictive than monads in two respects. First, unlike monads, in general they do not let you perform an arbitrary action in response to an input. ...
It's rather easy to define some choice combinators. Or am I missing the point?
The key point is that arrows in full generality--meaning instances of Arrow only, not other type classes--are not higher-order because no internal application operator is provided. The ArrowApply class gives you full higher-order generalized functions, at the cost of giving up some useful limitations (read: static guarantees about code behavior) that make reasoning about arrow-based structures potentially easier. So, a general arrow can perform different actions and produce different output based on input it receives, but it can't take *other arrows* and pick different ones to use depending on its input.
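As an illustration of what the app operator from ArrowApply adds (chooseAndRun is a hypothetical name, and this is only a sketch):

    import Control.Arrow

    -- app :: ArrowApply a => a (a b c, b) c lets an arrow receive
    -- another arrow as part of its input and run it; plain Arrow has
    -- no such operator. Here the input value selects which of two
    -- given arrows to apply to itself.
    chooseAndRun :: ArrowApply a => a Int c -> a Int c -> a Int c
    chooseAndRun small large =
      arr (\n -> (if n < 10 then small else large, n)) >>> app

With the ordinary function instance of ArrowApply, chooseAndRun (*2) (+1) 5 evaluates to 10, because the input 5 selects the first arrow.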
The combinator does the following: the input of the whole arrow is fed into g; g computes some result, and this result, together with the original input, is used for evaluating f'. ($<) is something similar to ($).
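Assuming that describes ($<), a minimal sketch of the same behaviour in terms of the standard ArrowApply class might look like this (HXT defines its version against its own arrow classes; f here stands for what the message calls f'):

    import Control.Arrow

    -- g computes a result from the input; that result selects an
    -- arrow (f c), which is then run on the original input.
    ($<) :: ArrowApply a => (c -> a b d) -> a b c -> a b d
    f $< g = (g &&& returnA) >>> first (arr f) >>> app

With the function arrow, (replicate $< (*2)) 3 yields [3,3,3,3,3,3]: g doubles the input to 6, and replicate 6 is then applied to the original input 3.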
There's no shortage of ways to deal with the issue, but they all rely on using combinator *functions*, not arrows. The result is that arrow-based expressions tend to be internally less flexible, following pre-defined paths, much as expressions using Applicative can't embed control flow the way ones using Monad can. Which is fine for many purposes, of course. Essentially, arrows lend themselves best to composing first-order computations into larger computations with a fixed structure.

If you find yourself forced to frequently use ArrowApply or other means of eliminating higher-order structure--e.g., anything that produces an arrow whose output type contains fewer instances of the arrow's own type constructor than its input type does--it may be worth considering whether arrows are really what you want to use. Personally, though, I think monads are overkill in many cases, and I strongly prefer, where possible, to use Applicative or Arrow.

- C.
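To make the "choice combinators" from earlier in the thread concrete, here is one such combinator function, sketched with ArrowChoice (ifA is an illustrative name):

    import Control.Arrow

    -- Branching structure is wired up before the arrow runs: both
    -- branches are fixed in advance, and the input only selects which
    -- pre-built path to take.
    ifA :: ArrowChoice a => (b -> Bool) -> a b c -> a b c -> a b c
    ifA p t e = arr (\x -> if p x then Left x else Right x) >>> (t ||| e)

With the function instance, ifA even (arr (*2)) (arr (+1)) maps 3 to 4 and 4 to 8 -- different inputs take different paths, but both paths exist statically in the arrow's structure.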