

Hi Mikhail,
your type class:
class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
recover ∷ μ α → (e → μ α) → μ α
looks a lot like the MonadCatchIO type class from MonadCatchIO-transformers:
class MonadIO m => MonadCatchIO m where
catch :: E.Exception e => m a -> (e -> m a) -> m a
I haven't looked at your code in detail but are you sure your
continuation based AIO monad doesn't suffer from the same unexpected
behavior as the ContT monad transformer with regard to catching and
handling exceptions? The API docs of MonadCatchIO-transformers explain
the bug in detail:
http://hackage.haskell.org/packages/archive/MonadCatchIO-transformers/0.2.2....
Regards,
Bas
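The unexpected behavior referenced there can be reproduced with a small self-contained sketch; `catchC` below is a hypothetical lifting of `catch` to `ContT r IO` in the same spirit as the instance the linked documentation warns about (it is not the package's actual code):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Cont (ContT(..), runContT)
import qualified Control.Exception as E

-- Hypothetical lifting of catch to ContT r IO.
catchC :: E.Exception e => ContT r IO a -> (e -> ContT r IO a) -> ContT r IO a
catchC m h = ContT $ \k -> runContT m k `E.catch` \e -> runContT (h e) k

handler :: E.SomeException -> ContT () IO ()
handler e = lift (putStrLn ("handler saw: " ++ show e))

program :: ContT () IO ()
program = do
  catchC (lift (putStrLn "inside the protected action")) handler
  -- This throw happens *after* the catch block, yet the handler above
  -- still fires, because catchC wraps the captured continuation,
  -- i.e. everything that follows the catch.
  lift (E.throwIO (userError "boom"))

main :: IO ()
main = runContT program return
-- Prints "inside the protected action", then "handler saw: user error (boom)",
-- and then still dies with "boom" when the re-run continuation throws a
-- second time outside the handler.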
On 12 November 2011 13:55, Mikhail Vorozhtsov
On 11/12/2011 07:34 AM, Bas van Dijk wrote:
Are you going to release a new version of monad-control right away
Not just yet. I've split `monad-control` into two packages:
* `monad-control`: just exports `Control.Monad.Trans.Control`. This part is finished.
* `lifted-base`: wraps all modules of the `base` package which export `IO` computations and provides lifted versions instead. For example we have `Control.Exception.Lifted`, `Control.Concurrent.Lifted`, etc.
As you can imagine, the latter is a lot of boring work. Fortunately it's easy to do, so it will probably not take a lot of time. BTW if by any chance you want to help out, that would be much appreciated!
The repos can be found [here](https://github.com/basvandijk/lifted-base)
Maybe I should elaborate on why I stopped using monad-control and rolled out my own version of lifted Control.Exception in monad-abort-fd package. I'm CC-ing the Cafe just in case someone else might be interested in the matter of IO lifting.
Imagine we have a monad for multiprogramming with shared state:
-- ContT with a little twist. Both the constructor and runAIO
-- are NOT exported.
newtype AIO s α = AIO
  { runAIO ∷ ∀ β . (α → IO (Trace s β)) → IO (Trace s β) }

runAIOs ∷ MonadBase IO μ
        ⇒ s         -- The initial state
        → [AIO s α] -- The batch of programs to run.
                    -- If one program exits normally (without using
                    -- aioExit) or throws an exception, the whole batch
                    -- is terminated.
        → μ (s, Maybe (Attempt α)) -- The final state and the result.
                                   -- "Nothing" means deadlock or that
                                   -- all the programs exited with
                                   -- aioExit.
runAIOs = liftBase $ mask_ $ ... bloody evaluation ...

data Trace s α where
  -- Finish the program (without finishing the batch).
  TraceExit ∷ Trace s α
  -- Lift a pure value.
  TraceDone ∷ α → Trace s α
  -- A primitive to execute and the continuation.
  TraceCont ∷ Prim s α → (α → IO (Trace s β)) → Trace s β

-- Primitive operations
data Prim s α where
  -- State retrieval/modification
  PrimGet ∷ Prim s s
  PrimSet ∷ s → Prim s ()
  -- Scheduling point. The program is suspended until
  -- the specified event occurs.
  PrimEv ∷ Event e ⇒ e → Prim s (EventResult e)
  -- Scheduling point. The program is suspended until the state
  -- satisfies the predicate.
  PrimCond ∷ (s → Bool) → Prim s ()
  -- Run computation guarded with finalizer.
  PrimFin ∷ IO (Trace s α) → (Maybe α → AIO s β) → Prim s (α, β)
  -- Run computation guarded with exception handler.
  PrimHand ∷ IO (Trace s α) → (SomeException → AIO s α) → Prim s α

aioExit ∷ AIO s α
aioExit = AIO $ const $ return TraceExit

aioAfter ∷ (s → Bool) → AIO s ()
aioAfter cond = AIO $ return . TraceCont (PrimCond cond)

aioAwait ∷ Event e ⇒ e → AIO s (EventResult e)
aioAwait e = AIO $ return . TraceCont (PrimEv e)
runAIOs slices the programs at scheduling points and enqueues the individual pieces for execution, taking care of saving/restoring the context (finalizers and exception handlers).
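A small helper built only from the pieces above illustrates how the primitives compose (a hypothetical sketch; the AIO module is not published, and it assumes, as the comments suggest, that only PrimEv/PrimCond are scheduling points, so nothing else runs between the get and the put):

-- Wait until the shared state satisfies p, then update it in place.
awaitAndModify ∷ (s → Bool) → (s → s) → AIO s ()
awaitAndModify p f = do
  aioAfter p   -- suspend until the predicate holds (PrimCond)
  s ← get      -- PrimGet: not a scheduling point, so no interleaving here
  put (f s)    -- PrimSet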
The Functor/Applicative/Monad/MonadBase/etc instances are pretty trivial:
instance Functor (AIO s) where
  fmap f (AIO g) = AIO $ \c → g (c . f)

instance Applicative (AIO s) where
  pure a = AIO ($ a)
  (<*>) = ap

instance Monad (AIO s) where
  return = pure
  AIO g >>= f = AIO $ \c → g (\a → runAIO (f a) c)

instance MonadBase IO (AIO s) where
  liftBase io = AIO (io >>=)

instance MonadState s (AIO s) where
  get = AIO $ return . TraceCont PrimGet
  put s = AIO $ return . TraceCont (PrimSet s)

instance MonadAbort SomeException (AIO s) where
  abort = liftBase . throwIO

trace ∷ AIO s α → IO (Trace s α)
trace (AIO g) = g (return . TraceDone)

instance MonadRecover SomeException (AIO s) where
  recover m h = AIO $ return . TraceCont (PrimHand (trace m) h)

instance MonadFinally (AIO s) where
  finally' m f = AIO $ return . TraceCont (PrimFin (trace m) f)
  -- finally m = fmap fst . finally' m . const

-- No async exceptions in AIO
instance MonadMask () (AIO s) where
  getMaskingState = return ()
  setMaskingState = const id
Now we have a problem: we can throw/catch exceptions and install finalizers in AIO, but we can't use Control.Exception.Lifted because we can't declare a MonadBaseControl instance for our "ContT with limited interface".
So now, instead of trying to wrap IO control operations uniformly, I just reimplement them in terms of the classes I mentioned above (MonadAbort+MonadRecover+MonadFinally+MonadMask), for example:
bracket ∷ (MonadFinally μ, MonadMask m μ)
        ⇒ μ α → (α → μ β) → (α → μ γ) → μ γ
bracket acq release m = mask $ \restore → do
  a ← acq
  finally (restore $ m a) (release a)
It requires more typing (more classes => more instances), but it works for a broader class of monads and I get proper side effects in finalizers as a bonus.
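As an illustration of the payoff, a withFile-style helper written against these classes keeps the familiar IO shape; this is a sketch only, with the constraints copied from the bracket signature above and openFile/hClose taken from System.IO:

withFile' ∷ (MonadBase IO μ, MonadFinally μ, MonadMask m μ)
          ⇒ FilePath → IOMode → (Handle → μ α) → μ α
withFile' path mode = bracket (liftBase $ openFile path mode)
                              (liftBase . hClose)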
The code is on Hackage (monad-abort-fd) and on the GitHub (https://github.com/mvv/monad-abort-fd/blob/master/src/Control/Monad/Exceptio...)


Hello Mikhail,
(Apologies for reviving a two month old thread). Have you put some thought into whether or not these extra classes generalize in a way that is not /quite/ as general as MonadBaseControl (so as to give you the power you need) but still allow you to implement the functionality you are looking for? I'm not sure but it seems something along the lines of unwind-protect ala Scheme might be sufficient.
Edward
Excerpts from Mikhail Vorozhtsov's message of Mon Nov 14 01:25:34 -0500 2011:
Hi Mikhail,
your type class:
class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
  recover ∷ μ α → (e → μ α) → μ α
looks a lot like the MonadCatchIO type class from MonadCatchIO-transformers:
class MonadIO m => MonadCatchIO m where
  catch :: E.Exception e => m a -> (e -> m a) -> m a
I haven't looked at your code in detail but are you sure your continuation based AIO monad doesn't suffer from the same unexpected behavior as the ContT monad transformer with regard to catching and handling exceptions? Yes, I'm sure. The reason why it works is because finally/bracket/etc are not implemented on top of 'recover' (i.e. they don't assume that
On 11/14/2011 06:55 AM, Bas van Dijk wrote: throwing an exception is the only reason control can escape). The following class takes care of it:
class (Applicative μ, Monad μ) ⇒ MonadFinally μ where
  finally' ∷ μ α → (Maybe α → μ β) → μ (α, β)
  finally ∷ μ α → μ β → μ α
  finally m = fmap fst . finally' m . const
Finalizers have type 'Maybe α → μ β' so we can
(a) Thread transformer side effects properly:
instance MonadFinally μ ⇒ MonadFinally (L.StateT s μ) where
  finally' m f = L.StateT $ \s → do
    ~(~(mr, _), ~(fr, s'')) ← finally' (L.runStateT m s) $ \mbr → do
      let ~(a, s') = case mbr of
            Just ~(x, t) → (Just x, t)
            Nothing → (Nothing, s)
      L.runStateT (f a) s'
    return ((mr, fr), s'')
(b) Detect that control escaped computation before producing a result (finalizer will be called with 'Nothing' in that case).
instance (MonadFinally μ, Error e) ⇒ MonadFinally (ErrorT e μ) where
  finally' m f = ErrorT $ do
    ~(mr, fr) ← finally' (runErrorT m) $ \mbr →
      runErrorT $ f $ case mbr of
        Just (Right a) → Just a
        _ → Nothing
    return $ (,) <$> mr <*> fr
That of course does not mean that I can use 'finally' and friends with ContT, but I can use them with monads which are carefully /implemented/ on top of ContT but do not expose its full power to the users.
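Following the same pattern as the ErrorT instance above, a MaybeT instance might look like this (a sketch written for this thread, not copied from the package; join collapses the two layers of Maybe so the finalizer sees Nothing both when control escaped and when the action itself returned Nothing):

instance MonadFinally μ ⇒ MonadFinally (MaybeT μ) where
  finally' m f = MaybeT $ do
    ~(mr, fr) ← finally' (runMaybeT m) $ \mbr →
      runMaybeT $ f (join mbr)
    return $ (,) <$> mr <*> fr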


Excerpts from Mikhail Vorozhtsov's message of Tue Jan 10 09:54:38 -0500 2012:
Hello Mikhail, Hi.
(Apologies for reviving a two month old thread). Have you put some thought into whether or not these extra classes generalize in a way that is not /quite/ as general as MonadBaseControl (so as to give you the power you need) but still allow you to implement the functionality you are looking for? I'm not sure but it seems something along the lines of unwind-protect ala Scheme might be sufficient. I'm not sure I'm following you. The problem with MonadBaseControl is
On 01/10/2012 12:17 AM, Edward Z. Yang wrote: that it is /not/ general enough.
Sorry, I misspoke. The sense you are using it is "the more general a type class is, the more instances you can write for it." I think the design goal I'm going for here is, "a single signature which covers MonadAbort/Recover/Finally in a way that unifies them." Which is not more general, except in the sense that it "contains" more type classes (certainly not general in the mathematical sense.)
It assumes that you can eject/inject all the stacked effects as a value of some data type. Which works fine for the standard transformers because they are /implemented/ this way. But not for monads that are implemented in operational style, as interpreters, because the interpreter state cannot be internalized. This particular implementation bias causes additional issues when the lifted operation is not fully suited for ejecting/injecting. For example the `Control.Exception.finally` (or unwind-protect), where we can neither inject (at least properly) the effects into nor eject them from the finalizer. That's why I think that the whole "lift operations from the bottom" approach is wrong (the original goal was to lift `Control.Exception`). The right way would be to capture the control semantics of IO as a set of type classes[1] and then implement the general versions of the operations you want to lift. That's what I tried to do with the monad-abort-fd package.
I think this is generally a useful goal, since it helps define the semantics of IO more sharply. However, the exceptions mechanism is actually fairly well specified, as far as semantics go, see "A Semantics for Imprecise Exceptions" and "Asynchronous Exceptions in Haskell." So I'm not sure if monad-abort-fd achieves the goal of expressing these interfaces, in typeclass form, as well as allowing users to interoperate cleanly with existing language support for these facilities.
[1] Which turn out to be quite general: MonadAbort/Recover/Finally are just a twist of MonadZero/MonadPlus
Now that's interesting! Is this an equivalence, e.g. MonadZero/MonadPlus imply MonadAbort/Recover/Finally and vice-versa, or do you need to make some slight modifications? It seems that you somehow need support for multiple zeros of the monad, as well as a way of looking at them.
MonadMask is expectedly more specific, but permits a nice no-op implementation.
(See my earlier comments about asynchronous exceptions.) Cheers, Edward

Excerpts from Mikhail Vorozhtsov's message of Tue Jan 10 09:54:38 -0500 2012:
Hello Mikhail, Hi.
(Apologies for reviving a two month old thread). Have you put some thought into whether or not these extra classes generalize in a way that is not /quite/ as general as MonadBaseControl (so as to give you the power you need) but still allow you to implement the functionality you are looking for? I'm not sure but it seems something along the lines of unwind-protect ala Scheme might be sufficient. I'm not sure I'm following you. The problem with MonadBaseControl is
On 01/10/2012 12:17 AM, Edward Z. Yang wrote: that it is /not/ general enough.
On 01/10/2012 11:12 PM, Edward Z. Yang wrote:
Sorry, I misspoke. The sense you are using it is "the more general a type class is, the more instances you can write for it." I think the design goal I'm going for here is, "a single signature which covers MonadAbort/Recover/Finally in a way that unifies them." Which is not more general, except in the sense that it "contains" more type classes (certainly not general in the mathematical sense.)
Hm, MonadAbort/Recover/Finally are independent (I made MonadAbort a superclass of MonadRecover purely for reasons of convenience). It's easy to imagine monads that have an instance of one of the classes but not of the others.
It assumes that you can eject/inject all the stacked effects as a value of some data type. Which works fine for the standard transformers because they are /implemented/ this way. But not for monads that are implemented in operational style, as interpreters, because the interpreter state cannot be internalized. This particular implementation bias causes additional issues when the lifted operation is not fully suited for ejecting/injecting. For example the `Control.Exception.finally` (or unwind-protect), where we can neither inject (at least properly) the effects into nor eject them from the finalizer. That's why I think that the whole "lift operations from the bottom" approach is wrong (the original goal was to lift `Control.Exception`). The right way would be to capture the control semantics of IO as a set of type classes[1] and then implement the general versions of the operations you want to lift. That's what I tried to do with the monad-abort-fd package.
I think this is generally a useful goal, since it helps define the semantics of IO more sharply. However, the exceptions mechanism is actually fairly well specified, as far as semantics go, see "A Semantics for Imprecise Exceptions" and "Asynchronous Exceptions in Haskell." So I'm not sure if monad-abort-fd achieves the goal of expressing these interfaces, in typeclass form, as well as allowing users to interoperate cleanly with existing language support for these facilities.
I certainly didn't do that in any formal way. I was thinking something like this: if we identify the basic IO-specific control operations, abstract them but make sure they interact in the same way they do in IO, then any derivative IO control operation (implemented on top of the basic ones) could be lifted just by changing the type signature. The key words here are of course "interact in the same way".
[1] Which turn out to be quite general: MonadAbort/Recover/Finally are just a twist of MonadZero/MonadPlus
Now that's interesting! Is this an equivalence, e.g. MonadZero/MonadPlus imply MonadAbort/Recover/Finally and vice-versa, or do you need to make some slight modifications? It seems that you somehow need support for multiple zeros of the monad, as well as a way of looking at them.
Yes, something along those lines. MonadAbort is a generalization of MonadZero, MonadRecover is a specialization of the "left catch" version of MonadPlus (aka MonadOr). MonadFinally is about adapting

finally0 m f = do
  r ← m `morelse` (f Nothing >> mzero)
  (r, ) <$> f (Just r)

to the notion of failure associated with a particular monad.
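Spelled out, the laws being alluded to might read as follows (using morelse for the MonadOr operation; this is one reading of the claim, not text from the package):

  -- "left catch" for MonadOr:
  return a `morelse` b   =  return a
  -- recover refines it by letting the right operand inspect the failure:
  recover (return a) h   =  return a
  recover (abort e)  h   =  h e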

Hello Mikhail,

Sorry, long email. tl;dr I think it makes more sense for throw/catch/mask to be bundled together, and it is the case that splitting these up doesn't address the original issue monad-control was designed to solve.

~ * ~

Anders and I thought a little more about your example, and we first wanted to clarify which instance you thought was impossible to write. For example, we think it should be possible to write:

instance MonadBaseControl AIO AIO

Notice that the base monad is AIO: this lets you lift arbitrary AIO operations to a transformed AIO monad (e.g. ReaderT r AIO), but no more. If this is the instance you claimed was impossible, we'd like to try implementing it. Can you publish the full example code somewhere?

However, we don't think it will be possible to write:

instance MonadBaseControl IO AIO

Because this lets you leak arbitrary IO control flow into AIO (e.g. forkIO, with both threads having the ability to run the current AIO context), and as you stated, you only want to allow a limited subset of control flow in. (I think this was the intent of the original message.) Maybe client code doesn't want to be attached to AIO base monads, though; that's too restrictive for them. So they'd like to generalize a bit. So let's move on to the issue of your typeclass decomposition.

~ * ~

I don't think it makes too much sense to have things pick off a menu of Abort/Recover/Finally from a semantics perspective:

It's easy to imagine monads that have an instance of one of the classes but not of the others....

I'd like to see some examples. I hypothesize that most of such monads are incoherent, semantically speaking. For example, what does it mean to have a monad that can recover exceptions, but for which you can't throw exceptions? There are only a few options:

- You have special primitives which throw exceptions, distinct from Haskell's IO exceptions. In that case, you've implemented your own homebrew exception system, and all you get is a 'Catch MyException' which is too specific for a client who is expecting to be able to catch SomeExceptions.
- You execute arbitrary IO and allow those exceptions to be caught. But then I can implement Throw: I just embed an IO action that is throwing an exception.
- You only execute a limited subset of IO, but when they throw exceptions they throw ordinary IO exceptions. In this case, the client doesn't have access to any scarce resources except the ones you provided, so there's no reason for him to even need this functionality, unless he's specifically coding against your monad.

What does it mean to not have a Finally instance, but a Recover and Throw instance? Well, I can manually reimplement finally in this case (with or without support for asynchronous exceptions, depending on whether or not Mask is available): this is how the paper does it (finally is not a primitive.)

What does it mean to have a monad that can throw exceptions, but not catch them? This is any of the usual monads that can fail, of which we have many. And of course, you can't allow this in the presence of scarce resources since there is no way to properly deallocate them when exceptions are thrown. So it seems this is just ordinary failure which cannot be used in the presence of arbitrary IO.

What does it mean to have all of the above, but not to have a mask instance? One approach is to pretend asynchronous exceptions do not exist. As you do in your example, we can simply mask. I think this is a bit to give up, but I'll concede it. However, I don't think it's acceptable not to provide mask functionality, not mask your interpreter, and allow arbitrary IO. It's now impossible to properly implement many patterns without having subtle race conditions.

So it seems we should collapse these into one class, which conveniently maps straight to the semantics defined in "Asynchronous Exceptions in Haskell".

class MonadAsyncExc m where
  mask :: ((forall a. m a -> m a) -> m b) -> m b
  throw :: SomeException -> m ()
  catch :: m a -> (SomeException -> m a) -> m a

But you get to have your cake and eat it too: if you define a monad which is guaranteed to be run with asynchronous exceptions masked, you can define the 'mask' function to be a no-op and not violate any laws! Hooray!

But this is in fact what MonadCatchIO was, except that MonadCatchIO was formulated when we still had block/unblock and it required MonadIO. So a useful endeavour would be to punt the MonadIO superclass constraint and fix the definitions, and we have something that is usable to your case.

~ * ~

To contextualize this whole discussion, recall the insidious problem that *motivated* the creation of monad-control. Suppose that we've done all of the hard work and lifted all of the Control.Exception functions to our new formula (maybe we also need uninterruptibleMask in our class, but no big matter.) Now a user comes along and asks for 'alloca :: Storable a => (Ptr a -> IO b) -> IO b'. Crap! We need to redefine alloca to work for our new monad. So in comes the class Alloca. But there are a billion, billion of these functions. If only there was a way of automatically getting lifted versions of them... and this leads us back to MonadMorphIO, monad-peel, and finally monad-control.

The gist of your objection to this typeclass is that there is no principled way to tell the difference between:

forkIO :: IO a -> IO a

and

bracketed :: IO a -> IO a

Thus, it is impossible, with the type system given to us by IO, to write a function that will lift bracketed automatically, but not also lift forkIO. Long live the IO sin bin. So the best we can hope for is manually lifting every function we think is safe. You've attempted to work around this by defining a typeclass per function, but as I've tried to argue in this email, there is no way to reasonably reason about these functions in isolation. I also hope that I've convinced you that even with all of these typeclasses, we still only have a partial solution.

I think one way forward may be to create a mechanism by which we can identify functions which don't alter control flow, and cheaply lift them via a monad-control like mechanism, while not publishing that mechanism itself (except, maybe, as an unsafe combinator). But this is a discussion that is independent of your rant.

Cheers,
Edward
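For concreteness, a minimal sketch of what the plain IO instance of that proposed class could look like (hypothetical; the class above is only a proposal in this message, and the rank-2 method type needs RankNTypes):

{-# LANGUAGE RankNTypes #-}
import qualified Control.Exception as E

class MonadAsyncExc m where
  mask  :: ((forall a. m a -> m a) -> m b) -> m b
  throw :: E.SomeException -> m ()
  catch :: m a -> (E.SomeException -> m a) -> m a

instance MonadAsyncExc IO where
  mask  = E.mask    -- real asynchronous masking
  throw = E.throwIO
  catch = E.catch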

On 01/16/2012 02:15 PM, Edward Z. Yang wrote:
Hello Mikhail, Hi.
Sorry, long email. tl;dr I think it makes more sense for throw/catch/mask to be bundled together, and it is the case that splitting these up doesn't address the original issue monad-control was designed to solve.
~ * ~
Anders and I thought a little more about your example, and we first wanted to clarify which instance you thought was impossible to write.
For example, we think it should be possible to write:
instance MonadBaseControl AIO AIO
Notice that the base monad is AIO: this lets you lift arbitrary AIO operations to a transformed AIO monad (e.g. ReaderT r AIO), but no more. If this is the instance you claimed was impossible, we'd like to try implementing it. Can you publish the full example code somewhere?
However, we don't think it will be possible to write:
instance MonadBaseControl IO AIO
Because this lets you leak arbitrary IO control flow into AIO (e.g. forkIO, with both threads having the ability to run the current AIO context), and as you stated, you only want to allow a limited subset of control flow in. (I think this was the intent of the original message.) I was talking about the latter instance. And I don't want to lift IO control to AIO, I want an API that works with both IO and AIO.
The real problem with `MonadBaseControl IO AIO` is that the interpreter cuts actions into smaller pieces (at blocking operations) and then reschedules them in some order. For example, consider the following piece of code:

(`catch` \e → HANDLER) $ do
  FOO
  -- wait until the state satisfies the condition
  aioCond STATE_CONDITION
  BAR

The interpreter installs HANDLER and executes FOO. Then it restores exception handlers of some other program and executes a piece of it. Eventually, when the state satisfies STATE_CONDITION, the interpreter restores HANDLER and executes BAR. It's impossible to implement `catch` by some sort of straightforward delegation to `Control.Exception.catch`; you can't inject your logic into IO (at least without some bizarre inversion of control). I don't see why functions like `throwIO`, `catch`, `finally`, `bracket`, etc should be tied to IO or monads that allow lifting of IO actions. The functions make perfect sense in `ErrorT SomeException Identity` and in many other monads that have nothing to do with IO, why restrict ourselves? It's like exporting custom named `<|>` and friends for each parser combinator library and then reimplementing common Alternative idioms again and again.
Maybe client code doesn't want to be attached to AIO base monads, though; that's too restrictive for them. So they'd like to generalize a bit. So let's move on to the issue of your typeclass decomposition.
~ * ~
I don't think it makes too much sense to have things pick off a menu of Abort/Recover/Finally from a semantics perspective:
It's easy to imagine monads that have an instance of one of the classes but not of the others....
I'd like to see some examples. I hypothesize that most of such monads are incoherent, semantically speaking. For example, what does it mean to have a monad that can recover exceptions, but for which you can't throw exceptions?
Imagine a monad that disallows lifting of arbitrary IO actions, but can receive asynchronous events (which would probably be /implemented/ on top of asynchronous exceptions, but that's not important here) that behave like runtime-inserted left zeros.

COMPUTATIONALLY_HEAVY_CODE `recover` \level → GIVE_AN_APPROXIMATION_INSTEAD(level)

There are only a few options:
- You have special primitives which throw exceptions, distinct from Haskell's IO exceptions. In that case, you've implemented your own homebrew exception system, and all you get is a 'Catch MyException' which is too specific for a client who is expecting to be able to catch SomeExceptions.
- You execute arbitrary IO and allow those exceptions to be caught. But then I can implement Throw: I just embed an IO action that is throwing an exception.
- You only execute a limited subset of IO, but when they throw exceptions they throw ordinary IO exceptions. In this case, the client doesn't have access to any scarce resources except the ones you provided, so there's no reason for him to even need this functionality, unless he's specifically coding against your monad.
As I said, you think of IO too much. The purpose of monad-abort-fd is to provide a generic API for handling errors that have values attached to them and for guarding actions with finalizers (as the notion of failure can include more things besides the errors).
What does it mean to not have a Finally instance, but a Recover and Throw instance? Well, I can manually reimplement finally in this case (with or without support for asynchronous exceptions, depending on whether or not Mask is available): this is how the paper does it (finally is not a primitive.)
No, you can't. MonadFinally instances must (I really should write documentation) handle /all/ possible failures, not just exceptions. The naive

finally ∷ MonadRecover e μ ⇒ μ α → μ β → μ α
finally m f = do
  a ← m `recover` \e → f >> abort e
  void $ f
  return a

wouldn't work in `MaybeT IO`, just consider `finally mzero f`.
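A self-contained way to see this, without the monad-abort-fd classes, is to hand-roll the naive combinator for `MaybeT IO` and watch the cleanup get skipped (`finallyNaive` is a hypothetical name used only for this sketch):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT(..), runMaybeT)
import qualified Control.Exception as E

-- Naive: only IO exceptions trigger the cleanup action.
finallyNaive :: MaybeT IO a -> MaybeT IO b -> MaybeT IO a
finallyNaive m f = do
  a <- MaybeT $ runMaybeT m `E.catch` \e -> do
         _ <- runMaybeT f
         E.throwIO (e :: E.SomeException)
  _ <- f
  return a

main :: IO ()
main = do
  _ <- runMaybeT $ finallyNaive (MaybeT (return Nothing) :: MaybeT IO ())  -- i.e. mzero
                                (lift (putStrLn "cleanup"))
  -- "cleanup" is never printed: the Nothing escapes without raising an
  -- exception, so neither branch of finallyNaive runs the finalizer.
  putStrLn "done"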
What does it mean to have a monad that can throw exceptions, but not catch them? This is any of the usual monads that can fail, of which we have many. And of course, you can't allow this in the presence of scarce resources since there is no way to properly deallocate them when exceptions are thrown. So it seems this is just ordinary failure which cannot be used in the presence of arbitrary IO.
Again, too much IO, it is not relevant.
What does it mean to have all of the above, but not to have a mask instance? One approach is to pretend asynchronous exceptions do not exist. As you do in your example, we can simply mask. I think this is a bit to give up, but I'll concede it. However, I don't think it's acceptable not to provide mask functionality, not mask your interpreter, and allow arbitrary IO. It's now impossible to properly implement many patterns without having subtle race conditions.
In my particular case I feel no need for asynchronous exceptions as I have a concurrency primitive that is used for interrupting:

sh ← newOneShot
runAIOs s0
  [ do aioAwait sh
       info "Service shutdown requested"
  , ... ]

And of course I do not liftBase blocking IO functions. The whole point of AIO is to reschedule on blocking operations after all.
So it seems we should collapse these into one class, which conveniently maps straight to the semantics defined in "Asynchronous Exceptions in Haskell".
class MonadAsyncExc m where
  mask :: ((forall a. m a -> m a) -> m b) -> m b
  throw :: SomeException -> m ()
  catch :: m a -> (SomeException -> m a) -> m a
But you get to have your cake and eat it too: if you define a monad which is guaranteed to be run with asynchronous exceptions masked, you can define the 'mask' function to be a no-op and not violate any laws! Hooray!
But this is in fact what MonadCatchIO was, except that MonadCatchIO was formulated when we still had block/unblock and it required MonadIO. So a useful endeavour would be to punt the MonadIO superclass constraint and fix the definitions, and we have something that is usable to your case.
The problem with MonadCatchIO is that it has no proper `finally`, see my `MaybeT IO` example.
~ * ~
To contextualize this whole discussion, recall the insidious problem that *motivated* the creation of monad-control. Suppose that we've done all of the hard work and lifted all of the Control.Exception functions to our new formula (maybe we also need uninterruptibleMask in our class, but no big matter.) Now a user comes along and asks for 'alloca :: Storable a => (Ptr a -> IO b) -> IO b'. Crap! We need to redefine alloca to work for our new monad. So in comes the class Alloca. But there are a billion, billion of these functions.
I don't think that's true. There is actually a limited set (induced mainly by primops) of "difficult" functions that require new abstractions. Alloca is a pain only because it is implemented as `IO $ \s → PRIMOP_SPAGHETTI`. I don't know if the spaghetti can be twisted to look something like:

.. = mask $ \restore → do
  mbarr ← liftBase $ IO $ newAlignedPinnedByteArray# size align
  finally (...) $ do
    ...
    restore (action ptr)

It depends on the semantics of the primops involved. Fortunately, most of IO control operations can be easily generalized just by changing the type signature:

import qualified Control.Concurrent.MVar as MV

takeMVar ∷ MonadBase IO μ ⇒ MVar α → μ α
takeMVar = liftBase . MV.takeMVar

putMVar ∷ MonadBase IO μ ⇒ MVar α → α → μ ()
putMVar v = liftBase . MV.putMVar v

withMVar ∷ MVar α → (α → IO β) → IO β
-- withMVar ∷ (MonadBase IO μ, MonadFinally μ, MonadMask m μ)
--          ⇒ MVar α → (α → μ β) → μ β
-- works too, /without changing the body/!
withMVar v f = mask $ \restore → do
  a ← takeMVar v
  restore (f a) `finally` putMVar v a

This `withMVar` would work as expected in IO, AIO, and transformer stacks on top of them.
The gist of your objection to this typeclass is that there is no principled way to tell the difference between:
forkIO :: IO a -> IO a
and
bracketed :: IO a -> IO a
Thus, it is impossible, with the type system given to us by IO, to write a function that will lift bracketed automatically, but not also lift forkIO. Long live the IO sin bin. So the best we can hope for is manually lifting every function we think is safe. You've attempted to work around this by defining a typeclass per function, but as I've tried to argue in this email, there is no way to reasonably reason about these functions in isolation. I also hope that I've convinced you that even with all of these typeclasses, we still only have a partial solution.
I hope you see that my approach is entirely different. I'm not interested in lifting IO operations we have in `base` by some clever ad-hoc machinery, I want to generalize (where possible) their types.

Summary:
1. Exception handling and finalizers are generic concepts that make sense in many monads and therefore should not be tied to IO.
2. Regular IO functions can be generalized just by prefixing them with `liftBase $`. This will make them work in any `MonadBase IO μ`.
3. Most IO control operations can be generalized just by changing their type signatures to use MonadAbort/Recover/Finally/Mask (without changing the body; maybe with just a few `liftBase`s here and there). This will make them work at least in IO, AIO, and transformer stacks on top of them.
4. IO control operations that rely on passing IO actions to a primop (like, presumably, `alloca`) should be generalized the monad-control way (which is OK, I don't see how I can lift it to AIO anyway, even if I try to do it manually). Partial generalizations like `alloca' ∷ (Storable α, MonadBase IO μ) ⇒ (Ptr α → IO β) → μ β` might also be useful.
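Point 4's partial generalization is short enough to spell out; here is a sketch (ASCII spelling of the signature above) using the real Control.Monad.Base.liftBase from transformers-base and Foreign.Marshal.Alloc.alloca:

import Control.Monad.Base (MonadBase, liftBase)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (Ptr)
import Foreign.Storable (Storable)

-- Only the outer action is generalized; the callback stays in IO.
alloca' :: (Storable a, MonadBase IO m) => (Ptr a -> IO b) -> m b
alloca' = liftBase . alloca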

On Mon, Jan 16, 2012 at 08:17, Mikhail Vorozhtsov < mikhail.vorozhtsov@gmail.com> wrote:
As I said, you think of IO too much.
I think you two are talking about two related but different things; monad-control is solving a *specific* problem with IO, but you want something more general. I think you'll find your more general solution will involve monad-control, but will not satisfy all the use cases for monad-control (because IO is a sewer, and sometimes you need to deal with the sewer as such).
--
brandon s allbery                                   allbery.b@gmail.com
wandering unix systems administrator (available)   (412) 475-9364 vm/sms

Hello Mikhail, Thanks for continuing to be willing to participate in a lively discussion. :-) Excerpts from Mikhail Vorozhtsov's message of Mon Jan 16 08:17:57 -0500 2012:
On 01/16/2012 02:15 PM, Edward Z. Yang wrote:
Anders and I thought a little more about your example, and we first wanted to clarify which instance you thought was impossible to write. [snip] I was talking about the latter instance.
Great, that's what I thought.
And I don't want to lift IO control to AIO, I want an API that works with both IO and AIO.
Yup! But we're going to have to define what we mean by "API that works with both IO and AIO".
The real problem with `MonadBaseControl IO AIO` is that the interpreter cuts actions into smaller pieces (at blocking operations) and then reschedules them in some order. [snip]
I don't think what you've said here is inconsistent with my claim that "AIO needs to limit IO control flow", but I must admit we've been working with an approximation of AIO because the full code hasn't been published anywhere.
I don't see why functions like `throwIO`, `catch`, `finally`, `bracket`, etc should be tied to IO or monads that allow lifting of IO actions. The functions make perfect sense in `ErrorT SomeException Identity` and in many other monads that have nothing to do with IO, why restrict ourselves?
I agree. (Later in the message I'll propose a new MonadCatchIO instance that drops the MonadIO superclass.) The first section was intended to make sure I understood what you were talking about. Based on your response, I *think* I interpreted your problem correctly.
I don't think it makes too much sense to have things pick off a menu of Abort/Recover/Finally from a semantics perspective:
It's easy to imagine monads that have an instance of one of the classes but not of the others....
I'd like to see some examples. I hypothesize that most of such monads are incoherent, semantically speaking. For example, what does it mean to have a monad that can recover exceptions, but for which you can't throw exceptions? Imagine a monad that disallows lifting of arbitrary IO actions, but can receive asynchronous events (which would probably be /implemented/ on top of asynchronous exceptions, but that's not important here) that behave like runtime-inserted left zeros.
COMPUTATIONALLY_HEAVY_CODE `recover` \level → GIVE_AN_APPROXIMATION_INSTEAD(level)
The vehicle of implementation here is kind of important. If they are implemented as asynchronous exceptions, I can in fact still throw in this universe: I just attempt to execute the equivalent of 'undefined :: m a'. Since asynchronous exceptions can always be thrown from pure code, I can /always/ do this, no matter how you lock down the types. Indeed, I think implementing this functionality on asynchronous exceptions is a good idea, because it lets you handle nonterminating pure code nicely, and allows you to bail out even when you're not doing monadic execution. But, for the sake of argument, so let's suppose that they're not done as asynchronous exceptions; essentially, you define some 'safe points' which have the possibility to raise exceptions. In this case, I claim there will never be a *technical* difficulty against implementing manually thrown exceptions; the concern here is "you don't want the user to do that." With some sets of operations, this isn't a very strong injunction; if there is a deterministic set of operations that results in an error, the user can make a gadget which is semantically equivalent to a thrown exception. I don't think I can argue anything stronger here, so I concede the rest of the point. So, to summarize, such an interface (has recovery but not masking or throwing) always has a trivial throw instance unless you are not implementing it on top of asynchronous exceptions. Your example reminds me of what happens in pure code. In this context, we have the ability to throw errors and map over errors (although I'm not sure how people feel about that, semantically), but not to catch them or mask them. But I don't think we need another typeclass for that.
There are only a few options:
- You have special primitives which throw exceptions, distinct from Haskell's IO exceptions. In that case, you've implemented your own homebrew exception system, and all you get is a 'Catch MyException' which is too specific for a client who is expecting to be able to catch SomeExceptions.
- You execute arbitrary IO and allow those exceptions to be caught. But then I can implement Throw: I just embed an IO action that is throwing an exception.
- You only execute a limited subset of IO, but when they throw exceptions they throw ordinary IO exceptions. In this case, the client doesn't have access to any scarce resources except the ones you provided, so there's no reason for him to even need this functionality, unless he's specifically coding against your monad. As I said, you think of IO too much.
Guilty as charged!
The purpose of monad-abort-fd is to provide a generic API for handling errors that have values attached to them and for guarding actions with finalizers (as the notion of failure can include more things besides the errors).
Here's the reason I'm so fixated on IO: There is a very, /very/ bright line between code that does IO, and pure code. You can have arbitrary stacks of monads, but at the end of the day, if IO is not at the end of the line, your code is pure. If your code is pure, you don't need finalizers. (Indeed, this is the point of pure code...) I can abort computations willy nilly. I can redo them willy nilly. You get a lot of bang for your buck if you're pure. I don't understand what the "too much IO" objection is about. If there is no IO (now, I don't mean a MonadIO instance, but I do mean, in order to interpret the monad), it seems to me that this API is not so useful.
No, you can't. MonadFinally instances must (I really should write documentation) handle /all/ possible failures, not just exceptions. The naive
finally ∷ MonadRecover e μ ⇒ μ α → μ β → μ α
finally m f = do
  a ← m `recover` \e → f >> abort e
  void $ f
  return a
wouldn't work in `MaybeT IO`, just consider `finally mzero f`.
I think that's incoherent. To draw out your MaybeT IO example to its logical conclusion, you've just created two types of zeros, only one of which interacts with 'recover' but both of which interact with 'finally'. Down this inconsistency lies madness! Really, we'd like 'recover' to handle Nothing's: and actually we can: introduce a distinguished SomeException value that corresponds to nothings, and setup abort to transform that not into an IO exception but a pure Nothing value. Then 'finally' as written works.
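One way to read that suggestion in code (the names here are made up for the sketch, not taken from any package):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT(..))
import qualified Control.Exception as E

-- A distinguished exception standing for MaybeT's Nothing.
data NothingException = NothingException deriving Show
instance E.Exception NothingException

-- An abort for MaybeT IO that maps the distinguished exception back to
-- a pure Nothing and raises everything else in IO as usual.
abortMaybeT :: E.SomeException -> MaybeT IO a
abortMaybeT e = case E.fromException e of
  Just NothingException -> MaybeT (return Nothing)
  Nothing               -> lift (E.throwIO e)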
What does it mean to have all of the above, but not to have a mask instance? One approach is to pretend asynchronous exceptions do not exist. As you do in your example, we can simply mask. I think this is a bit to give up, but I'll concede it. However, I don't think it's acceptable not to provide mask functionality, not mask your interpreter, and allow arbitrary IO. It's now impossible to properly implement many patterns without having subtle race conditions. In my particular case I feel no need for asynchronous exceptions as I have a concurrency primitive that is used for interrupting:
sh ← newOneShot
runAIOs s0
  [ do aioAwait sh
       info "Service shutdown requested"
  , ... ]
Sure. And the point here is conceded, and accounted for later.
The problem with MonadCatchIO is that it has no proper `finally`, see my `MaybeT IO` example.
Addressed above.
To contextualize this whole discussion, recall the insiduous problem that *motivated* the creation of monad-control. Suppose that we've done all of the hard work and lifted all of the Control.Exception functions to our new formula (maybe we also need uninterruptibleMask in our class, but no big matter.) Now a user comes along and asks for 'alloca :: Storable a => (Ptr a -> IO b) -> IO b'. Crap! We need to redefine alloca to work for our new monad. So in comes the class Alloca. But there are a billion, billion of these functions. I don't think that's true. There is actually a limited set (induced mainly by primops) of "difficult" functions that require new abstractions. Alloca is a pain only because it implemented as a `IO $ \s → PRIMOP_SPAGHETTI`. I don't know if the spaghetti can be twisted to look something like:
.. = mask $ \restore → do
  mbarr ← liftBase $ IO $ newAlignedPinnedByteArray# size align
  finally (...) $ do
    ...
    restore (action ptr)
It depends on the semantics of the primops involved. Fortunately, most of IO control operations can be easily generalized just by changing the type signature:
import qualified Control.Concurrent.MVar as MV
takeMVar ∷ MonadBase IO μ ⇒ MVar α → μ α
takeMVar = liftBase . MV.takeMVar

putMVar ∷ MonadBase IO μ ⇒ MVar α → α → μ ()
putMVar v = liftBase . MV.putMVar v

withMVar ∷ MVar α → (α → IO β) → IO β
-- withMVar ∷ (MonadBase IO μ, MonadFinally μ, MonadMask m μ)
--          ⇒ MVar α → (α → μ β) → μ β
-- works too, /without changing the body/!
withMVar v f = mask $ \restore → do
  a ← takeMVar v
  restore (f a) `finally` putMVar v a
This `withMVar` would work as expected in IO, AIO, and transformer stacks on top of them.
OK, there are several points involved here. First, we might wonder, how many operations fundamentally are resistant to that treatment? Well, we can look at the primop list:

catch#
raiseIO#
maskAsyncExceptions#
maskUninterruptible#
unmaskAsyncExceptions#
atomically#
catchRetry#
catchSTM#
check#
fork#
forkOn#
finalizeWeak#

So, using the method you describe, we may be able to get away with thirteen typeclasses. Ok... (Notice, by the way, that finally# is not on this list! So if /this/ was what you were thinking, I was probably thrown off by the fact that you included typeclasses for both primitive functions as well as derived ones.)

Second, we might wonder, how tractable is this approach? Certainly, it gives us another principled way of lifting all of the "hard" functions, assuming that all of the primops are taken care of. Of course, there are a lot of objections:

- It requires copy pasting code (and if upstream changes their implementation, so must we). I contrast this with the lifted-base method, which, while annoying and repetitive, does not involve copy-pasted code.
- Un-transforming primop'd code undoes important performance optimizations

But I think there is a very important point to concede here, which is that without language support it may be impossible to implement 'generic' versions of these derived functions from the specialized ones automatically. lifted-base achieves the appearance of automatically lifting, but that's only because directly calling the original implementations is acceptable.
I hope you see that my approach is entirely different. I'm not interested in lifting IO operations we have in `base` by some clever ad-hoc machinery, I want to generalize (where possible) their types.
And the logical conclusion of this is that, not only do you need to create a function for every function you want to generalize, you also need to steal all of the implementations. Which suggests that actually you want to be talking to the GHC team about this endeavour, since the situation is a bit less bad if base is maintaining the generalized versions as well as the specialized ones: the specialized versions can just be inlined versions of the generalized ones.
Summary: 1. Exception handling and finalizers are generic concepts that make sense in many monads and therefore should not be tied to IO.
I disagree, and submit the modification: "Exception handling and finalizers are generic concepts that make sense in many IO-based monads with alternative control flow or semantics, and therefore should not be tied to IO-based monads that must precisely implement IO control flow." Exception handling is well understood for pure code, and finalizers unnecessary.
2. Regular IO functions can be generalized just by prefixing them with `liftBase $`. This will make them work in any `MonadBase IO μ`.
I disagree, and submit the modification: "Regular IO functions can be lifted without respect to control flow by prefixing them with liftBase. This will make them work in any `MonadBase IO mu'." Just because I can lift a function, doesn't mean it's been generalized; in particular, any more primitive functions it calls continue to be the original, and not generalized versions.
3. Most IO control operations can be generalized just by changing their type signatures to use MonadAbort/Recover/Finally/Mask (without changing the body; maybe with just a few `liftBase`s here and there). This will make them work at least in IO, AIO, and transformer stacks on top of them.
I agree, but submit that of the MonadAbort/Recover/Finally/Mask quartet Finally should be dropped and Abort/Recover/Mask unified into one typeclass.
4. IO control operations that rely on passing IO actions to a primop (like, presumably, `alloca`) should be generalized the monad-control way (which is OK, I don't see how I can lift it to AIO anyway, even if I try to do it manually). Partial generalizations like `alloca' ∷ (Storable α, MonadBase IO μ) ⇒ (Ptr α → IO β) → μ β` might also be useful.
I (surprisingly) disagree, and submit that they /can/ be generalized the copy pasting way, and if such a change is coordinated with the base teams, could be the preferred mechanism.

Summary:
1. The only known semantics of asynchronous exceptions involves the primitives abort, recover and mask, and this semantics can be converted into one that is synchronous if we supply a no-op definition for mask and require the semantics stay the same. It seems poor for this semantics to grow to include finally or for this semantics to contract to have abort without recover, or recover without abort. But this is not a fundamental point, and while there are lots of different exception handling semantics, it's possible specialized applications could make sense with limited combinators: however, *show me the semantics.*
2. Finalizer handling is not necessary in pure code.
3. A way of cleaning up the IO sin bin would be to generalize appropriate primitive functions over appropriate type classes, and then copy pasting the source for all derived definitions. I submit that doing so as a third party is a bad idea. I submit that we can do this incrementally, and that it's not that bad of an idea if you can convince the GHC team it's a good idea. Exception handling might be a good place to start. (An issue to consider: what about the interaction of orthogonal features?)
4. (3) is the only way of getting an appropriate behavior for models of IO with weird control flow. So, I agree with you, monad-control is no good for AIO, and your essential idea is necessary. (Though, Anders claims that at some point he figured out how to fix the ContT problem for monad-peel; I should poke him again on this point.)

Cheers,
Edward

I don't think it makes too much sense to have things pick off a menu of Abort/Recover/Finally from a semantics perspective:
It's easy to imagine monads that have an instance of one of the classes but not of the others....
I'd like to see some examples. I hypothesize that most of such monads are incoherent, semantically speaking. For example, what does it mean to have a monad that can recover exceptions, but for which you can't throw exceptions? Imagine a monad that disallows lifting of arbitrary IO actions, but can receive asynchronous events (which would probably be /implemented/ on top of asynchronous exceptions, but that's not important here) that behave like runtime-inserted left zeros.
COMPUTATIONALLY_HEAVY_CODE `recover` \level → GIVE_AN_APPROXIMATION_INSTEAD(level)
On 01/17/2012 03:00 AM, Edward Z. Yang wrote: [snip]
The vehicle of implementation here is kind of important. If they are implemented as asynchronous exceptions, I can in fact still throw in this universe: I just attempt to execute the equivalent of 'undefined :: m a'. Since asynchronous exceptions can always be thrown from pure code, I can /always/ do this, no matter how you lock down the types. Indeed, I think implementing this functionality on asynchronous exceptions is a good idea, because it lets you handle nonterminating pure code nicely, and allows you to bail out even when you're not doing monadic execution.
I don't like where this is going. Arguments like this destroy the whole point of having abstract interfaces. I took liftBase from you and now you are picking the lock on my back door with raise#. I can deny this by hiding the constructor of the asynchronous exception I use for passing `level` in my implementation. But seriously. Next thing I know you will be sneaking down my chimney with `unsafePerformIO` in your hands. There is no question that the type system cannot protect us from all the tricks the RTS provides, but we still can rely on conventions of use. Personally I'm not a fan of exceptions in pure code. If something can fail it should be reflected in its type, otherwise I consider it a bug. The only scenario I'm comfortable with is using asynchronous exceptions to interrupt some number crunching.
But, for the sake of argument, let's suppose that they're not done as asynchronous exceptions; essentially, you define some 'safe points' which have the possibility to raise exceptions. In this case, I claim there will never be a *technical* difficulty against implementing manually thrown exceptions; the concern here is "you don't want the user to do that." With some sets of operations, this isn't a very strong injunction; if there is a deterministic set of operations that results in an error, the user can make a gadget which is semantically equivalent to a thrown exception. I don't think I can argue anything stronger here, so I concede the rest of the point.
So, to summarize, such an interface (recovery but not masking or throwing) always has a trivial throw instance, as long as it is implemented on top of asynchronous exceptions.
Your example reminds me of what happens in pure code. In this context, we have the ability to throw errors and map over errors (although I'm not sure how people feel about that, semantically), but not to catch them or mask them. But I don't think we need another typeclass for that.
The purpose of monad-abort-fd is to provide a generic API for handling errors that have values attached to them and for guarding actions with finalizers (as the notion of failure can include more things besides the errors).
Here's the reason I'm so fixated on IO: There is a very, /very/ bright line between code that does IO, and pure code. You can have arbitrary stacks of monads, but at the end of the day, if IO is not at the end of the line, your code is pure.
If your code is pure, you don't need finalizers. (Indeed, this is the point of pure code...) I can abort computations willy nilly. I can redo them willy nilly. You get a lot of bang for your buck if you're pure.
I don't understand what the "too much IO" objection is about. If there is no IO (now, I don't mean a MonadIO instance, but I do mean, in order to interpret the monad), it seems to me that this API is not so useful. You are forgetting about `ST`. For example, in `ErrorT SomeException ST` finalizers /do/ make sense. It's not about having IO, it is about having some sort of state(fulness).
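As an illustrative sketch (not from the thread) of the `ErrorT SomeException ST` point: a `finally` can be written purely in terms of the error transformer, with no IO anywhere. `finallyE` and `ErrST` are made-up names, and ExceptT is used as the modern spelling of ErrorT:

import Control.Exception (SomeException)
import Control.Monad.ST (ST)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Except (ExceptT, runExceptT, throwE)

-- Run the finalizer whether the guarded action returned or aborted,
-- then re-raise.  (If the finalizer itself aborts, its error wins.)
finallyE :: Monad m => ExceptT e m a -> ExceptT e m b -> ExceptT e m a
finallyE m f = do
  r <- lift (runExceptT m)
  _ <- f
  either throwE return r

type ErrST s = ExceptT SomeException (ST s)

finallyST :: ErrST s a -> ErrST s b -> ErrST s a
finallyST = finallyE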
No, you can't. MonadFinally instances must (I really should write documentation) handle /all/ possible failures, not just exceptions. The naive
finally ∷ MonadRecover e μ ⇒ μ α → μ β → μ α
finally m f = do
  a ← m `recover` \e → f >> abort e
  void $ f
  return a
wouldn't work in `MaybeT IO`, just consider `finally mzero f`.
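A small self-contained sketch of that failure mode (editor's illustration; `recoverM` and `naiveFinally` are made-up names, and `recoverM` mirrors the `MaybeT IO` instance choice described above, i.e. it only sees IO exceptions):

import Control.Exception (SomeException, catch, throwIO)
import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.Maybe (MaybeT (..))

-- recover for MaybeT IO, handling IO exceptions only.
recoverM :: MaybeT IO a -> (SomeException -> MaybeT IO a) -> MaybeT IO a
recoverM (MaybeT m) h = MaybeT (m `catch` (runMaybeT . h))

-- The naive finally from the message above, written with this recover.
naiveFinally :: MaybeT IO a -> MaybeT IO b -> MaybeT IO a
naiveFinally m f = do
  a <- m `recoverM` \e -> f >> lift (throwIO e)
  _ <- f
  return a

-- runMaybeT (naiveFinally mzero (lift (putStrLn "cleanup")))
-- prints nothing: mzero is a pure Nothing, so neither the handler nor
-- the trailing `f` ever runs, and the finalizer is skipped.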
I think that's incoherent. To draw out your MaybeT IO example to its logical conclusion, you've just created two types of zeros, only one of which interacts with 'recover' but both of which interact with 'finally'. Down this inconsistency lies madness! Really, we'd like 'recover' to handle Nothing's: and actually we can: introduce a distinguished SomeException value that corresponds to nothings, and set up abort to transform that not into an IO exception but a pure Nothing value. Then 'finally' as written works.

I see no inconsistency here. I just give implementers an opportunity to decide which failures are recoverable (with `recover`) and which are not, without sacrificing proper resource/state management. Your approach rejects unrecoverable failures completely. Back to this particular case. I implemented `MonadRecover e (MaybeT μ)` this way because that's how I usually use `MaybeT IO`: `catch` for exceptions, `mplus` for `mzero`s. BTW that is also how STM works: `catchSTM` for exceptions, `orElse` for `retry`s. Ideally we should have different ways of recovering from different kinds of failures (and some kinds should not be allowed to be "thrown" by client code) in our abstract setting too. But I don't think that's easily expressible in the type system we have today (at least without confusing type inference). Injecting failures into the exception hierarchy is too value-level for me.

[snip]

Hm, are you against splitting MonadPlus too?
What does it mean to have all of the above, but not to have a mask instance? One approach is to pretend asynchronous exceptions do not exist. As you do in your example, we can simply mask. I think this is a bit to give up, but I'll concede it. However, I don't think it's acceptable not to provide mask functionality, not mask your interpreter, and allow arbitrary IO. It's now impossible to properly implement many patterns without having subtle race conditions. In my particular case I feel no need for asynchronous exceptions as I have a concurrency primitive that is used for interrupting:
sh ← newOneShot
runAIOs s0
  [ do aioAwait sh
       info "Service shutdown requested"
  , ...
  ]
Sure. And the point here is conceded, and accounted for later.
The problem with MonadCatchIO is that it has no proper `finally`, see my `MaybeT IO` example.
Addressed above.
To contextualize this whole discussion, recall the insidious problem that *motivated* the creation of monad-control. Suppose that we've done all of the hard work and lifted all of the Control.Exception functions to our new formula (maybe we also need uninterruptibleMask in our class, but no big matter.) Now a user comes along and asks for 'alloca :: Storable a => (Ptr a -> IO b) -> IO b'. Crap! We need to redefine alloca to work for our new monad. So in comes the class Alloca. But there are a billion, billion of these functions.

I don't think that's true. There is actually a limited set (induced mainly by primops) of "difficult" functions that require new abstractions. Alloca is a pain only because it is implemented as `IO $ \s → PRIMOP_SPAGHETTI`. I don't know if the spaghetti can be twisted to look something like:
.. = mask $ \restore → do
  mbarr ← liftBase $ IO $ newAlignedPinnedByteArray# size align
  finally (...) $ do
    ...
    restore (action ptr)
It depends on the semantics of the primops involved. Fortunately, most IO control operations can be easily generalized just by changing the type signature:
import qualified Control.Concurrent.MVar as MV
takeMVar ∷ MonadBase IO μ ⇒ MVar α → μ α
takeMVar = liftBase . MV.takeMVar
putMVar ∷ MonadBase IO μ ⇒ MVar α → α → μ ()
putMVar v = liftBase . MV.putMVar v
withMVar ∷ MVar α → (α → IO β) → IO β
-- withMVar ∷ (MonadBase IO μ, MonadFinally μ, MonadMask m μ)
--          ⇒ MVar α → (α → μ β) → μ β
-- works too, /without changing the body/!
withMVar v f = mask $ \restore → do
  a ← takeMVar v
  restore (f a) `finally` putMVar v a
This `withMVar` would work as expected in IO, AIO, and transformer stacks on top of them.
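A short usage sketch (not from the thread) of how the liftBase-generalized primitives drop into a transformer stack, assuming the transformers-base package; the definitions above are repeated so the snippet stands alone:

import Control.Concurrent.MVar (MVar)
import qualified Control.Concurrent.MVar as MV
import Control.Monad.Base (MonadBase, liftBase)
import Control.Monad.Trans.State (StateT, modify)

takeMVar :: MonadBase IO m => MVar a -> m a
takeMVar = liftBase . MV.takeMVar

putMVar :: MonadBase IO m => MVar a -> a -> m ()
putMVar v = liftBase . MV.putMVar v

-- No manual lifting needed inside the stack.
bumpShared :: MVar Int -> StateT Int IO ()
bumpShared v = do
  x <- takeMVar v
  modify (+ x)
  putMVar v (x + 1)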
OK, there are several points involved here.
First, we might wonder, how many operations fundamentally are resistant to that treatment? Well, we can look at the primop list:
catch# raiseIO# maskAsyncExceptions# maskUninterruptible# unmaskAsyncExceptions# atomically# catchRetry# catchSTM# check# fork# forkOn# finalizeWeak#
So, using the method you describe, we may be able to get away with thirteen typeclasses. Ok... (Notice, by the way, that finally# is not on this list! So if /this/ was what you were thinking, I was probably thrown off by the fact that you included typeclasses for both primitive functions as well as derived ones.)
Not thirteen:

1. MonadRecover covers catch# and catchSTM#
2. MonadAbort covers raiseIO#
3. MonadMask covers maskAsyncExceptions# + maskUninterruptible# + unmaskAsyncExceptions#
4. MonadPlus covers catchRetry#
5. Some class for fork# and forkOn# (the capability type can be abstracted the same way the mask type is abstracted in MonadMask; one possible shape is sketched just below)

Lifting atomically# is simple:

atomically ∷ MonadBase IO μ ⇒ STM α → μ α
atomically = liftBase . STM.atomically

check# and finalizeWeak# cannot be fully generalized because of their semantics. Suppose we want to lift `mkWeak` to `StateT s IO` manually, without relying on some generic mechanism. I just don't see any coherent meaning of accessing/modifying state in the finalizer. I would start with a partial generalization (leaving the finalizer as `IO ()`) and see if someone comes up with a non-trivial (a trivial one would be ReaderT) monad that actually implements the fully generalized version in a meaningful way.

Regarding `finally`. I was certainly aware that it is not a primop, that's why I wrote "induced /mainly/ by primops". The generalization of `finally` is somewhat natural if you think about it. We start with IO, where the set of reasons why control can escape is fixed, then we proceed to MaybeT and a new "zero" pops up. Then we ask ourselves the question "what if we had more, possibly unrecoverable, failures/zeros?". In the end it boils down to changing the `finally` documentation from "computation to run afterward (even if an exception was raised)" to "computation to run when control escapes".
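One possible shape for the fork class of point 5 (an editor's sketch; MonadFork and its methods are hypothetical names, and a full treatment would probably have to abstract ThreadId as well):

{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE MultiParamTypeClasses  #-}

import Control.Concurrent (ThreadId)
import qualified Control.Concurrent as CC

-- The capability type is a class parameter, just as the mask type is
-- a parameter of MonadMask.
class Monad m => MonadFork cap m | m -> cap where
  fork   :: m () -> m ThreadId
  forkOn :: cap -> m () -> m ThreadId

instance MonadFork Int IO where
  fork   = CC.forkIO
  forkOn = CC.forkOn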
Second, we might wonder, how tractable is this approach? Certainly, it gives us another principled way of lifting all of the "hard" functions, assuming that all of the primops are taken care of. Of course, there are a lot of objections:
- It requires copy pasting code (and if upstream changes their implementation, so must we). I contrast this with the lifted-base method, which, while annoying and repetitive, does not involve copy-pasted code.
Notice that copy-pasting is only needed for control operations, which are clearly a minority of the functions exported by `base`. All other functions could be lifted the same way lifted-base does it, with liftBase.
- Un-transforming primop'd code undoes important performance optimizations
I think it would be wiser to invest time into improving GHC specializer/optimizer than to try to sidestep the issue by choosing poor-but-already-optimizable abstractions.
But I think there is a very important point to concede here, which is that without language support it may be impossible to implement 'generic' versions of these derived functions from the specialized ones automatically. lifted-base achieves the appearance of automatically lifting, but that's only because directly calling the original implementations is acceptable.
I think it is more about compiler/optimizer support than about /language/ support.
I hope you see that my approach is entirely different. I'm not interested in lifting the IO operations we have in `base` by some clever ad-hoc machinery, I want to generalize (where possible) their types.
And the logical conclusion of this is that, not only do you need to create a function for every function you want to generalize, you also need to steal all of the implementations. Which suggests that actually you want to be talking to the GHC team about this endeavour, since the situation is a bit less bad if base is maintaining the generalized versions as well as the specialized ones: the specialized versions can just be inlined versions of the generalized ones.
Summary: 1. Exception handling and finalizers are generic concepts that make sense in many monads and therefore should not be tied to IO.
I disagree, and submit the modification: "Exception handling and finalizers are generic concepts that make sense in many IO-based monads with alternative control flow or semantics, and therefore should not be tied to IO-based monads that must precisely implement IO control flow." Exception handling is well understood for pure code, and finalizers unnecessary.
See my note about `ErrorT SomeException ST` above. It could be IO, it could be ST, it could be something you made in your garage this weekend. As long as there is something stateful involved, finalizers do have a meaning.
2. Regular IO functions can be generalized just by prefixing them with `liftBase $`. This will make them work in any `MonadBase IO μ`.
I disagree, and submit the modification: "Regular IO functions can be lifted without respect to control flow by prefixing them with liftBase. This will make them work in any `MonadBase IO μ`." Just because I can lift a function doesn't mean it's been generalized; in particular, any more primitive functions it calls continue to be the original, and not generalized, versions.
Agreed.
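To make the lifted-but-not-generalized distinction concrete, here is a sketch (made-up name `liftedWithMVar`, assuming transformers-base): the whole bracket runs in IO, the callback must stay in IO, and the outer monad's own control flow (say, a MaybeT mzero) can never reach the takeMVar/putMVar inside it.

import Control.Concurrent.MVar (MVar)
import qualified Control.Concurrent.MVar as MV
import Control.Monad.Base (MonadBase, liftBase)

liftedWithMVar :: MonadBase IO m => MVar a -> (a -> IO b) -> m b
liftedWithMVar v f = liftBase (MV.withMVar v f)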
3. Most IO control operations can be generalized just by changing their type signatures to use MonadAbort/Recover/Finally/Mask (without changing the body; maybe with just a few `liftBase`s here and there). This will make them work at least in IO, AIO, and transformer stacks on top of them.
I agree, but submit that of the MonadAbort/Recover/Finally/Mask quartet Finally should be dropped and Abort/Recover/Mask unified into one typeclass.
I disagree. MonadFinally is important for having unrecoverable failures and the MonadAbort/MonadRecover split is exactly the same as MonadZero/MonadPlus. If you really want to merge MonadMask, I recommend

class (MonadRecover SomeException μ, Ord m, Bounded m)
      ⇒ MonadCatch m μ | μ → m where
  -- `h` /must/ be masked with `minBound ∷ m` by implementation.
  -- Notice that `recover m (mask_ . h)` won't do in general, as an
  -- asynchronous exception can arrive just before `mask_`.
  catch ∷ Exception e ⇒ μ α → (e → μ α) → μ α
  catch m h = recover m $ \e → maybe (throw e) h (fromException e)
  getMaskingState ∷ μ m
  getMaskingState = return minBound
  setMaskingState ∷ m → μ α → μ α
  setMaskingState = const id
4. IO control operations that rely on passing IO actions to a primop (like, presumably, `alloca`) should be generalized the monad-control way (which is OK, I don't see how I can lift it to AIO anyway, even if I try to do it manually). Partial generalizations like `alloca' ∷ (Storable α, MonadBase IO μ) ⇒ (Ptr α → IO β) → μ β` might also be useful.
I (surprisingly) disagree, and submit that they /can/ be generalized the copy pasting way, and if such a change is coordinated with the base teams, could be the preferred mechanism.
Yay! :)
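For reference, the partial generalization mentioned in point 4 is a one-liner (assuming transformers-base for MonadBase):

import Control.Monad.Base (MonadBase, liftBase)
import Foreign.Marshal.Alloc (alloca)
import Foreign.Ptr (Ptr)
import Foreign.Storable (Storable)

-- Only the result is lifted; the callback stays in IO.
alloca' :: (Storable a, MonadBase IO m) => (Ptr a -> IO b) -> m b
alloca' = liftBase . alloca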
Summary:
1. The only known semantics of asynchronous exceptions involves the primitives abort, recover and mask, and this semantics can be converted into one that is synchronous if we supply a no-op definition for mask and require the semantics stay the same. It seems poor for this semantics to grow to include finally or for this semantics to contract to have abort without recover, or recover without abort. But this is not a fundamental point, and while there are lots of different exception handling semantics, it's possible specialized applications could make sense with limited combinators: however, *show me the semantics.*
2. Finalizer handling is not necessary in pure code.
Hopefully I addressed both (1) and (2).
3. A way of cleaning up the IO sin bin would be to generalize appropriate primitive functions over appropriate type classes, and then copy pasting the source for all derived definitions. I submit that doing so as a third party is a bad idea. I submit that we can do this incrementally, and that it's not that bad of an idea if you can convince the GHC team it's a good idea. Exception handling might be a good place to start. (An issue to consider: what about the interaction of orthogonal features?)
I wouldn't be too optimistic about convincing GHC HQ. Even making Applicative a superclass of Monad can make Haskell98 nazis come after you in ninja suits. Regarding orthogonal features. What exactly do you have in mind?
4. (3) is the only way of getting an appropriate behavior for models of IO with weird control flow. So, I agree with you, monad-control is no good for AIO, and your essential idea is necessary. (Though, Anders claims that at some point he figured out how to fix the ContT problem for monad-peel; I should poke him again on this point.)
This is interesting.

On Tue, Jan 17, 2012 at 06:29, Mikhail Vorozhtsov <mikhail.vorozhtsov@gmail.com> wrote:
I wouldn't be too optimistic about convincing GHC HQ. Even making Applicative a superclass of Monad can make Haskell98 nazis come after you in ninja suits.
What?! The only significant complaint I've seen here is that the necessary language support for doing so without breaking more or less every Haskell program currently in existence is difficult to achieve. On the other hand, I suppose if you're sufficiently fanatical, breaking more or less every Haskell program currently in existence might be viewed as a minor issue that you think nobody has any right to be upset about.

On 01/18/2012 01:52 AM, Brandon Allbery wrote:
On Tue, Jan 17, 2012 at 06:29, Mikhail Vorozhtsov <mikhail.vorozhtsov@gmail.com> wrote:
I wouldn't be too optimistic about convincing GHC HQ. Even making Applicative a superclass of Monad can make Haskell98 nazis come after you in ninja suits.
What?! The only significant complaint I've seen here is that the necessary language support for doing so without breaking more or less every Haskell program currently in existence is difficult to achieve. This is a /big/ exaggeration. What libraries exactly are going to be broken? Bytestring/Text (and all other data-structures-related libraries for that matter)? I don't think so. Network? Wouldn't be surprised if it goes without even a single patch. (Atto)Parsec/Binary/Cereal/*-Builder or Enumerator? A few type signatures (mainly contexts) and/or instances would need to be changed. Transformers? Some instances again. MTL? Looks surprisingly good.
I think with some coordinated effort we could switch the core libraries within a week. On the client side of things I expect the change to go unnoticed by most people: by now virtually every custom monad has an Applicative instance. Personally, I wouldn't mind being hit by this hierarchy transformation, it is totally worth an hour of adjusting type contexts in my code (I'm currently maintaining ~17KLOC and I think I would need to touch only a handful of places).

On Wed, Jan 18, 2012 at 03:47, Mikhail Vorozhtsov <mikhail.vorozhtsov@gmail.com> wrote:
On 01/18/2012 01:52 AM, Brandon Allbery wrote:
On Tue, Jan 17, 2012 at 06:29, Mikhail Vorozhtsov wrote: I wouldn't be too optimistic about convincing GHC HQ. Even making Applicative a superclass of Monad can make Haskell98 nazis come after you in ninja suits. What?! The only significant complaint I've seen here is that the necessary language support for doing so without breaking more or less every Haskell program currently in existence is difficult to achieve.
This is a /big/ exaggeration. What libraries exactly are going to be broken?
One of the big reasons the Haskell Platform release got delayed for so long is a similar breaking change (making the haskell98 package incompatible with base), which left a bunch of required libraries unable to build; it took quite a while for all of them to get updated. I expect the same thing to happen with the Functor, Applicative => Monad change.

Excerpts from Mikhail Vorozhtsov's message of Tue Jan 17 06:29:12 -0500 2012:
The vehicle of implementation here is kind of important. If they are implemented as asynchronous exceptions, I can in fact still throw in this universe: I just attempt to execute the equivalent of 'undefined :: m a'. Since asynchronous exceptions can always be thrown from pure code, I can /always/ do this, no matter how you lock down the types. Indeed, I think implementing this functionality on asynchronous exceptions is a good idea, because it lets you handle nonterminating pure code nicely, and allows you to bail out even when you're not doing monadic execution. I don't like there this is going. Arguments like this destroy the whole point of having abstract interfaces. I took liftBase from you and now you are picking lock on my back door with raise#. I can deny this by hiding the constructor of the asynchronous exception I use for passing `lavel` in my implementation. But seriously. Next thing I know you will be sneaking down my chimney with `unsafePerformIO` in your hands. It is no question that the type system cannot protect us from all the tricks RTS provides, but we still can rely on conventions of use.
Personally I'm not a fan of exceptions in pure code. If something can fail it should be reflected in its type, otherwise I consider it a bug. The only scenario I'm comfortable with is using asynchronous exceptions to interrupt some number crunching.
Well, that's the kind of language we live in. The denotation of our language always permits for bottom values, and it's not a terribly far jump from there to undefined and error "foo". I don't consider the use of these facilities to be a trap door. I like to think of this another way: because Haskell actually contains pure code, we should take advantage of this fact, and unceremoniously terminate pure code when we like: while also having a nice semantics for what this means, to boot. Hiding the constructor of your exception is clever, and means this argument is nullified if your exception system (or at least bits of it: more on this later) doesn't handle all exceptions.
Hm, are you against splitting MonadPlus too?
The problem with MonadPlus is not the fact that it has mplus/mzero, but that there are in fact multiple disjoint sets of laws that instances obey. The only other point of order is that MonadZero is a useful abstraction by itself, and that's the point of debate.
You are forgetting about `ST`. For example, in `ErrorT SomeException ST` finalizers /do/ make sense. It's not about having IO, it is about having some sort of state(fulness).
Conceded. Although there are several responses:

- We only have three magical base monads: IO, ST and STM. In ST we do not have any appreciable control over traditional IO exceptions, so the discussion there is strictly limited to pure mechanisms of failure.

- Finalizing "mutable state" is a very limited use-case; unlike C++ we can't deallocate the state, unlike IO there are no external scarce resources, so the only thing you really might see is rolling back the state to some previous version, in which case you really ought not to be using ST for that purpose.
I think that's incoherent. To draw out your MaybeT IO example to its logical conclusion, you've just created two types of zeros, only one of which interacts with 'recover' but both of which interact with 'finally'. Down this inconsistency lies madness! Really, we'd like 'recover' to handle Nothing's: and actually we can: introduce a distinguished SomeException value that corresponds to nothings, and setup abort to transform that not into an IO exception but a pure Nothing value. Then 'finally' as written works. I see no inconsistency here. I just give implementers an opportunity to decide which failures are recoverable (with `recover`) and which are not, without sacrificing proper resource/state management. You approach rejects unrecoverable failures completely. Back to this particular case. I implemented `MonadRecover e (MaybeT μ)` this way because that's how I usually use `MaybeT IO`: `catch` for exceptions, `mplus` for `mzero`s. BTW that is also how STM works: `catchSTM` for exceptions, `orElse` for `retry`s. Ideally we should have different ways of recovering from different kinds of failures (and some kinds should not be allowed to be "thrown" by client code) in our abstract setting too. But I don't think that's easily expressible in the type system we have today (at least without confusing type inference). Injecting failures into exception hierarchy is too value-level for me.
You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's appropriate to claim this interface is appropriate for IO style exceptions, which are (and users expect) to always be recoverable. For example, what happens in these cases:

retry `recover` \e -> writeTVar t v
retry `finally` writeTVar t v
error "foo" `mplus` return ()

Tread carefully.
Not thirteen: 1. MonadRecover covers catch# and catchSTM# 2. MonadAbort covers raiseIO# 3. MonadMask covers maskAsyncExceptions# + maskUninterruptible# + unmaskAsyncException# 4. MonadPlus covers catchRetry# 5. Some class for fork# and forkOn# (capability type can be abstracted the same way the mask type is abstracted in MonadMask)
Lifting atomically# is simple:
atomically ∷ MonadBase IO μ ⇒ STM α → μ α atomically = liftBase . STM.atomically
check# and finalizeWeak# cannot be fully generalized because of their semantics. Suppose we want to lift `mkWeak` to `StateT s IO` manually, without relying on some generic mechanism. I just don't see any coherent meaning of accessing/modifying state in the finalizer. I would start with a partial generalization (leaving finalizer `IO ()`) and see if someone comes up with a not-trivial (a trivial one would be ReaderT) monad that actually properly implements the fully generalized version in a meaningful way.
Thanks for doing that legwork. I didn't do very much except go look at the primops list. :-)
Regarding `finally`. I was certainly aware that it is not a primop, that's why I wrote "induced /mainly/ by primops". The generalization of `finally` is somewhat natural if you think about it. We start with IO, there the set of reasons why control can escape is fixed, then we proceed to MaybeT and a new "zero" pops up. Then we ask ourselves a question "what if we had more, possibly unrecoverable, failures/zeros?". In the end it boils down to changing `finally` documentation from "computation to run afterward (even if an exception was raised)" to "computation to run when control escapes".
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise.
Notice that copy-pasting is only needed for control operations, which are clearly a minority of the functions exported by `base`. All other functions could be lifted the same way lifted-base does it, with liftBase.
I think the number is nontrivial, despite being a minority. But I haven't run the numbers.
I think it would be wiser to invest time into improving GHC specializer/optimizer than to try to sidestep the issue by choosing poor-but-already-optimizable abstractions.
Worth thinking about.
I wouldn't be too optimistic about convincing GHC HQ. Even making Applicative a superclass of Monad can make Haskell98 nazis come after you in ninja suits.
Heh, Functor/Applicative/Monad is something of a mess.
Regarding orthogonal features. What exactly do you have in mind?
Essentially, every time the GHC team throws a new feature into the IO sin bin, they have to figure out how it interacts with all of the other features currently living in the IO sin bin. This is kind of tricky. A variant of this problem is why STM is really hard to implement in traditional languages. I think we're converging on agreement. Hooray! Cheers, Edward

On 01/18/2012 02:45 AM, Edward Z. Yang wrote:

Excerpts from Mikhail Vorozhtsov's message of Tue Jan 17 06:29:12 -0500 2012:
The vehicle of implementation here is kind of important. If they are implemented as asynchronous exceptions, I can in fact still throw in this universe: I just attempt to execute the equivalent of 'undefined :: m a'. Since asynchronous exceptions can always be thrown from pure code, I can /always/ do this, no matter how you lock down the types. Indeed, I think implementing this functionality on asynchronous exceptions is a good idea, because it lets you handle nonterminating pure code nicely, and allows you to bail out even when you're not doing monadic execution. I don't like there this is going. Arguments like this destroy the whole point of having abstract interfaces. I took liftBase from you and now you are picking lock on my back door with raise#. I can deny this by hiding the constructor of the asynchronous exception I use for passing `lavel` in my implementation. But seriously. Next thing I know you will be sneaking down my chimney with `unsafePerformIO` in your hands. It is no question that the type system cannot protect us from all the tricks RTS provides, but we still can rely on conventions of use.
Personally I'm not a fan of exceptions in pure code. If something can fail it should be reflected in its type, otherwise I consider it a bug. The only scenario I'm comfortable with is using asynchronous exceptions to interrupt some number crunching.
Well, that's the kind of language we live in. The denotation of our language always permits for bottom values, and it's not a terribly far jump from there to undefined and error "foo". I don't consider the use of these facilities to be a trap door.

Non-termination is a bug (when termination is expected) and I wish that `undefined` and `error` would be interpreted as bugs too (when a value is expected). Putting asynchronous exceptions aside, in some situations it might be useful to recover from bugs, but they should not be treated like /errors/, something that is expected to happen. At least I like to think this way when `error`s meet monads. For example, what is the meaning of `error` in this piece:

nasty ∷ Monad μ ⇒ μ ()
nasty = error "FOO" >> return ()

runIdentity nasty ~> ()  -- error is not necessarily a left zero!
runIdentity $ runMaybeT nasty ~> error

It's just slipping through abstraction and doing what it wants.
Hm, are you against splitting MonadPlus too?
The problem with MonadPlus is not the fact that it has mplus/mzero, but that there are in fact multiple disjoint sets of laws that instances obey. The only other point of order is that MonadZero is a useful abstraction by itself, and that's the point of debate. What is the "usefulness" here? Is being precise not enough?
contract ∷ MonadZero μ ⇒ (α → Bool) → (β → Bool) → (α → μ β) → α → μ β
contract pre post body x = do
  unless (pre x) mzero
  y ← body x
  unless (post y) mzero
  return y

Why would I drag `mplus` here? `contract` is useful regardless of whether you have a choice operation or not.
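A usage sketch (editor's illustration): with Maybe, violated pre/post conditions just produce Nothing, and no choice operation is ever needed. MonadPlus stands in for MonadZero here only so the snippet compiles with base as it is:

import Control.Monad (MonadPlus, mzero, unless)

contract :: MonadPlus m => (a -> Bool) -> (b -> Bool) -> (a -> m b) -> a -> m b
contract pre post body x = do
  unless (pre x) mzero
  y <- body x
  unless (post y) mzero
  return y

safeSqrt :: Double -> Maybe Double
safeSqrt = contract (>= 0) (>= 0) (return . sqrt)
-- safeSqrt 4    ~> Just 2.0
-- safeSqrt (-1) ~> Nothing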
You are forgetting about `ST`. For example, in `ErrorT SomeException ST` finalizers /do/ make sense. It's not about having IO, it is about having some sort of state(fulness).
Conceded. Although there are several responses:
- We only have three magical base monads: IO, ST and STM. In ST we do not have any appreciable control over traditional IO exceptions, so the discussion there is strictly limited to pure mechanisms of failure.
Why is this distinction necessary? Why are you trying to tie exception handling idioms to the particular implementation in RTS?
- Finalizing "mutable state" is a very limited use-case; unlike C++ we can't deallocate the state, unlike IO there are no external scarce resources, so the only thing you really might see is rolling back the state to some previous version, in which case you really ought not to be using ST for that purpose.
Maybe. But you can. And who said that we should consider only IO, ST and STM? Maybe it is a mysterious stateful monad X keeping tabs on god-knows-what. Also, even though we do not deallocate memory directly, having a reference to some gigantic data structure by mistake could hurt too.
I think that's incoherent. To draw out your MaybeT IO example to its logical conclusion, you've just created two types of zeros, only one of which interacts with 'recover' but both of which interact with 'finally'. Down this inconsistency lies madness! Really, we'd like 'recover' to handle Nothing's: and actually we can: introduce a distinguished SomeException value that corresponds to nothings, and setup abort to transform that not into an IO exception but a pure Nothing value. Then 'finally' as written works. I see no inconsistency here. I just give implementers an opportunity to decide which failures are recoverable (with `recover`) and which are not, without sacrificing proper resource/state management. You approach rejects unrecoverable failures completely. Back to this particular case. I implemented `MonadRecover e (MaybeT μ)` this way because that's how I usually use `MaybeT IO`: `catch` for exceptions, `mplus` for `mzero`s. BTW that is also how STM works: `catchSTM` for exceptions, `orElse` for `retry`s. Ideally we should have different ways of recovering from different kinds of failures (and some kinds should not be allowed to be "thrown" by client code) in our abstract setting too. But I don't think that's easily expressible in the type system we have today (at least without confusing type inference). Injecting failures into exception hierarchy is too value-level for me.
You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's appropriate to claim this interface is appropriate for IO style exceptions, which are (and users expect) to always be recoverable.
Why exactly not? I think that everything useful written with this assumption in mind can be rewritten to use `finally`, just like I did with `withMVar` (the version in `base` actually uses `onException`).
For example, what happens in these cases:
retry `recover` \e -> writeTVar t v
retry `finally` writeTVar t v

instance MonadFinally STM where
  finally' m f = do
    r ← catchSTM (Right <$> m) (return . Left) `mplus` (f Nothing >> retry)
    either (\e → f Nothing >> throwSTM e) (\a → (a, ) <$> f (Just a)) r
~> writeTVar t v >> retry
~> retry

retry+finally may actually not be quite as useless as it seems. The finalizer could contain IORef manipulations (collecting some statistics, implementing custom retry strategies, etc).
error "foo" `mplus` return ()
It depends. In STM and Maybe it would be `error "foo"`. In some right-biased monad it could be `return ()`. But that's OK, because `mplus` is required to have `mzero` as its zero, everything (other kinds of failures; and `error "foo"` is not necessarily a failure!) on top of that is implementation dependent. The same goes for `recover`: everything on top of handling `abort` is implementation dependent. [snip]
Regarding `finally`. I was certainly aware that it is not a primop, that's why I wrote "induced /mainly/ by primops". The generalization of `finally` is somewhat natural if you think about it. We start with IO, there the set of reasons why control can escape is fixed, then we proceed to MaybeT and a new "zero" pops up. Then we ask ourselves a question "what if we had more, possibly unrecoverable, failures/zeros?". In the end it boils down to changing `finally` documentation from "computation to run afterward (even if an exception was raised)" to "computation to run when control escapes".
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise. It is actually not about creating a new typeclass (what methods would it have anyway?), it is about relaxing rules for `recover`.
`recover` is interdefinable with `eval`:

eval m = recover (Right <$> m) (return . Left)
recover m f = eval m >>= either (\e → f e >> abort e) return

We would probably both agree that

eval (abort e) = return (Left e)

and (if we forget about asynchronous exceptions)

eval (return a) = return (Right a)
eval (m >>= f) = eval m >>= either (return . Left) (eval . f)

What about `eval (error "FOO")`? Sometimes it is `error "FOO"`, sometimes `return $ Left (ErrorCall "FOO")`, sometimes it is pure nonsense (the interpreter goes bananas). What about `eval mzero`? My answer is "I want precision, so I will leave it as implementation dependent". Otherwise I would be forced to use `catchSTM`/`Control.Exception.catch` instead of the generic `catch` when I work with STM/`MaybeT IO`. Writing

catches (...) [Handler \MZeroException -> mzero, Handler \(e ∷ SomeException) -> ...]

is just ugly.

Excerpts from Mikhail Vorozhtsov's message of Wed Jan 18 08:47:37 -0500 2012:
Well, that's the kind of language we live in. The denotation of our language always permits for bottom values, and it's not a terribly far jump from there to undefined and error "foo". I don't consider the use of these facilities to be a trap door. Non-termination is a bug (when termination is expected) and I wish that `undefined` and `error` would be interpreted as bugs too (when a value is expected). Putting asynchronous exceptions aside, in some situations it might be useful to recover from bugs, but they should not be treated like /errors/, something that is expected to happen. At least I like to think this way when `error`s meet monads. For example, what is the meaning of `error` in this piece:
nasty ∷ Monad μ ⇒ μ ()
nasty = error "FOO" >> return ()
runIdentity nasty ~> ()  -- error is not necessarily a left zero!
runIdentity $ runMaybeT nasty ~> error
It's just slipping through abstraction and doing what it wants.
I can't argue with "error should be used sparingly, and usually in cases where there is an indication of developer error, rather than user error." It's good, sound advice.

But I also believe that you can't use this as justification to stick your head in the sand, and pretend bottoms don't exist (regardless of whether or not we're talking about asynchronous exceptions.) The reason is that understanding how code behaves in the presence of bottoms tells you some very important information about its strictness/laziness, and this information is very important for managing the time and space usage of your code.

The identity monad for which error "FOO" is a left zero is a legitimate monad: it's the strict identity monad (also known as the 'Eval' monad.) Treatment of bottom is a part of your abstraction!

(I previously claimed that we could always use undefined :: m a as a left zero, I now stand corrected: this statement only holds for 'strict' monads, a moniker which describes IO, STM and ST, and any monads based on them. Indeed, I'll stick my neck out and claim any monad which can witness bottoms must be strict.)
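A minimal sketch of that strict identity monad (editor's illustration; the real Eval lives in the parallel strategies library, this one is just enough to show the point about bottoms):

newtype Eval a = Eval { runEval :: a }

instance Functor Eval where
  fmap f (Eval a) = Eval (f a)

instance Applicative Eval where
  pure = Eval
  Eval f <*> Eval a = Eval (f a)

instance Monad Eval where
  Eval a >>= f = a `seq` f a   -- force the value before continuing

-- runEval (error "FOO" >> return ())  ~> error "FOO"   (a left zero)
-- in the lazy Identity monad the same program yields ()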
What is the "usefulness" here? Is being precise not enough?
contract ∷ MonadZero μ ⇒ (α → Bool) → (β → Bool) → (α → μ β) → α → μ β
contract pre post body x = do
  unless (pre x) mzero
  y ← body x
  unless (post y) mzero
  return y
Why would I drag `mplus` here? `contract` is useful regardless of whether you have a choice operation or not.
Point conceded. (Though, I probably would have used 'error' for 'contract', since violation of pre/post-conditions is almost certainly due to developer error.)
- We only have three magical base monads: IO, ST and STM. In ST we do not have any appreciable control over traditional IO exceptions, so the discussion there is strictly limited to pure mechanisms of failure.
Why is this distinction necessary? Why are you trying to tie exception handling idioms to the particular implementation in RTS?
The distinction I'm trying to make is between code that is pure (and cannot witness bottoms), and code that is impure, and *can* witness bottoms. It is true that I need language/RTS support to do the latter, but I'm in no way tying myself to a particular implementation of an RTS: the semantics are independent (and indeed are implemented in all of the other Haskell implementations.)
- Finalizing "mutable state" is a very limited use-case; unlike C++ we can't deallocate the state, unlike IO there are no external scarce resources, so the only thing you really might see is rolling back the state to some previous version, in which case you really ought not to be using ST for that purpose.
Maybe. But you can. And who said that we should consider only IO, ST and STM? Maybe it is a mysterious stateful monad X keeping tabs on god-knows-what. Also, even though we do not deallocate memory directly, having a reference to some gigantic data structure by mistake could hurt too.
Claim: such a mysterious monad would have to be backed by IO/ST. (In the case of a pure State monad, once we exit the monad all of that gets garbage collected.)
You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's appropriate to claim this interface is appropriate for IO style exceptions, which are (and users expect) to always be recoverable. Why exactly not? I think that everything useful written with this assumption in mind can be rewritten to use `finally`, just like I did with `withMVar` (the version in `base` actually uses `onException`).
I think the argument here is similar to your argument: saying that all IO exceptions are recoverable is more precise. An interface that specifies recoverable exceptions is more precise than an interface that specifies recoverable *and* unrecoverable exceptions.
For example, what happens in these cases:
retry `recover` \e -> writeTVar t v retry retry `finally` writeTVar t v instance MonadFinally STM where finally' m f = do r ← catchSTM (Right <$> m) (return . Left) `mplus` (f Nothing >> retry) either (\e → f Nothing >> throwSTM e) (\a → (a, ) <$> f (Just a)) r
~> writeTVar t v >> retry ~> retry
retry+finally may actually not be quite as useless as it seems. The finalizer could contain IORef manipulations (collecting some statistics, implementing custom retry strategies, etc).
error "foo" `mplus` return ()
It depends. In STM and Maybe it would be `error "foo"`. In some right-biased monad it could be `return ()`. But that's OK, because `mplus` is required to have `mzero` as its zero, everything (other kinds of failures; and `error "foo"` is not necessarily a failure!) on top of that is implementation dependent. The same goes for `recover`: everything on top of handling `abort` is implementation dependent.
Playing along with the recoverable/unrecoverable idea for a bit, I think I agree with your assessments for the first two cases (the second is a bit weird but I agree that it could be useful.) But I'm very worried about your refusal to give a law for the third case. In some ways, this is exactly how we got into the MonadPlus mess in the first place.
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise. It is actually not about creating a new typeclass (what methods would it have anyway?), it is about relaxing rules for `recover`.
MonadOr and MonadPlus have functions with the same type signatures, but they are named differently to avoid confusion. I suggest something like that would be useful here.
`recover` is interdefinable with `eval`:
eval m = recover (Right <$> m) (return . Left)
recover m f = eval m >>= either (\e → f e >> abort e) return
We would probably both agree that
eval (abort e) = return (Left e)
and (if we forget about asynchronous exceptions)
eval (return a) = return (Right a)
eval (m >>= f) = eval m >>= either (return . Left) (eval . f)
What about `eval (error "FOO")`? Sometimes it is `error "FOO"`, sometimes `return $ Left (ErrorCall "FOO")`, sometimes it is a pure nonsense (interpreter goes bananas).
What about `eval mzero`? My answer is "I want precision, so I will leave it as implementation dependent". Otherwise I would be forced to use `catchSTM`/`Control.Exception.catch` instead of the generic `catch` when I work with STM/`MaybeT IO`. Writing
I think my previous comments apply here.
catches (...) [Handler \MZeroException -> mzero, Handler \(e ∷ SomeException) -> ...]
is just ugly.
I agree that this is ugly. (But, I admit, it does have a sort of theoretical unity about it.) Edward

On 01/18/2012 10:43 PM, Edward Z. Yang wrote:

Excerpts from Mikhail Vorozhtsov's message of Wed Jan 18 08:47:37 -0500 2012:
Well, that's the kind of language we live in. The denotation of our language always permits for bottom values, and it's not a terribly far jump from there to undefined and error "foo". I don't consider the use of these facilities to be a trap door. Non-termination is a bug (when termination is expected) and I wish that `undefined` and `error` would be interpreted as bugs too (when a value is expected). Putting asynchronous exceptions aside, in some situations it might be useful to recover from bugs, but they should not be treated like /errors/, something that is expected to happen. At least I like to think this way when `error`s meet monads. For example, what is the meaning of `error` in this piece:
nasty ∷ Monad μ ⇒ μ () nasty = error "FOO">> return ()
runIdentity nasty ~> () -- error is not necessarily a left zero! runIdentity $ runMaybeT nasty ~> error
It's just slipping through abstraction and doing what it wants.
I can't argue with "error should be used sparingly, and usually in cases where there is an indication of developer error, rather than user error." It's good, sound advice.
But I also believe that you can't use this as justification to stick your head in the sand, and pretend bottoms don't exist (regardless of whether or not we're talking about asynchronous exceptions.) The reason is that understanding how code behaves in the presence of bottoms tells you some very important information about its strictness/laziness, and this information is very important for managing the time and space usage of your code.

I totally agree with you. My point is that things like `evaluate` and `try undefined = return (Left (ErrorCall "Prelude.undefined"))` are magic and should not be taken for granted. Bottoms are everywhere, but the ability to distinguish them from normal values is special. We could have a separate abstraction for this ability:

class MonadAbort SomeException μ ⇒ MonadUnbottom μ where
  -- evaluate a = abort (toException e), if WHNF(a) = throw e
  --              return WHNF(a), otherwise
  -- join (evaluate m) = m, ensures that `undefined ∷ μ α = abort ...`
  evaluate ∷ α → μ α

or something like that.
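For IO specifically, base's Control.Exception.evaluate already behaves the way the comments above describe; a tiny sketch of the two sides of the law:

import Control.Exception (SomeException, evaluate, try)

demoEvaluate :: IO (Either SomeException ())
demoEvaluate = try (evaluate (error "FOO"))
-- ~> Left e, where e wraps ErrorCall "FOO": the bottom became an abort

demoNoEvaluate :: IO (Either SomeException ())
demoNoEvaluate = try (return (error "FOO"))
-- ~> Right <bottom>: without evaluate, the bottom slips through unnoticed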
The identity monad for which error "FOO" is a left zero is a legitimate monad: it's the strict identity monad (also known as the 'Eval' monad.) Treatment of bottom is a part of your abstraction!
(I previously claimed that we could always use undefined :: m a as a left zero, I now stand corrected: this statement only holds for 'strict' monads, a moniker which describes IO, STM and ST, and any monads based on them. Indeed, I'll stick my neck out and claim any monad which can witness bottoms must be strict.)
Bottoms may be zeros in strict monads, but they are not necessarily recoverable. `runX (recover (True <$ undefined) (const $ return False))` may be equivalent to `undefined`, not to `runX (return False)`. See my example of such "IO based" X below. I want to have options. [snip]
- We only have three magical base monads: IO, ST and STM. In ST we do not have any appreciable control over traditional IO exceptions, so the discussion there is strictly limited to pure mechanisms of failure.
Why is this distinction necessary? Why are you trying to tie exception handling idioms to the particular implementation in RTS?
The distinction I'm trying to make is between code that is pure (and cannot witness bottoms), and code that is impure, and *can* witness bottoms. It is true that I need language/RTS support to do the latter, but I'm in no way tying myself to a particular implementation of an RTS: the semantics are independent (and indeed are implemented in all of the other Haskell implementations.)
- Finalizing "mutable state" is a very limited use-case; unlike C++ we can't deallocate the state, unlike IO there are no external scarce resources, so the only thing you really might see is rolling back the state to some previous version, in which case you really ought not to be using ST for that purpose.
Maybe. But you can. And who said that we should consider only IO, ST and STM? Maybe it is a mysterious stateful monad X keeping tabs on god-knows-what. Also, even though we do not deallocate memory directly, having a reference to some gigantic data structure by mistake could hurt too.
Claim: such a mysterious monad would have to be backed by IO/ST. (In the case of a pure State monad, once we exit the monad all of that gets garbage collected.)

Let's consider the following X (using the `operational` library):

data Command α where
  -- Note that we do not employ exceptions here. If command stream
  -- transport fails, the interpreter is supposed to start the
  -- appropriate emergency procedures (or switch to a backup channel).
  -- Sending more commands wouldn't help anyway.
  AdjustControlRods ∷ Height → Command Bool
  AdjustCoolantFlow ∷ ...
  AdjustSecondaryCoolantFlow ∷ ...
  RingTheAlarm ∷ ...
  -- We could even add an unrecoverable (/in the Controller monad/)
  -- error and the corresponding "finally" command.
  ...
  ReadCoolantFlow ∷ ...
  ...

type Controller = Program Command

-- Run a simulation
simulate ∷ PowerPlantState → Controller α → (α, PowerPlantState)
-- Run for real
run ∷ Controller α → IO α

type X = ErrorT SomeException Controller

So the effects here are decoupled from control operations. Would you still say that finalizers are useless here because exception handling is implemented by pure means?

You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's appropriate to claim this interface is appropriate for IO style exceptions, which are (and users expect) to always be recoverable. Why exactly not? I think that everything useful written with this assumption in mind can be rewritten to use `finally`, just like I did with `withMVar` (the version in `base` actually uses `onException`).

I think the argument here is similar to your argument: saying that all IO exceptions are recoverable is more precise. An interface that specifies recoverable exceptions is more precise than an interface that specifies recoverable *and* unrecoverable exceptions.

There is a crucial difference here. In the MonadZero case, being more precise means having a smaller language (only `mzero`, no `mplus`) and fewer laws. Which means that the code you write is more general (applicable to a greater number of monads). But in the recoverable/unrecoverable case being more precise means having /more/ laws (all zeros vs only some zeros are recoverable). Therefore the code you write is less general (applicable to a lesser number of monads). Having no unrecoverable errors is a special case of maybe having some. I think that in general we should prefer a more widely applicable API, going into specifics only when we actually need them.
For example, what happens in these cases:
retry `recover` \e -> writeTVar t v retry retry `finally` writeTVar t v instance MonadFinally STM where finally' m f = do r ← catchSTM (Right<$> m) (return . Left) `mplus` (f Nothing>> retry) either (\e → f Nothing>> throwSTM e) (\a → (a, )<$> f (Just a)) r
~> writeTVar t v>> retry ~> retry
retry+finally may actually not be quite as useless as it seems. The finalizer could contain IORef manipulations (collecting some statistics, implementing custom retry strategies, etc).
error "foo" `mplus` return ()
It depends. In STM and Maybe it would be `error "foo"`. In some right-biased monad it could be `return ()`. But that's OK, because `mplus` is required to have `mzero` as its zero, everything (other kinds of failures; and `error "foo"` is not necessarily a failure!) on top of that is implementation dependent. The same goes for `recover`: everything on top of handling `abort` is implementation dependent.
Playing along with the recoverable/unrecoverable idea for a bit, I think I agree with your assessments for the first two cases (the second is a bit weird but I agree that it could be useful.) But I'm very worried about your refusal to give a law for the third case. In some ways, this is exactly how we got into the MonadPlus mess in the first place. I'm not sure I understand. Neither of MonadOr/MonadPlus has laws about `error`, so the behavior is implementation dependent for both `mplus` and `morelse`.
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise. It is actually not about creating a new typeclass (what methods would it have anyway?), it is about relaxing rules for `recover`.
MonadOr and MonadPlus have functions with the same type signatures, but they are named differently to avoid confusion. I suggest something like that would be useful here. The problem with MonadPlus and MonadOr is that their respective laws are incomparable. As I already mentioned, "no unrecoverable errors" is a special case of "maybe some unrecoverable errors", so we have a nice inclusion relation. We could have a "marker" (no new methods, just laws) typeclass for monads with no unrecoverable errors. But I suspect it would be rarely used in the presence of `finally`.
`recover` is interdefinable with `eval`:
eval m = recover (Right<$> m) (return . Left) recover m f = eval m>>= either (\e → f e>> abort e) return
We would probably both agree that
eval (abort e) = return (Left e)
and (if we forget about asynchronous exceptions)
eval (return a) = return (Right a) eval (m>>= f) = eval m>>= either (return . Left) (eval . f)
What about `eval (error "FOO")`? Sometimes it is `error "FOO"`, sometimes `return $ Left (ErrorCall "FOO")`, sometimes it is a pure nonsense (interpreter goes bananas).
What about `eval mzero`? My answer is "I want precision, so I will leave it as implementation dependent". Otherwise I would be forced to use `catchSTM`/`Control.Exception.catch` instead of the generic `catch` when I work with STM/`MaybeT IO`. Writing
I think my previous comments apply here.
catches (...) [Handler \MZeroException -> mzero, Handler \(e ∷ SomeException) -> ...]
is just ugly.
I agree that this is ugly. (But, I admit, it does have a sort of theoretical unity about it.)
So, to sum things up. I want a generic (works with weird interpreters too), modular (several typeclasses) API for error handling. You want to squash all the typeclasses into one, staying as close to IO exceptions as possible and merging MonadPlus in (where applicable). Am I correct? Hopefully I showed that there are some useful monads that do not fit into this "one class to rule them all" approach.

Excerpts from Mikhail Vorozhtsov's message of Sat Jan 21 09:25:07 -0500 2012:
But I also believe that you can't use this as justification to stick your head in the sand, and pretend bottoms don't exist (regardless of whether or not we'rd talking about asynchronous exceptions.) The reason is that understanding how code behaves in the presence of bottoms tells you some very important information about its strictness/laziness, and this information is very important for managing the time and space usage of your code. I totally agree with you. My point is that things like `evaluate` and `try undefined = return (Left (ErrorCall "Prelude.undefined"))` are magic and should not be taken for granted. Bottoms are everywhere, but the ability to distinguish them from normal values is special. We could have a separate abstraction for this ability:
class MonadAbort SomeException μ ⇒ MonadUnbottom μ where
  -- evaluate a = abort (toException e), if WHNF(a) = throw e
  --              return WHNF(a), otherwise
  -- join (evaluate m) = m, ensures that `undefined ∷ μ α = abort ...`
  evaluate ∷ α → μ α
or something like that.
Sure, but note that evaluate for IO is implemented with seq# under the hood, so as long as you actually get ordering in your monad it's fairly straightforward to implement evaluate. (Remember that the ability to /catch/ an error thrown by evaluate is separate from the ability to /evaluate/ a thunk which might throw an error.)
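A sketch of that observation (editor's illustration, hypothetical names): ordering alone gives you the forcing half of evaluate in any monad, while turning the bottom into a catchable abort still needs the base monad's machinery.

import Control.Exception (SomeException, try)
import qualified Control.Exception as E

-- Forces the argument when the action is run, but a bottom still
-- escapes as a bottom rather than as an abort.
evaluateSeq :: Monad m => a -> m a
evaluateSeq a = a `seq` return a

-- The IO version, where seq# (via base's evaluate) converts the bottom
-- into something try can observe.
evaluateIO :: a -> IO (Either SomeException a)
evaluateIO a = try (E.evaluate a)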
The identity monad for which error "FOO" is a left zero is a legitimate monad: it's the strict identity monad (also known as the 'Eval' monad.) Treatment of bottom is a part of your abstraction!
(I previously claimed that we could always use undefined :: m a as a left zero, I now stand corrected: this statement only holds for 'strict' monads, a moniker which describes IO, STM and ST, and any monads based on them. Indeed, I'll stick my neck out and claim any monad which can witness bottoms must be strict.) Bottoms may be zeros in strict monads, but they are not necessarily recoverable. `runX (recover (True <$ undefined) (const $ return False))` may be equivalent to `undefined`, not to `runX (return False)`. See my example of such "IO based" X below. I want to have options.
I think this touches on a key disagreement, which is that I think that in IO-like monads you need to be able to recover from bottoms. See below.
Let's consider the following X (using the `operational` library):
data Command α where
  -- Note that we do not employ exceptions here. If command stream
  -- transport fails, the interpreter is supposed to start the
  -- appropriate emergency procedures (or switch to a backup channel).
  -- Sending more commands wouldn't help anyway.
  AdjustControlRods ∷ Height → Command Bool
  AdjustCoolantFlow ∷ ...
  AdjustSecondaryCoolantFlow ∷ ...
  RingTheAlarm ∷ ...
  -- We could even add an unrecoverable (/in the Controller monad/)
  -- error and the corresponding "finally" command.
  ...
  ReadCoolantFlow ∷ ...
  ...
type Controller = Program Command
-- Run a simulation
simulate ∷ PowerPlantState → Controller α → (α, PowerPlantState)
-- Run for real
run ∷ Controller α → IO α
type X = ErrorT SomeException Controller
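To make the decoupling concrete, here is a rough sketch (mine; the `PowerPlantState` fields and the result types of the commands are invented, since the original elides them) of how `simulate` could be written with the `operational` package's `view`:

{-# LANGUAGE GADTs #-}

import Control.Monad.Operational

type Height = Int

data PowerPlantState = PowerPlantState
  { rodHeight   :: Height
  , coolantFlow :: Double
  }

data Command a where
  AdjustControlRods :: Height -> Command Bool
  ReadCoolantFlow   :: Command Double

type Controller = Program Command

-- A pure interpreter: pattern match on the program view and thread the state.
simulate :: PowerPlantState -> Controller a -> (a, PowerPlantState)
simulate s m = case view m of
  Return a                   -> (a, s)
  AdjustControlRods h :>>= k -> simulate (s { rodHeight = h }) (k True)
  ReadCoolantFlow     :>>= k -> simulate s (k (coolantFlow s))

An IO-backed `run` would do the same case analysis but execute each command against the real hardware instead of a state value.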
So the effects here are decoupled from control operations. Would you still say that finalizers are useless here because exception handling is implemented by pure means?
I think your simulation is incomplete. Let's make this concrete: suppose I'm running one of these programs, and I type ^C (because I want to stop the program and do something else.) In normal 'run' operation, I would expect the program to run some cleanup operations and then exit. But there's no way for the simulation to do that! We've lost something here.
You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's right to claim this interface is appropriate for IO-style exceptions, which are (and which users expect to be) always recoverable. Why exactly not? I think that everything useful written with this assumption in mind can be rewritten to use `finally`, just like I did with `withMVar` (the version in `base` actually uses `onException`).
I think the argument here is similar to your argument: saying that all IO exceptions are recoverable is more precise. An interface that specifies recoverable exceptions is more precise than an interface that specifies recoverable *and* unrecoverable exceptions. There is a crucial difference here. In the MonadZero case, being more precise means having a smaller language (only `mzero`, no `mplus`) and fewer laws, which means that the code you write is more general (applicable to a greater number of monads). But in the recoverable/unrecoverable case, being more precise means having /more/ laws (all zeros vs only some zeros are recoverable). Therefore the code you write is less general (applicable to fewer monads). Having no unrecoverable errors is a special case of maybe having some.
Stepping back for a moment, I think the conversation here would be helped if we dropped loaded terms like "general" and "precise" and talked about concrete properties:

- A typeclass has more/less laws (equivalently, the typeclass constrains what else an object can do, outside of an instance),
- A typeclass requires an instance to support more/less operations,
- A typeclass can be implemented for more/less objects

One important point is that "general" is not necessarily "good". For example, imagine I have a monad typeclass that has the "referential transparency" law (why are you using a monad?! Well, never mind that for now.) Obviously, the IO monad cannot validly be an instance of this typeclass. But despite only admitting instances for a subset of monads, being "less general", I think most people who've bought into Haskell agree that referentially transparent code is good code! This is the essential tension of generality and specificity: if it's too general, "anything goes", but if it's too specific, it lacks elegance.

So, there is a definitive and tangible difference between "all bottoms are recoverable" and "some bottoms are recoverable." The former corresponds to an extra law along the lines of "I can always catch exceptions." This reduces the number of objects the typeclass can be implemented for (or, if you may, it reduces the number of admissible implementations for the typeclass), but I would like to defend this as good, much like referential transparency is a good restriction. But the inclusion relation goes the wrong direction. Perhaps this rephrasing of your last sentence makes this clear:
Having no side effects is a special case of maybe having some.
:-)
For example, what happens in these cases:
retry `recover` \e -> writeTVar t v
retry
retry `finally` writeTVar t v

instance MonadFinally STM where
  finally' m f = do
    r ← catchSTM (Right <$> m) (return . Left) `mplus` (f Nothing >> retry)
    either (\e → f Nothing >> throwSTM e) (\a → (a, ) <$> f (Just a)) r
~> writeTVar t v >> retry
~> retry
retry+finally may actually not be quite as useless as it seems. The finalizer could contain IORef manipulations (collecting some statistics, implementing custom retry strategies, etc).
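As a concrete illustration of the "collecting some statistics" remark (my sketch, not from the thread): IO performed via `unsafeIOToSTM` is not undone when the transaction retries, so it can count how often a retry path is reached, which is exactly the kind of bookkeeping such a finalizer could do.

import Control.Concurrent.STM
import GHC.Conc (unsafeIOToSTM)
import Data.IORef

-- Block until the TVar holds True, counting every attempt that had to retry.
-- The IORef update is non-transactional, so it survives the rollback done by
-- `retry` (and may therefore overcount if the transaction is re-validated).
awaitTrue :: IORef Int -> TVar Bool -> STM ()
awaitTrue stats tv = do
  b <- readTVar tv
  if b
    then return ()
    else do
      unsafeIOToSTM (modifyIORef' stats (+ 1))
      retry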
error "foo" `mplus` return ()
It depends. In STM and Maybe it would be `error "foo"`. In some right-biased monad it could be `return ()`. But that's OK, because `mplus` is required to have `mzero` as its zero, everything (other kinds of failures; and `error "foo"` is not necessarily a failure!) on top of that is implementation dependent. The same goes for `recover`: everything on top of handling `abort` is implementation dependent.
Playing along with the recoverable/unrecoverable idea for a bit, I think I agree with your assessments for the first two cases (the second is a bit weird but I agree that it could be useful.) But I'm very worried about your refusal to give a law for the third case. In some ways, this is exactly how we got into the MonadPlus mess in the first place. I'm not sure I understand. Neither MonadOr nor MonadPlus has laws about `error`, so the behavior is implementation dependent for both `mplus` and `morelse`.
Well, in that case, recover/finally are being awfully nosy sticking their laws into non-bottom zeros. :^)
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise. It is actually not about creating a new typeclass (what methods would it have anyway?), it is about relaxing rules for `recover`.
MonadOr and MonadPlus have functions with the same type signatures, but they are named differently to avoid confusion. I suggest something like that would be useful here. The problem with MonadPlus and MonadOr is that their respective laws are incomparable. As I already mentioned, "no unrecoverable errors" is a special case of "maybe some unrecoverable errors", so we have a nice inclusion relation. We could have a "marker" (no new methods, just laws) typeclass for monads with no unrecoverable errors. But I suspect it would be rarely used in the presence of `finally`.
I disagreed.
So, to sum things up. I want a generic (works with weird interpreters too), modular (several typeclasses) API for error handling. You want to squash all the typeclasses into one, staying as close to IO exceptions as possible and merging MonadPlus in (where applicable). Am I correct? Hopefully I showed that there are some useful monads that do not fit into this "one class to rule them all" approach.
That's close, although it misses some subtleties I developed in this message.

I want to squash all the typeclasses into one, staying as close to IO exceptions as possible. This is because bottom is special, and I think it's worth giving a typeclass for handling it specially. Let's call this typeclass MonadException.

MonadPlus is a fine typeclass and can be left as is. I don't think MonadException should interact with MonadPlus zeros. In fact, I don't even think IO should really be MonadPlus.

I also think that unrecoverable/recoverable exceptions is a legitimate idea. I think it could get its own typeclass, let's call it MonadUnrecoverableException. I don't think any MonadException is automatically a MonadUnrecoverableException, by appealing to the laws of MonadException.

I make no claim about whether or not a modular API would make sense for the unrecoverable/recoverable exception idea. Maybe it does, I haven't thought hard enough about it. However, I definitely object to such an API being used in a generic fashion, for ordinary IO as well.

Cheers, Edward

On 01/22/2012 02:47 AM, Edward Z. Yang wrote:
Excerpts from Mikhail Vorozhtsov's message of Sat Jan 21 09:25:07 -0500 2012:
But I also believe that you can't use this as justification to stick your head in the sand, and pretend bottoms don't exist (regardless of whether or not we're talking about asynchronous exceptions.) The reason is that understanding how code behaves in the presence of bottoms tells you some very important information about its strictness/laziness, and this information is very important for managing the time and space usage of your code. I totally agree with you. My point is that things like `evaluate` and `try undefined = return (Left (ErrorCall "Prelude.undefined"))` are magic and should not be taken for granted. Bottoms are everywhere, but the ability to distinguish them from normal values is special. We could have a separate abstraction for this ability:
class MonadAbort SomeException μ ⇒ MonadUnbottom μ where
  -- evaluate a = abort (toException e), if WHNF(a) = throw e
  --              return WHNF(a),        otherwise
  -- join (evaluate m) = m, ensures that `undefined ∷ μ α = abort ...`
  evaluate ∷ α → μ α
or something like that.
Sure, but note that evaluate for IO is implemented with seq# under the hood, so as long as you actually get ordering in your monad it's fairly straightforward to implement evaluate. (Remember that the ability to /catch/ an error thrown by evaluate is separate from the ability to /evaluate/ a thunk which might throw an error.) Yes, of course. The purpose of MonadUnbottom is to guarantee that `Control.Exception.throw e ∷ μ α = abort (toException e)`. The choice of a class method is somewhat arbitrary here (one could go with `α → μ (Either SomeException α)` or with no methods at all).
The identity monad for which error "FOO" is a left zero is a legitimate monad: it's the strict identity monad (also known as the 'Eval' monad.) Treatment of bottom is a part of your abstraction!
(I previously claimed that we could always use undefined :: m a as a left zero, I now stand corrected: this statement only holds for 'strict' monads, a moniker which describes IO, STM and ST, and any monads based on them. Indeed, I'll stick my neck out and claim any monad which can witness bottoms must be strict.) Bottoms may be zeros in strict monads, but they are not necessarily recoverable. `runX (recover (True<$ undefined) (const $ return False))` may be equivalent to `undefined`, not to `runX (return False)`. See my example of such "IO based" X below. I want to have options.
I think this touches on a key disagreement, which is that I think that in IO-like monads you need to be able to recover from bottoms. See below.
Let's consider the following X (using the `operational` library):
data Command α where
  -- Note that we do not employ exceptions here. If command stream
  -- transport fails, the interpreter is supposed to start the
  -- appropriate emergency procedures (or switch to a backup channel).
  -- Sending more commands wouldn't help anyway.
  AdjustControlRods ∷ Height → Command Bool
  AdjustCoolantFlow ∷ ...
  AdjustSecondaryCoolantFlow ∷ ...
  RingTheAlarm ∷ ...
  -- We could even add an unrecoverable (/in the Controller monad/)
  -- error and the corresponding "finally" command.
  ...
  ReadCoolantFlow ∷ ...
  ...
type Controller = Program Command
-- Run a simulation
simulate ∷ PowerPlantState → Controller α → (α, PowerPlantState)
-- Run for real
run ∷ Controller α → IO α
type X = ErrorT SomeException Controller
So the effects here are decoupled from control operations. Would you still say that finalizers are useless here because exception handling is implemented by pure means?
I think your simulation is incomplete. Let's make this concrete: suppose I'm running one of these programs, and I type ^C (because I want to stop the program and do something else.) In normal 'run' operation, I would expect the program to run some cleanup operations and then exit. But there's no way for the simulation to do that! We've lost something here. I'm not sure I would want to go ^C on a power plant controlling software, but OK. We could accommodate external interruptions by:

1. Adding `Finally ∷ Controller α → (Maybe α → Controller β) → Command (α, β)` and a `MonadFinally Controller` instance (and modifying interpreters to maintain finalizer stacks):

   instance MonadFinally Controller where
     finally' m = singleton . Finally m

2. Writing more simulators with different interruption strategies (e.g. using StdGen, or `interrupt ∷ PowerPlantState → Bool`, etc).
You are free to create another interface that supports "unrecoverable" exceptions, and to supply appropriate semantics for this more complicated interface. However, I don't think it's right to claim this interface is appropriate for IO-style exceptions, which are (and which users expect to be) always recoverable. Why exactly not? I think that everything useful written with this assumption in mind can be rewritten to use `finally`, just like I did with `withMVar` (the version in `base` actually uses `onException`).
I think the argument here is similar to your argument: saying that all IO exceptions are recoverable is more precise. An interface that specifies recoverable exceptions is more precise than an interface that specifies recoverable *and* unrecoverable exceptions. There is a crucial difference here. In the MonadZero case, being more precise means having a smaller language (only `mzero`, no `mplus`) and fewer laws, which means that the code you write is more general (applicable to a greater number of monads). But in the recoverable/unrecoverable case, being more precise means having /more/ laws (all zeros vs only some zeros are recoverable). Therefore the code you write is less general (applicable to fewer monads). Having no unrecoverable errors is a special case of maybe having some.
Stepping back for a moment, I think the conversation here would be helped if we dropped loaded terms like "general" and "precise" and talked about concrete properties:
- A typeclass has more/less laws (equivalently, the typeclass constrains what else an object can do, outside of an instance),
- A typeclass requires an instance to support more/less operations,
- A typeclass can be implemented for more/less objects
One important point is that "general" is not necessarily "good". For example, imagine I have a monad typeclass that has the "referential transparency" law (why are you using a monad?! Well, never mind that for now.) Obviously, the IO monad cannot validly be an instance of this typeclass. But despite only admitting instances for a subset of monads, being "less general", I think most people who've bought into Haskell agree that referentially transparent code is good code! This is the essential tension of generality and specificity: if it's too general, "anything goes", but if it's too specific, it lacks elegance.
So, there is a definitive and tangible difference between "all bottoms are recoverable" and "some bottoms are recoverable." The former corresponds to an extra law along the lines of "I can always catch exceptions." This reduces the number of objects the typeclass can be implemented for (or, if you may, it reduces the number of admissible implementations for the typeclass), but I would like to defend this as good, much like referential transparency is a good restriction. OK, what MonadUnrecoverableException exactly do you have in mind? I was thinking about something like (no asynchronous exceptions for now):

-- ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
-- RECOVERABLE_ZEROS = zeros recoverable /by `recover`/.
-- e.g. `mzero` may not be in RECOVERABLE_ZEROS, even though it is
-- recoverable by `mplus`.
class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
  -- let eval m = recover (Right <$> m) (return . Left)
  -- LAWS:
  --   eval (return a) = return $ Right a
  --   eval (abort e)  = return $ Left e
  --   eval (m >>= f)  = eval m >>= either (return . Left) (eval . f)
  recover ∷ μ α → (e → μ α) → μ α

-- RECOVERABLE_ZEROS(μ) = ZEROS(μ)
class MonadRecover e μ ⇒ MonadRecoverAll e μ | μ → e where
  -- EXTRA LAW:
  --   ∀ z ∷ μ α . (∀ f ∷ α → μ β . z >>= f = z)
  --     => eval z = return $ Left ERROR(z)
  -- No new methods.

finallyDefault ∷ MonadRecoverAll e μ ⇒ μ α → (Maybe α → μ β) → μ (α, β)
finallyDefault m f = do
  a ← m `recover` \e → f Nothing >> abort e
  (a, ) <$> f (Just a)
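For what it's worth, here is a sketch (mine, and like the rest of this discussion it ignores asynchronous exceptions) of the generic utilities one would hope to write exactly once on top of these classes; the class declarations are repeated as stubs only to keep the snippet self-contained.

{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}

class Monad m => MonadAbort e m | m -> e where
  abort :: e -> m a

class MonadAbort e m => MonadRecover e m | m -> e where
  recover :: m a -> (e -> m a) -> m a

class Monad m => MonadFinally m where
  -- The finalizer sees Just the result on success and Nothing otherwise.
  finally' :: m a -> (Maybe a -> m b) -> m (a, b)

finally :: MonadFinally m => m a -> m b -> m a
finally m f = fmap fst (finally' m (const f))

onException :: MonadFinally m => m a -> m b -> m a
onException m f = fmap fst (finally' m handler)
  where
    handler Nothing  = f >> return ()   -- failure: run the action
    handler (Just _) = return ()        -- success: do nothing

bracket :: MonadFinally m => m r -> (r -> m b) -> (r -> m a) -> m a
bracket acquire release use = do
  r <- acquire
  fmap fst (finally' (use r) (const (release r)))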
For example, what happens in these cases:
retry `recover` \e -> writeTVar t v
retry
retry `finally` writeTVar t v

instance MonadFinally STM where
  finally' m f = do
    r ← catchSTM (Right <$> m) (return . Left) `mplus` (f Nothing >> retry)
    either (\e → f Nothing >> throwSTM e) (\a → (a, ) <$> f (Just a)) r
~> writeTVar t v >> retry
~> retry
retry+finally may actually not be quite as useless as it seems. The finalizer could contain IORef manipulations (collecting some statistics, implementing custom retry strategies, etc).
error "foo" `mplus` return ()
It depends. In STM and Maybe it would be `error "foo"`. In some right-biased monad it could be `return ()`. But that's OK, because `mplus` is required to have `mzero` as its zero, everything (other kinds of failures; and `error "foo"` is not necessarily a failure!) on top of that is implementation dependent. The same goes for `recover`: everything on top of handling `abort` is implementation dependent.
Playing along with the recoverable/unrecoverable idea for a bit, I think I agree with your assessments for the first two cases (the second is a bit weird but I agree that it could be useful.) But I'm very worried about your refusal to give a law for the third case. In some ways, this is exactly how we got into the MonadPlus mess in the first place. I'm not sure I understand. Neither MonadOr nor MonadPlus has laws about `error`, so the behavior is implementation dependent for both `mplus` and `morelse`.
Well, in that case, recover/finally are being awfully nosy sticking their laws into non-bottom zeros. :^) `recover` should be tied to `abort` in the same manner as `mplus` is tied to `mzero`. But I admit that MonadFinally is wacky, I can't even give laws for it.
I think the distinction between recoverable and unrecoverable is reasonable, and could be the basis for an interesting typeclass. But the notion needs to be made a lot more precise. It is actually not about creating a new typeclass (what methods would it have anyway?), it is about relaxing rules for `recover`.
MonadOr and MonadPlus have functions with the same type signatures, but they are named differently to avoid confusion. I suggest something like that would be useful here. The problem with MonadPlus and MonadOr is that their respective laws are incomparable. As I already mentioned, "no unrecoverable errors" is a special case of "maybe some unrecoverable errors", so we have a nice inclusion relation. We could have a "marker" (no new methods, just laws) typeclass for monads with no unrecoverable errors. But I suspect it would be rarely used in the presence of `finally`.
I disagreed.
So, to sum things up. I want a generic (works with weird interpreters too), modular (several typeclasses) API for error handling. You want to squash all the typeclasses into one, staying as close to IO exceptions as possible and merging MonadPlus in (where applicable). Am I correct? Hopefully I showed that there are some useful monads that do not fit into this "one class to rule them all" approach.
That's close, although it misses some subtleties I developed in this message.
I want to squash all the typeclasses into one, staying as close to IO exceptions as possible. This is because bottom is special, and I think it's worth giving a typeclass for handling it specially. Let's call this typeclass MonadException. So I won't be able to use `catch` with `ErrorT SomeException` or interpreters that do not handle bottoms?
MonadPlus is a fine typeclass and can be left as is. I don't think MonadException should interact with MonadPlus zeros. In fact, I don't even think IO should really be MonadPlus. What about `MaybeT IO` and STM?
I also think that unrecoverable/recoverable exceptions is a legitimate idea. I think it could get its own typeclass, let's call it MonadUnrecoverableException. I don't think any MonadException is automatically a MonadUnrecoverableException, by appealing to the laws of MonadException. I'm confused. What methods/laws would MonadUnrecoverableException contain?
I make no claim about whether or not a modular API would make sense for the unrecoverable/recoverable exception idea. Maybe it does, I haven't thought hard enough about it. However, I definitely object to such an API being used in a generic fashion, for ordinary IO as well. Then we'll need two different APIs for error handling. One for "exactly like IO" and one for "not exactly like IO". And all the common idioms (starting with the contents of `Control.Exception`) will need to be implemented twice. I just want to be able to write
-- Assuming the ConstraintKinds extension is enabled.
type MonadException μ = (MonadAbort SomeException μ, ...)

and get all the utilities for free.
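As an aside, the ConstraintKinds trick itself is easy to demonstrate; in this sketch (mine) two standard mtl classes stand in for the exception classes, since the point is only the constraint synonym:

{-# LANGUAGE ConstraintKinds, FlexibleContexts #-}

import Control.Monad.IO.Class (MonadIO, liftIO)
import Control.Monad.State.Class (MonadState, get, put)

-- A constraint synonym: code written against MonadApp works in any monad
-- satisfying all of the listed classes, and picks up new utilities for free.
type MonadApp m = (MonadIO m, MonadState Int m)

tick :: MonadApp m => m ()
tick = do
  n <- get
  liftIO (print n)
  put (n + 1)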

Excerpts from Mikhail Vorozhtsov's message of Tue Jan 24 07:26:35 -0500 2012:
Sure, but note that evaluate for IO is implemented with seq# under the hood, so as long as you actually get ordering in your monad it's fairly straightforward to implement evaluate. (Remember that the ability to /catch/ an error thrown by evaluate is separate from the ability to /evaluate/ a thunk which might throw an error.) Yes, of course. The purpose of MonadUnbottom is to guarantee that `Control.Exception.throw e ∷ μ α = abort (toException e)`. The choice of a class method is somewhat arbitrary here (one could go with `α → μ (Either SomeException α)` or with no methods at all).
I want to highlight the strangeness of "exception-like" monads that don't have a MonadUnbottom instance (for concreteness, let's assume that there are no methods associated with it). What would you expect this code to do?

catch (throw (UserError "Foo")) (\_ -> putStrLn "caught") >> putStrLn "ignored result"

If we don't have ordering, the monad is permitted to entirely ignore the thrown exception. (In fact, you can see this with the lazy state monad, so long as you don't force the state value.) Just like in lazy IO, exceptions can move around to places you don't expect them.
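A concrete illustration of the lazy-state point (my example, not Edward's): in Control.Monad.State.Lazy a bottom stashed in the state is silently dropped as long as nothing ever forces it.

import Control.Monad.State.Lazy

-- The error placed into the state by the first put is never forced, because
-- the second put overwrites it before anyone looks at it; example == 42.
example :: Int
example = flip evalState (0 :: Int) $ do
  put (error "this bottom is never observed")
  put 42
  get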
I think your simulation is incomplete. Let's make this concrete: suppose I'm running one of these programs, and I type ^C (because I want to stop the program and do something else.) In normal 'run' operation, I would expect the program to run some cleanup operations and then exit. But there's no way for the simulation to do that! We've lost something here. I'm not sure I would want to go ^C on a power plant controlling software, but OK. We could accommodate external interruptions by:
1. Adding `Finally ∷ Controller α → (Maybe α → Controller β) → Command (α, β)` and a `MonadFinally Controller` instance (and modifying interpreters to maintain finalizer stacks):
instance MonadFinally Controller where
  finally' m = singleton . Finally m
2. Writing more simulators with different interruption strategies (e.g. using StdGen, or `interrupt ∷ PowerPlantState → Bool`, etc).
I think this scheme would work, because your interpreter slices up the actions finely enough so that the interpreter always gets back control before some action happens.
Stepping back for a moment, I think the conversation here would be helped if we dropped loaded terms like "general" and "precise" and talked about concrete properties:
- A typeclass has more/less laws (equivalently, the typeclass constrains what else an object can do, outside of an instance),
- A typeclass requires an instance to support more/less operations,
- A typeclass can be implemented for more/less objects
One important point is that "general" is not necessarily "good". For example, imagine I have a monad typeclass that has the "referential transparency" law (why are you using a monad?! Well, never mind that for now.) Obviously, the IO monad cannot validly be an instance of this typeclass. But despite only admitting instances for a subset of monads, being "less general", I think most people who've bought into Haskell agree that referentially transparent code is good code! This is the essential tension of generality and specificity: if it's too general, "anything goes", but if it's too specific, it lacks elegance.
So, there is a definitive and tangible difference between "all bottoms are recoverable" and "some bottoms are recoverable." The former corresponds to an extra law along the lines of "I can always catch exceptions." This reduces the number of objects the typeclass can be implemented for (or, if you may, it reduces the number of admissible implementations for the typeclass), but I would like to defend this as good, much like referential transparency is a good restriction. OK, what MonadUnrecoverableException exactly do you have in mind?
I don't know, I've never needed one! :^)
I was thinking about something like (no asynchronous exceptions for now):
-- ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
Do you have a motivation behind this division? Are there non-finalizable but recoverable zeros? Why can't I use aborts to throw non-recoverable or non-finalizable zeros? Maybe there should be a hierarchy of recoverability, since I might have a top-level controller which can "kill and spawn" processes? Maybe we actually want a lattice structure? Someone has coined a term for this problem before: it is an "embarrassment of riches". There is so much latitude of choice here that it's hard to know what the right thing to do is.
-- RECOVERABLE_ZEROS = zeros recoverable /by `recover`/.
-- e.g. `mzero` may not be in RECOVERABLE_ZEROS, even though it is
-- recoverable by `mplus`.
class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
  -- let eval m = recover (Right <$> m) (return . Left)
  -- LAWS:
  --   eval (return a) = return $ Right a
  --   eval (abort e)  = return $ Left e
  --   eval (m >>= f)  = eval m >>= either (return . Left) (eval . f)
  recover ∷ μ α → (e → μ α) → μ α
-- RECOVERABLE_ZEROS(μ) = ZEROS(μ)
class MonadRecover e μ ⇒ MonadRecoverAll e μ | μ → e where
  -- EXTRA LAW:
  --   ∀ z ∷ μ α . (∀ f ∷ α → μ β . z >>= f = z)
  --     => eval z = return $ Left ERROR(z)
  -- No new methods.
finallyDefault ∷ MonadRecoverAll e μ ⇒ μ α → (Maybe α → μ β) → μ (α, β)
finallyDefault m f = do
  a ← m `recover` \e → f Nothing >> abort e
  (a, ) <$> f (Just a)
I must admit, I have some difficulty seeing what is going on here. Does an instance of the type class indicate that there /exist/ zeros of some type (zeros that would otherwise be untouchable)?
Well, in that case, recover/finally are being awfully nosy sticking their laws into non-bottom zeros. :^) `recover` should be tied to `abort` in the same manner as `mplus` is tied to `mzero`. But I admit that MonadFinally is wacky, I can't even give laws for it.
It would be cool if we could figure this out.
I want to squash all the typeclasses into one, staying as close to IO exceptions as possible. This is because bottom is special, and I think it's worth giving a typeclass for handling it specially. Let's call this typeclass MonadException. So I won't be able to use `catch` with `ErrorT SomeException` or interpreters that do not handle bottoms?
If we can unify the semantics in a sensible way, I have no objection (the choice of exceptions or pure values is merely an implementation detail.) But it's not obvious that this is the case, especially when you want to vary the semantics in interesting ways. Some other points here: - Why not use exceptions? I've always thought ErrorT was not such a good idea, if you are in an IO based monad stack. If you're in a left-associated stack of binds, exceptions are more efficient. (The obvious answer is: you want the semantics to be different. But then...) - If the semantics are different, OK, now you need to write two catch functions, but you are handling each type of exception separately already, right?
MonadPlus is a fine typeclass and can be left as is. I don't think MonadException should interact with MonadPlus zeros. In fact, I don't even think IO should really be MonadPlus. What about `MaybeT IO` and STM?
IO has effects, so if I have mplus (effect >> mzero) a, this equals effect >> a, not a. Same applies for MaybeT IO. I have to be very careful to preserve the monoid property. STM, on the other hand, by definition has the ability to rollback. This is what makes it so nice!
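Concretely (a sketch of mine): in MaybeT IO the failed branch's effects are not rolled back, so the monoid reading of mplus only holds up to those effects.

import Control.Monad (mplus, mzero)
import Control.Monad.IO.Class (liftIO)
import Control.Monad.Trans.Maybe (MaybeT (..))

-- Prints "effect" and then yields Just (): mplus recovers from the mzero,
-- but the effect that preceded it has already happened.
demo :: IO (Maybe ())
demo = runMaybeT $ (liftIO (putStrLn "effect") >> mzero) `mplus` return ()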
I also think that unrecoverable/recoverable exceptions is a legitimate idea. I think it could get its own typeclass, let's call it MonadUnrecoverableException. I don't think any MonadException is automatically a MonadUnrecoverableException, by appealing to the laws of MonadException. I'm confused. What methods/laws would MonadUnrecoverableException contain?
They'd be very simple! Unrecoverable exceptions always cause program execution to "get stuck." There are no contexts (like catch) which affect them.
I make no claim about whether or not a modular API would make sense for the unrecoverable/recoverable exception idea. Maybe it does, I haven't thought hard enough about it. However, I definitely object to such an API being used in a generic fashion, for ordinary IO as well. Then we'll need two different APIs for error handling. One for "exactly like IO" and one for "not exactly like IO". And all the common idioms (starting with the contents of `Control.Exception`) will need to be implemented twice. I just want to be able to write
-- Assuming the ConstraintKinds extension is enabled. type MonadException μ = (MonadAbort SomeException μ, ...)
and get all the utilities for free.
Yes, I think for some this is the crux of the issue. Indeed, it is why monad-control is so appealing, it dangles in front of us the hope that we do, in fact, only need one API. But, if some functions in fact do need to be different between the two cases, there's not much we can do, is there? Cheers, Edward

On 01/24/2012 10:56 PM, Edward Z. Yang wrote:
Excerpts from Mikhail Vorozhtsov's message of Tue Jan 24 07:26:35 -0500 2012:
Sure, but note that evaluate for IO is implemented with seq# under the hood, so as long as you actually get ordering in your monad it's fairly straightforward to implement evaluate. (Remember that the ability to /catch/ an error thrown by evaluate is separate from the ability to /evaluate/ a thunk which might throw an error.) Yes, of course. The purpose of MonadUnbottom is to guarantee that `Control.Exception.throw e ∷ μ α = abort (toException e)`. The choice of a class method is somewhat arbitrary here (one could go with `α → μ (Either SomeException α)` or with no methods at all).
I want to highlight the strangeness of "exception-like" monads that don't have a MonadUnbottom instance (for concreteness, let's assume that there are no methods associated with it). What would you expect this code to do?
catch (throw (UserError "Foo")) (\_ -> putStrLn "caught") >> putStrLn "ignored result"
If we don't have ordering, the monad is permitted to entirely ignore the thrown exception. (In fact, you can see this with the lazy state monad, so long as you don't force the state value.) Just like in lazy IO, exceptions can move around to places you don't expect them. You are trying to make bottoms the new null pointers. Sometimes it just isn't worth the effort (or depends on the interpreter you use). I want to have the option to say: sorry, in this particular case (monad) I don't distinguish `error` from non-termination, so `catch ⊥ h = ⊥`.
[snip]
Stepping back for a moment, I think the conversation here would be helped if we dropped loaded terms like "general" and "precise" and talked about concrete properties:
- A typeclass has more/less laws (equivalently, the typeclass constrains what else an object can do, outside of an instance),
- A typeclass requires an instance to support more/less operations,
- A typeclass can be implemented for more/less objects
One important point is that "general" is not necessarily "good". For example, imagine I have a monad typeclass that has the "referential transparency" law (why are you using a monad?! Well, never mind that for now.) Obviously, the IO monad cannot validly be an instance of this typeclass. But despite only admitting instances for a subset of monads, being "less general", I think most people who've bought into Haskell agree that referentially transparent code is good code! This is the essential tension of generality and specificity: if it's too general, "anything goes", but if it's too specific, it lacks elegance.
So, there is a definitive and tangible difference between "all bottoms are recoverable" and "some bottoms are recoverable." The former corresponds to an extra law along the lines of "I can always catch exceptions." This reduces the number of objects the typeclass can be implemented for (or, if you may, it reduces the number of admissible implementations for the typeclass), but I would like to defend this as good, much like referential transparency is a good restriction. OK, what MonadUnrecoverableException exactly do you have in mind?
I don't know, I've never needed one! :^)
I was thinking about something like (no asynchronous exceptions for now):
-- ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
Do you have a motivation behind this division? Are there non-finalizable but recoverable zeros? Why can't I use aborts to throw non-recoverable or non-finalizable zeros? Maybe there should be a hierarchy of recoverability, since I might have a top-level controller which can "kill and spawn" processes? Maybe we actually want a lattice structure? I think it is one of the simplest layouts one can come up with. I'll try to explain the motivation behind each inclusion.
ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ)

Why are they not equal? After all we can always write `recover weird $ \e → abort e`, right? But zeros from `RECOVERABLE_ZEROS \ ABORTS` may have additional effects. For example, recoverable interruptions could permanently disable blocking operations (you can close a socket but you can't read/write from it). Why is the inclusion not the other way around? Well, I find the possibility of `abort e1` and `abort e2` having different semantics (vs `recover` or `finally`) terrifying. If you can throw unrecoverable exceptions, you should have a different function for that.

RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ)

If a zero is recoverable, we can always "finalize" it (by catch-and-rethrow).

FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)

This one is pretty obvious. One example of non-finalizable zeros is bottoms in a non-MonadUnbottom monad (e.g. my X monad). Another would be `System.Posix.Process.exitImmediately`.
Someone has coined a term for this problem before: it is an "embarrassment of riches". There is so much latitude of choice here that it's hard to know what the right thing to do is.
-- RECOVERABLE_ZEROS = zeros recoverable /by `recover`/.
-- e.g. `mzero` may not be in RECOVERABLE_ZEROS, even though it is
-- recoverable by `mplus`.
class MonadAbort e μ ⇒ MonadRecover e μ | μ → e where
  -- let eval m = recover (Right <$> m) (return . Left)
  -- LAWS:
  --   eval (return a) = return $ Right a
  --   eval (abort e)  = return $ Left e
  --   eval (m >>= f)  = eval m >>= either (return . Left) (eval . f)
  recover ∷ μ α → (e → μ α) → μ α
-- RECOVERABLE_ZEROS(μ) = ZEROS(μ)
class MonadRecover e μ ⇒ MonadRecoverAll e μ | μ → e where
  -- EXTRA LAW:
  --   ∀ z ∷ μ α . (∀ f ∷ α → μ β . z >>= f = z)
  --     => eval z = return $ Left ERROR(z)
  -- No new methods.
finallyDefault ∷ MonadRecoverAll e μ ⇒ μ α → (Maybe α → μ β) → μ (α, β)
finallyDefault m f = do
  a ← m `recover` \e → f Nothing >> abort e
  (a, ) <$> f (Just a)
I must admit, I have some difficulty seeing what is going on here. Does an instance of the type class indicate that there /exist/ zeros of some type (zeros that would otherwise be untouchable)? A MonadRecoverAll instance indicates that /all/ (can be relaxed to "all finalizable") left zeros are recoverable by `recover`. The implementation-dependent meta-function `ERROR` maps zeros to the error type `e` (but `ERROR (abort e) = e` always holds).
Well, in that case, recover/finally are being awfully nosy sticking their laws into non-bottom zeros. :^) `recover` should be tied to `abort` in the same manner as `mplus` is tied to `mzero`. But I admit that MonadFinally is wacky, I can't even give laws for it.
It would be cool if we could figure this out. The "no error" and "recoverable error" scenarios are simple. But when an unrecoverable error occurs, the finalizer may well be executed in a different environment altogether (e.g. something like C `atexit`).
I want to squash all the typeclasses into one, staying as close to IO exceptions as possible. This is because bottom is special, and I think it's worth giving a typeclass for handling it specially. Let's call this typeclass MonadException. So I won't be able to use `catch` with `ErrorT SomeException` or interpreters that do not handle bottoms?
If we can unify the semantics in a sensible way, I have no objection (the choice of exceptions or pure values is merely an implementation detail.) But it's not obvious that this is the case, especially when you want to vary the semantics in interesting ways. That's why I'm trying to make things like MonadUnbottom optional.
Some other points here:
- Why not use exceptions? I've always thought ErrorT was not such a good idea, if you are in an IO based monad stack. If you're in a left-associated stack of binds, exceptions are more efficient. (The obvious answer is: you want the semantics to be different. But then...)
- If the semantics are different, OK, now you need to write two catch functions, but you are handling each type of exception separately already, right? You have to handle IO exceptions only if you "leak" them from your implementation. For transformer stacks it is always so, for some interpreters it is not. The `ErrorT e IO` problem is related to another can of worms: operation lifting through transformers.
MonadPlus is a fine typeclass and can be left as is. I don't think MonadException should interact with MonadPlus zeros. In fact, I don't even think IO should really be MonadPlus. What about `MaybeT IO` and STM?
IO has effects, so if I have mplus (effect >> mzero) a, this equals effect >> a, not a. The same applies to MaybeT IO. I have to be very careful to preserve the monoid property. STM, on the other hand, by definition has the ability to rollback. This is what makes it so nice! Should STM/`MaybeT IO` have MonadException instances? How will `catch` and `finally` interact with `retry`/`MaybeT (return Nothing)`?
I also think that unrecoverable/recoverable exceptions is a legitimate idea. I think it could get its own typeclass, let's call it MonadUnrecoverableException. I don't think any MonadException is automatically a MonadUnrecoverableException, by appealing to the laws of MonadException. I'm confused. What methods/laws would MonadUnrecoverableException contain?
They'd be very simple! Unrecoverable exceptions always cause program execution to "get stuck." There are no contexts (like catch) which affect them. So you are suggesting something like
class MonadUnrecoverableException μ where
  throwUnrecoverable ∷ Exception e ⇒ e → μ α

But I'm not interested in throwing such exceptions! It may not even be possible (allowed) to do that from within the monad itself (e.g. external interruptions in my X monad). All I care about is that unrecoverable zeros (not necessarily tied to Exception) exist, which means that I cannot implement `finally` on top of `catch`.
I make no claim about whether or not a modular API would make sense for the unrecoverable/recoverable exception idea. Maybe it does, I haven't thought hard enough about it. However, I definitely object to such an API being used in a generic fashion, for ordinary IO as well. Then we'll need two different APIs for error handling. One for "exactly like IO" and one for "not exactly like IO". And all the common idioms (starting with the contents of `Control.Exception`) will need to be implemented twice. I just want to be able to write
-- Assuming the ConstraintKinds extension is enabled. type MonadException μ = (MonadAbort SomeException μ, ...)
and get all the utilities for free.
Yes, I think for some this is the crux of the issue. Indeed, it is why monad-control is so appealing, it dangles in front of us the hope that we do, in fact, only need one API.
But, if some functions in fact do need to be different between the two cases, there's not much we can do, is there? Yes, but on the other hand I don't want to reimplement ones that are the same. I want to have a modular solution precisely because it allows both sharing and extensibility.

Excerpts from Mikhail Vorozhtsov's message of Sun Jan 29 05:34:17 -0500 2012:
You are trying to make bottoms the new null pointers. Sometimes it just isn't worth the effort (or depends on the interpreter you use). I want to have the option to say: sorry, in this particular case (monad) I don't distinguish `error` from non-termination, so `catch ⊥ h = ⊥`.
This is a longstanding complaint that Robert Harper has with lazy languages (the "paucity of types" complaint): http://existentialtype.wordpress.com/2011/04/24/the-real-point-of-laziness/

There's not much I can say here, except that:

- There really is no difference: GHC can sometimes detect nontermination and will throw an exception (for example, the deadlocked exception), and
- The user will sometimes act as a termination checker, and ^C a program that is taking too long.
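For the first point, here is the canonical illustration (mine; note the detection is best-effort and may not fire under GHCi or if the MVar stays reachable elsewhere): the RTS turns a hopeless takeMVar into an ordinary, catchable exception.

import Control.Concurrent.MVar (MVar, newEmptyMVar, takeMVar)
import Control.Exception (BlockedIndefinitelyOnMVar (..), try)

main :: IO ()
main = do
  mv <- newEmptyMVar :: IO (MVar ())
  r  <- try (takeMVar mv)          -- nobody will ever fill mv
  case r of
    Left BlockedIndefinitelyOnMVar -> putStrLn "deadlock detected by the RTS"
    Right ()                       -> putStrLn "unreachable: the MVar was filled"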
I think it is one of the simplest layouts one can come up with. I'll try to explain the motivation behind each inclusion.
ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ)
I'm sorry, I cannot understand the discussion below because you haven't defined precisely what ABORTS means. (See also below; I think it's time to write something up.)
Why are they not equal? After all we can always write `recover weird $ \e → abort e`, right? But zeros from `RECOVERABLE_ZEROS \ ABORTS` may have additional effects. For example, recoverable interruptions could permanently disable blocking operations (you can close a socket but you can't read/write from it). Why is the inclusion not the other way around? Well, I find the possibility of `abort e1` and `abort e2` having different semantics (vs `recover` or `finally`) terrifying. If you can throw unrecoverable exceptions, you should have a different function for that.
RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ)
If a zero is recoverable, we can always "finalize" it (by catch-and-rethrow).
FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
This one is pretty obvious. One example of non-finalizable zeros is bottoms in a non-MonadUnbottom monad (e.g. my X monad). Another would be `System.Posix.Process.exitImmediately`.
Ugh, don't talk to me about the exit() syscall ;-)
If we can unify the semantics in a sensible way, I have no objection (the choice of exceptions or pure values is merely an implementation detail.) But it's not obvious that this is the case, especially when you want to vary the semantics in interesting ways. That's why I'm trying to make things like MonadUnbottom optional.
Well, I haven't actually checked if this works or not!
- If the semantics are different, OK, now you need to write two catch functions, but you are handling each type of exception separately already, right?
You have to handle IO exceptions only if you "leak" them from your implementation. For transformer stacks it is always so, for some interpreters it is not. The `ErrorT e IO` problem is related to another can of worms: operation lifting through transformers.
OK.
IO has effects, so if I have mplus (effect >> mzero) a, this equals effect >> a, not a. The same applies to MaybeT IO. I have to be very careful to preserve the monoid property. STM, on the other hand, by definition has the ability to rollback. This is what makes it so nice! Should STM/`MaybeT IO` have MonadException instances? How will `catch` and `finally` interact with `retry`/`MaybeT (return Nothing)`?
I don't see why not, as long as they obey the semantics. But someone should do the legwork here.
I also think that unrecoverable/recoverable exceptions is a legitimate idea. I think it could get its own typeclass, let's call it MonadUnrecoverableException. I don't think any MonadException is automatically a MonadUnrecoverableException, by appealing to the laws of MonadException. I'm confused. What methods/laws would MonadUnrecoverableException contain?
They'd be very simple! Unrecoverable exceptions always cause program execution to "get stuck." There are no contexts (like catch) which affect them. So you are suggesting something like
class MonadUnrecoverableException μ where
  throwUnrecoverable ∷ Exception e ⇒ e → μ α
But I'm not interested in throwing such exceptions! It may not even be possible (allowed) to do that from within the monad itself (e.g. external interruptions in my X monad). All I care about is that unrecoverable zeros (not necessarily tied to Exception) exist, which means that I cannot implement `finally` on top of `catch`.
Yes, but in that case, your semantics would have to change to add a case for finally; you'd need to unwind the stack, etc etc. You're talking about finalizable, but unrecoverable exceptions.
Yes, I think for some this is the crux of the issue. Indeed, it is why monad-control is so appealing, it dangles in front of us the hope that we do, in fact, only need one API.
But, if some functions in fact do need to be different between the two cases, there's not much we can do, is there? Yes, but on the other hand I don't want to reimplement ones that are the same. I want to have a modular solution precisely because it allows both sharing and extensibility.
The cardinal sin of object-oriented programming is building abstractions in deference to code reuse, not the other way around.

Stepping back a moment, I think the most useful step for you is to write up a description of your system, incorporating the information from this discussion, and once we have the actual article in hand we can iterate from there.

Edward

On 01/29/2012 11:55 PM, Edward Z. Yang wrote:
Excerpts from Mikhail Vorozhtsov's message of Sun Jan 29 05:34:17 -0500 2012: [snip]
I think it is one of the simplest layouts one can come up with. I'll try to explain the motivation behind each inclusion.
ABORTS(μ) ⊆ RECOVERABLE_ZEROS(μ)
I'm sorry, I cannot understand the discussion below because you haven't defined precisely what ABORTS means. (See also below; I think it's time to write something up.) ABORTS(μ) = { abort e | e ∷ e }
Why are they not equal? After all we can always write `recover weird $ \e → abort e`, right? But zeros from `RECOVERABLE_ZEROS \ ABORTS` may have additional effects. For example, recoverable interruptions could permanently disable blocking operations (you can close a socket but you can't read/write from it). Why is the inclusion not the other way around? Well, I find the possibility of `abort e1` and `abort e2` having different semantics (vs `recover` or `finally`) terrifying. If you can throw unrecoverable exceptions, you should have a different function for that.
RECOVERABLE_ZEROS(μ) ⊆ FINALIZABLE_ZEROS(μ)
If a zero is recoverable, we can always "finalize" it (by catch-and-rethrow).
FINALIZABLE_ZEROS(μ) ⊆ ZEROS(μ)
This one is pretty obvious. One example of non-finalizable zeros is bottoms in a non-MonadUnbottom monad (e.g. my X monad). Another would be `System.Posix.Process.exitImmediately`.
Ugh, don't talk to me about the exit() syscall ;-)
[snip]
Yes, I think for some this is the crux of the issue. Indeed, it is why monad-control is so appealing, it dangles in front of us the hope that we do, in fact, only need one API.
But, if some functions in fact do need to be different between the two cases, there's not much we can do, is there? Yes, but on the other hand I don't want to reimplement ones that are the same. I want to have a modular solution precisely because it allows both sharing and extensibility.
The cardinal sin of object-oriented programming is building abstractions in deference to code reuse, not the other way around.
Stepping back a moment, I think the most useful step for you is to write up a description of your system, incorporating the information from this discussion, and once we have the actual article in hand we can iterate from there. I'll probably release an updated (and documented) version of monad-abort-fd when I have enough time. At the moment I'm just overloaded with work.
Participants (4): Bas van Dijk, Brandon Allbery, Edward Z. Yang, Mikhail Vorozhtsov