
Станислав Черничкин wrote:
Just look at this beautiful mutex implementation https://github.com/ovotech/fs2-kafka/blob/master/src/main/scala/fs2/kafka/in...
As far as I can see, this only works because Java/Scala don't have (or at least very strongly discourage) asynchronous exceptions. Here's my attempt to translate the code into Haskell:

import Control.Concurrent.MVar -- should be an IVar
import Control.Concurrent
import Control.Exception (bracket)
import Control.Monad (void)
import Data.IORef

type Mutex = IORef (MVar ())

newMutex :: IO Mutex
newMutex = do
    next <- newMVar ()
    newIORef next

withMutex :: Mutex -> IO () -> IO ()
withMutex m act = do
    next <- newEmptyMVar
    bracket
        (atomicModifyIORef m (\curr -> (next, curr)))  -- atomic swap
        (\_ -> putMVar next ()) $
        \curr -> do
            -- readMVar is no longer a combination of takeMVar/putMVar
            -- since base 4.7, so we can faithfully emulate an IVar
            readMVar curr
            act

Now if the `readMVar` is interrupted by an asynchronous exception, subsequent threads will be woken up, violating the mutual exclusion property. For example:

mkThread lock nm = do
    tid <- forkIO $ withMutex lock $ do
        putStrLn $ unwords ["thread", nm, "running"]
        threadDelay 200000
        putStrLn $ unwords ["thread", nm, "stopping"]
    yield
    return tid

main = do
    lock <- newMutex
    threadA <- mkThread lock "A"
    threadB <- mkThread lock "B"
    threadC <- mkThread lock "C"
    killThread threadB
    threadDelay 1000000

Output:

thread A running
thread C running
thread C stopping
thread A stopping

Oops.

This is awkward to fix. Basically, when abandoning the lock before it has been released by the previous owner, we need a new thread to wait for the 'current' IVar and notify the 'next' one, since the current thread is being interrupted. So `withMutex` ends up with code like this:

withMutex :: Mutex -> IO () -> IO ()
withMutex m act = do
    next <- newEmptyMVar
    bracket
        (atomicModifyIORef m (\curr -> (next, curr)))
        (cleanup next) $
        \curr -> do
            readMVar curr
            act
  where
    cleanup :: MVar () -> MVar () -> IO ()
    cleanup next curr = do
        -- has the previous owner already released the lock?
        b <- tryReadMVar curr
        case b of
            Just _  -> putMVar next ()
            Nothing -> void $ forkIO $ do
                readMVar curr
                putMVar next ()

This loses a lot of elegance.

On the low-level implementation side, both MVars and IVars need to maintain a list of waiting threads, and both require logic to wake threads up (an IVar wakes all waiting threads; an MVar, when a value is put, wakes the threads reading it, up to and including the first thread (if any) that actually takes the value). I believe MVars are not much more difficult to implement than IVars. (This assumes a global memory; IVars may be simpler in a distributed setting.)

For users, MVars are dangerous if used without restrictions, but we have easy-to-understand patterns, for example for using an MVar as a mutex (newMVar, withMVar) or as an IVar (newEmptyMVar, putMVar, readMVar); there is a small sketch of both patterns at the end of this message.

To summarize, IVars may be harder to misuse, but MVars provide tangible benefits as a primitive, especially in the presence of asynchronous exceptions.

Cheers,

Bertram

P.S.:
1. [MVars are] complex. Each MVar has 2 state transitions, each may block.
It seems worth noting that the IVar state transition also blocks.
2. [MVars do not] play well in presence of asynchronous exceptions.
I can't help smirking about this claim.
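
P.P.S.: For completeness, here is a minimal sketch of the two MVar patterns mentioned above (demoMutex and demoIVar are just illustrative names, not anything from the fs2-kafka code). Running essentially the same A/B/C experiment through withMVar keeps mutual exclusion intact even when a waiting thread is killed.

import Control.Concurrent
import Control.Concurrent.MVar

-- Pattern 1: MVar as a mutex. withMVar takes the value and is
-- guaranteed to put it back even if an asynchronous exception
-- arrives, so killing a waiting thread cannot break mutual
-- exclusion or lose the lock.
demoMutex :: IO ()
demoMutex = do
    lock <- newMVar ()  -- full MVar = unlocked
    let worker nm = do
            tid <- forkIO $ withMVar lock $ \() -> do
                putStrLn $ unwords ["thread", nm, "running"]
                threadDelay 200000
                putStrLn $ unwords ["thread", nm, "stopping"]
            yield
            return tid
    _ <- worker "A"
    threadB <- worker "B"
    _ <- worker "C"
    killThread threadB  -- B is killed while waiting; A and C still exclude each other
    threadDelay 1000000

-- Pattern 2: MVar as a write-once IVar. The writer calls putMVar
-- exactly once; readers use readMVar, which since base 4.7 does not
-- take the value, so any number of readers can block and then all
-- observe it.
demoIVar :: IO ()
demoIVar = do
    result <- newEmptyMVar  -- empty MVar = unfilled IVar
    _ <- forkIO $ do
        threadDelay 100000
        putMVar result (42 :: Int)  -- the single write
    readMVar result >>= print  -- blocks until filled, leaves the value in place

main :: IO ()
main = demoMutex >> demoIVar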