
On Wed, May 5, 2010 at 10:51 PM, Michael Snoyman wrote:

* When a connection is released, it goes to the end of the pool, so connections get used evenly (not sure if this actually matters in practice).

On Thu, May 6, 2010 at 9:13 AM, Bryan O'Sullivan wrote:

In practice, you're better off letting idle connections stay that way, because then your DB server can close connections and free up resources. In other words, when you're done with a connection, put it at the front of the reuse queue, not the back. You'll also want to handle the possibility that a connection that you grab from the pool has been closed by the server. Many connection pooling implementations I've seen get this wrong in subtle or expensive ways.
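For what it's worth, here is a minimal sketch of that strategy, assuming hypothetical isAlive, destroy and create actions for your connection type: the pool is just a LIFO list, releases go to the front, and acquisition skips connections the server has already closed.

import Data.IORef (IORef, atomicModifyIORef)

-- Release: put the connection at the *front* of the reuse list (LIFO),
-- so the most recently used connection is handed out next and truly
-- idle connections drift to the back, where the server may close them.
release :: IORef [conn] -> conn -> IO ()
release ref c = atomicModifyIORef ref (\cs -> (c : cs, ()))

-- Acquire: take from the front, but skip (and destroy) connections the
-- server has already closed; fall back to opening a fresh one.
-- 'isAlive', 'destroy' and 'create' are hypothetical placeholders.
acquire :: IORef [conn]
        -> (conn -> IO Bool)  -- isAlive
        -> (conn -> IO ())    -- destroy
        -> IO conn            -- create
        -> IO conn
acquire ref isAlive destroy create = go
  where
    go = do
      mc <- atomicModifyIORef ref pop
      case mc of
        Nothing -> create
        Just c  -> do
          alive <- isAlive c
          if alive then return c else destroy c >> go
    pop []     = ([], Nothing)
    pop (c:cs) = (cs, Just c)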
On Thu, May 6, 2010 at 3:24 PM, Michael Snoyman wrote:

Thanks for the feedback. I've gone ahead and implemented a simple resource pool module. Since I need it to work with monad transformer stacks, I've built it on top of MonadCatchIO-transformers. I've put the code up in a gist on github [1]. I would appreciate it if anyone could review this, especially to make sure the exception handling code is correct; block and unblock in particular concern me.

Thanks,
Michael

[1] http://gist.github.com/392078
On Thu, May 6, 2010 at 7:48 PM, Bas van Dijk wrote:

I also have a suggestion for your design. (Note however that I don't have much experience with resource pools.)
In your current design a Pool has a fixed maximum number of opened resources. I can imagine situations where the maximum number of opened resources needs to change dynamically, for example when plugging in (or removing) a blade server at run-time increases (or decreases) the number of resources that can be handled.
So what about changing:
createPool :: IO a -> Int -> IO (Pool a)

to:

createPool :: IO (Maybe a) -> IO (Pool a)
so that, instead of statically storing the maximum number of opened resources (the Int), the resource creation function decides for itself when it has created enough (the Maybe a).
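As a minimal sketch of such a creation action (the two TVars and the open action here are made up for illustration), the limit could live in a TVar that another thread adjusts at run-time:

import Control.Applicative ((<$>))
import Control.Concurrent.STM.TVar (TVar, readTVar, writeTVar)
import Control.Monad.STM (atomically)

-- Only run 'open' while fewer than the current maximum of resources
-- exist; the maximum itself can be raised or lowered from another
-- thread by writing to 'maxVar'.
createWithLimit :: TVar Int  -- current maximum (adjustable at run-time)
                -> TVar Int  -- number of resources created so far
                -> IO a      -- action that actually opens a resource
                -> IO (Maybe a)
createWithLimit maxVar countVar open = do
    ok <- atomically $ do
        limit <- readTVar maxVar
        n     <- readTVar countVar
        if n >= limit
            then return False
            else writeTVar countVar (n + 1) >> return True
    if ok then Just <$> open else return Nothing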
Regards,
Bas
On Thu, May 6, 2010 at 11:54 PM, Bas van Dijk wrote:

How about something like this:
--------------------------------------------------------------------------------
{-# LANGUAGE NoImplicitPrelude #-} -- (I like to be explicit)
module Pool (Pool, new, withPool) where
import Data.Function               ( ($), (.) )
import Data.Maybe                  ( Maybe(Nothing, Just), maybe )
import Data.Functor                ( (<$>) )
import Control.Monad               ( return, (>>=), (>>), (=<<), fail, join, liftM )
import Control.Monad.STM           ( atomically )
import Control.Concurrent.STM.TVar ( TVar, newTVarIO, readTVar, writeTVar )
import Control.Monad.CatchIO       ( MonadCatchIO, block, finally )
import Control.Monad.IO.Class      ( liftIO )
newtype Pool r = Pool (TVar [r])
new :: MonadCatchIO m => m (Pool r)
new = liftIO $ Pool <$> newTVarIO []
withPool :: MonadCatchIO m => Pool r -> m (Maybe r) -> (r -> m a) -> m (Maybe a)
withPool (Pool tv) mk f = block $ join $ liftIO $ atomically $ do
    rrs <- readTVar tv
    case rrs of
      -- No resource available: try to create one; if mk refuses
      -- (Nothing), give up with Nothing.
      []   -> return $ mk >>= maybe (return Nothing) with
      -- Reuse an existing resource.
      r:rs -> writeTVar tv rs >> return (with r)
  where
    -- Run the action and always put the resource back into the pool.
    with r = liftM Just (f r)
               `finally` liftIO (atomically $ writeTVar tv . (r:) =<< readTVar tv)
--------------------------------------------------------------------------------
Note that I don't store the resource creation action (m (Maybe r)) inside the pool. It's just passed as an argument to withPool.
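To make the calling convention concrete, a use site would look something like this (DBCon, connectWithDB, runQuery and Result are placeholder names):

-- Placeholders: 'connectWithDB' opens a connection (or returns Nothing
-- when no more may be opened) and 'runQuery' uses it.
queryViaPool :: MonadCatchIO m => Pool DBCon -> m (Maybe Result)
queryViaPool pool = withPool pool connectWithDB $ \con -> runQuery con "SELECT 1"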
Regards,
Bas
On Fri, May 7, 2010 at 1:02 AM, Bas van Dijk wrote:

Note that it's probably better to pass the resource creation action as the first argument to withPool:
withPool :: MonadCatchIO m => m (Maybe r) -> Pool r -> (r -> m a) -> m (Maybe a)
This way it's easier to create specialized withPool functions by partially applying a specific resource creation action to withPool as in:
withDBConsPool :: MonadCatchIO m => Pool DBCon -> (DBCon -> m a) -> m (Maybe a)
withDBConsPool = withPool connectWithDB
Regards,
Bas
Bas,

Thank you for all the very thorough comments. If I'm understanding correctly, there are two categories of suggestion:

1) Make the resource exhaustion mechanism more extensible.
2) Avoid "wormholes".

Please tell me if I've missed something.

Regarding (1), I think your approach is definitely better for complex pools; however, for the usual case, I think it would present a more difficult API for users. I could definitely imagine wrapping an easier-to-use interface around your final example. Regarding (2), I was not aware of the issue; thank you for updating me on it.

So, here's my idea of how to wrap your Pool module to provide a simple maximum-resource-count exhaustion algorithm:

{-# LANGUAGE PackageImports #-}
module EasyPool
    ( EasyPool
    , withEasyPool
    , newEasyPool
    ) where

import Pool
import Control.Monad.STM
import Control.Concurrent.STM.TVar
import Control.Monad.IO.Class
import "MonadCatchIO-transformers" Control.Monad.CatchIO

data EasyPool r m = EasyPool
    { epPool :: Pool r
    , epMake :: m (Maybe r)
    }

withEasyPool :: MonadCatchIO m => EasyPool r m -> (r -> m a) -> m (Maybe a)
withEasyPool (EasyPool pool mk) = withPool pool mk

newEasyPool :: MonadCatchIO m => Int -> m r -> m (EasyPool r m)
newEasyPool count mk = do
    pool <- new
    texist <- liftIO $ newTVarIO 0
    return $ EasyPool pool $ mk' texist
  where
    -- Only run mk while fewer than count resources have been created.
    mk' texist = do
        exist <- liftIO $ atomically $ readTVar texist
        if exist >= count
            then return Nothing
            else do
                r <- mk
                liftIO $ atomically $ do
                    -- Re-check inside STM: other threads may have created
                    -- resources while mk was running.
                    exist' <- readTVar texist
                    if exist' >= count
                        then return Nothing  -- the freshly created r is dropped
                        else do
                            writeTVar texist (exist' + 1)
                            return (Just r)
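For completeness, here's a sketch of how this would be used (connectWithDB and runQuery are placeholder names again):

-- Sketch only; 'connectWithDB' and 'runQuery' are placeholders.
main :: IO ()
main = do
    pool <- newEasyPool 10 connectWithDB          -- at most 10 connections
    res  <- withEasyPool pool $ \con -> runQuery con "SELECT 1"
    case res of
        Nothing -> putStrLn "pool exhausted, no connection available"
        Just _  -> putStrLn "query ran"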