Optimizing a high-traffic network architecture

Folks,

In my current architecture I launch two threads per socket, where the socket reader places results in a TMVar and the socket writer takes input from a TChan. I also have a worker thread that does the bulk of packet processing and a timer thread. The timer thread sleeps for a few minutes and exits after posting a timeout event if it hasn't been killed before.

My goal is to launch 2,000 poker bots that join the server "lobby" and sit there sending small keep-alive packets every few minutes. The ultimate goal is for 4,000 bots to be playing, but I'm taking it one step at a time. This is Mac OS X Tiger with a couple of header files modified to allow an FD_SETSIZE of 10240, which is the maximum allowed by 'ulimit -n'. I'm also running GHC 6.4.1, compiled after FD_SETSIZE was increased.

I can get to 2k bots without any trouble if I use a keep-alive timeout of 9 minutes. Memory usage with 2k bots is 161Mb of physical memory and 262Mb VM, CPU usage 20-40%. Memory usage is constant once all bots have been launched.

With a 1-minute keep-alive timeout the system starts to get stressed almost right away. There's verbose logging going on and almost every event/packet sent and received is traced. The extra logging of the timeout events probably adds to the stress and so, I assume, do the extra packets. New bots are being launched very slowly even with just 200 bots already running.

Based on the above, would you have any suggestions for an improved architecture? I will try 1) disabling logging altogether and 2) increasing the thread stack size to 3k (+RTS -k3k) as per Simon Marlow's suggestion. As per Simon, if a thread's stack space is between 2k and 4k then each thread gets its own memory block (right Simon?) and threads are not GCd then. I'm a bit concerned about tripling my memory use with -k3k, though.

I'm not sure if switching to a continuations-based framework will help me. Has anyone tried this? Thanks, Joel -- http://wagerlabs.com/
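For reference, a rough sketch of the per-bot thread layout described above; the names, the packet format and the omitted timer thread are placeholders, not Joel's actual code:

import Control.Concurrent
import Control.Concurrent.STM
import Network (connectTo, PortID(..))
import System.IO

-- One bot: a reader thread feeding a TMVar, a writer thread draining a
-- TChan, plus the worker in between. Packets are plain lines here only
-- to keep the sketch self-contained.
launchBot :: String -> Int -> IO ()
launchBot host port = do
  h <- connectTo host (PortNumber (fromIntegral port))
  hSetBuffering h NoBuffering
  inbox  <- atomically newEmptyTMVar      -- reader -> worker
  outbox <- atomically newTChan           -- worker -> writer
  forkIO $ readerLoop h inbox
  forkIO $ writerLoop h outbox
  workerLoop inbox outbox

readerLoop :: Handle -> TMVar String -> IO ()
readerLoop h inbox = do
  pkt <- hGetLine h                       -- placeholder for packet parsing
  atomically $ putTMVar inbox pkt
  readerLoop h inbox

writerLoop :: Handle -> TChan String -> IO ()
writerLoop h outbox = do
  pkt <- atomically $ readTChan outbox
  hPutStr h pkt
  writerLoop h outbox

workerLoop :: TMVar String -> TChan String -> IO ()
workerLoop inbox outbox = do
  pkt <- atomically $ takeTMVar inbox
  -- the bulk of the packet processing would happen here;
  -- the real keep-alive is driven by the timer thread, not by every packet
  atomically $ writeTChan outbox "keep-alive\n"
  workerLoop inbox outbox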

Hello Joel, Wednesday, December 14, 2005, 7:55:36 PM, you wrote:

JR> In my current architecture I launch two threads per socket where JR> the socket reader places results in a TMVar and the socket writer JR> takes input from a TChan.

as i already said, you can write to the socket directly in your worker thread

JR> I also have a worker thread that does the JR> bulk of packet processing and a timer thread. The timer thread sleeps JR> for a few minutes and exits after posting a timeout event if it JR> hasn't been killed before.

you can use just one timeouts thread for all your bots. if this timeout is constant across the program run, then this thread will be very simple - just:

1) read from a Chan (yes, it is the case where using a Chan will be appropriate! ;)
2) wait until 9 or so minutes from the time when this message was sent
3) send a kill signal to the thread mentioned in the message

so, you will have only 2 threads. you can then try to play with combining socket reading and TMVar reading in one thread (btw, try to replace the TMVar with an MVar - maybe it will be better?). or, you can try to create one socket-reading thread, which will service all sockets. maybe this can be done with the help of the select() system call? it is a more "right way", but i don't know how this can be accomplished

-- Best regards, Bulat mailto:bulatz@HotPOP.com
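A minimal sketch of that single timeout thread, assuming a constant keep-alive period and that each request on the Chan carries the ThreadId to signal; the names are illustrative, not Bulat's code:

import Control.Concurrent
import System.Time

-- Each bot writes (timeOfRequest, itsThreadId) to the shared Chan.
-- Because the period is constant, requests arrive already ordered
-- by expiration, so the thread can simply sleep out the remainder.
timeoutThread :: Int -> Chan (ClockTime, ThreadId) -> IO ()
timeoutThread periodSecs chan = do
  (TOD sentSecs _, tid) <- readChan chan
  TOD nowSecs _ <- getClockTime
  let remaining = periodSecs - fromIntegral (nowSecs - sentSecs)
  if remaining > 0
    then threadDelay (remaining * 1000000)   -- threadDelay takes microseconds
    else return ()
  killThread tid
  timeoutThread periodSecs chan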

On Dec 14, 2005, at 6:06 PM, Bulat Ziganshin wrote:
as i already said, you can write to socket directly in your worker thread
True. 1 less thread to deal with... multiplied by 4,000.
you can use just one timeouts thread for all your bots. if this timeout is constant across program run, then this thread will be very simple - just:
Well, the bots may take a couple of hours to get on board. I don't think using one thread with a constant timeout is appropriate. This is also a keep-alive timeout, meaning that the bot sends a ping to the server whenever the timer fires. I figure I can have a single timer thread and a timer map keyed on ClockTime. I would try to get the min. key from the map every few seconds, compare it to the clock time, fire off the event as needed, remove the timer and repeat. This way I will have a single timer thread but as many timers as I need. Thanks, Joel -- http://wagerlabs.com/

On Wed, Dec 14, 2005 at 07:11:15PM +0000, Joel Reymont wrote:
I figure I can have a single timer thread and a timer map keyed on ClockTime. I would try to get the min. key from the map every few seconds, compare it to the clock time, fire off the event as needed, remove the timer and repeat.
You don't have to check "every few seconds". You can determine exactly how much you have to sleep - just check the timeout/event with the lowest ClockTime. Best regards Tomasz -- I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland
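Concretely, the sleep could be computed from the earliest entry, e.g. (a sketch assuming the ClockTime-keyed map Joel describes above):

import qualified Data.Map as M
import Control.Concurrent
import System.Time

-- Sketch: sleep exactly until the earliest timer instead of polling.
sleepUntilNext :: M.Map (ClockTime, String) (IO ()) -> IO ()
sleepUntilNext t
  | M.null t  = return ()
  | otherwise = do
      let (TOD nextSecs _, _) = fst (M.findMin t)
      TOD nowSecs _ <- getClockTime
      let delta = fromIntegral (nextSecs - nowSecs) * 1000000  -- microseconds
      if delta > 0 then threadDelay delta else return ()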

On Dec 14, 2005, at 7:48 PM, Tomasz Zielonka wrote:
You don't have to check "every few seconds". You can determine exactly how much you have to sleep - just check the timeout/event with the lowest ClockTime.
Right, thanks for the tip! I would need to wait a predefined amount of time when the map is empty, though. -- http://wagerlabs.com/

Hello Tomasz, Wednesday, December 14, 2005, 10:48:43 PM, you wrote: TZ> You don't have to check "every few seconds". You can determine TZ> exactly how much you have to sleep - just check the timeout/event with TZ> the lowest ClockTime. this scenario doesn't account for the fact that we can receive a new request while sleeping, and if this thread services different waiting periods, the new message may require an earlier answer TZ> On Wed, Dec 14, 2005 at 07:11:15PM +0000, Joel Reymont wrote:
I figure I can have a single timer thread and a timer map keyed on ClockTime. I would try to get the min. key from the map every few seconds, compare it to the clock time, fire off the event as needed, remove the timer and repeat.
i repeat my thought - if you have one or several fixed waiting periods (say, 1 sec, 3 sec and 1 minute), then you don't even need to sort requests - just use one waking thread for each waiting period and requests will arrive already sorted. in this way, you can really sleep as Tomasz suggests. Wednesday, December 14, 2005, 11:04:38 PM, you wrote: JR> Right, thanks for the tip! I would need to wait a predefined amount of JR> time when the map is empty, though. no, you just read the next message from the Chan (but don't use an MVar here! ;) -- Best regards, Bulat mailto:bulatz@HotPOP.com

Bulat, On Dec 14, 2005, at 9:00 PM, Bulat Ziganshin wrote:
TZ> You don't have to check "every few seconds". You can determine TZ> exactly how much you have to sleep - just check the timeout/ event with TZ> the lowest ClockTime.
this scenario doesn't account for the fact that we can receive a new request while sleeping, and if this thread services different waiting periods, the new message may require an earlier answer
The scenario above does account for the situation that you are describing. We will always retrieve the minimum key and will fire the timer as long as it has expired. My timers don't need to be precise so this works for me.

checkTimers :: IO ()
checkTimers = do
  t <- readMVar timers -- takes it and puts it back
  case M.size t of
    -- no timers
    0 -> threadDelay timeout
    -- some timers
    _ -> do
      let (key@(Timer time _), io) = M.findMin t
      TOD now _ <- getClockTime
      if (time <= now)
        then do
          stopTimer key
          try $ io -- don't think we care
          return ()
        else threadDelay timeout
  checkTimers
i repeat my thought - if you have one or several fixed waiting periods (say, 1 sec, 3 sec and 1 minute), then you don't even need to sort requests - just use one waking thread for each waiting period and requests will arrive already sorted. in this way, you can really sleep as Tomasz suggests
I do not have several fixed waiting periods, they are determined by the user. Joel -- http://wagerlabs.com/

Hello Joel, Friday, December 16, 2005, 3:22:46 AM, you wrote:
TZ> You don't have to check "every few seconds". You can determine TZ> exactly how much you have to sleep - just check the timeout/ event with TZ> the lowest ClockTime.
JR> The scenario above does account for the situation that you are JR> describing. to be exact - Tomasz's variant doesn't work properly in this situation, but your code (which does not use this technique) is ok
i repeat my thought - if you have one or several fixed waiting periods (say, 1 sec, 3 sec and 1 minute), then you don't even need to sort requests - just use one waking thread for each waiting period and requests will arrive already sorted. in this way, you can really sleep as Tomasz suggests
JR> I do not have several fixed waiting periods, they are determined by JR> the user. by the user of the library? by the poker player? what exactly do you mean? -- Best regards, Bulat mailto:bulatz@HotPOP.com

On Dec 16, 2005, at 1:41 PM, Bulat Ziganshin wrote:
JR> I do not have several fixed waiting periods, they are determined by JR> the user.
by the user of the library? by the poker player? what exactly do you mean?
By the user of the library. Timers are used imprecisely, to send a timeout event if the server did not respond in X seconds or to send something after Y seconds. Joel -- http://wagerlabs.com/

On Fri, Dec 16, 2005 at 04:41:05PM +0300, Bulat Ziganshin wrote:
Hello Joel,
Friday, December 16, 2005, 3:22:46 AM, you wrote:
TZ> You don't have to check "every few seconds". You can determine TZ> exactly how much you have to sleep - just check the timeout/ event with TZ> the lowest ClockTime.
JR> The scenario above does account for the situation that you are JR> describing.
to be exact - Tomasz's variant doesn't work properly in this situation, but your code (which does not use this technique) is ok
Well, what I said was just a sketch. Of course you have to somehow handle timeout requests coming during the sleep. Best regards Tomasz -- I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland

On Dec 14, 2005, at 7:48 PM, Tomasz Zielonka wrote:
You don't have to check "every few seconds". You can determine exactly how much you have to sleep - just check the timeout/event with the lowest ClockTime.
Something like this? Comments are welcome! It would be cool to not have to export and call initTimers somehow.

---
{-# OPTIONS_GHC -fglasgow-exts -fno-cse #-}

module Timer ( initTimers, startTimer, stopTimer ) where

import qualified Data.Map as M
import System.Time
import System.IO.Unsafe
import Control.Exception
import Control.Concurrent

--- Map timer name and kick-off time to action
type Timers = M.Map (ClockTime, String) (IO ())

timeout :: Int
timeout = 5000000 -- 1 second

{-# NOINLINE timers #-}
timers :: MVar Timers
timers = unsafePerformIO $ newMVar M.empty

--- Call this first
initTimers :: IO ()
initTimers = do
  forkIO $ block checkTimers
  return ()

--- Not sure if this is the most efficient way to do it
startTimer :: String -> Int -> (IO ()) -> IO ()
startTimer name delay io = do
  stopTimer name
  now <- getClockTime
  let plus = TimeDiff 0 0 0 0 0 delay 0
      future = addToClockTime plus now
  block $ do
    t <- takeMVar timers
    putMVar timers $ M.insert (future, name) io t

--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name = block $ do
  t <- takeMVar timers
  putMVar timers $ M.filterWithKey (\(_, k) _ -> k /= name) t

--- Tried to take care of exceptions here
--- but the code looks kind of ugly
checkTimers :: IO ()
checkTimers = do
  t <- takeMVar timers
  case M.size t of
    -- no timers
    0 -> do
      putMVar timers t
      unblock $ threadDelay timeout
    -- some timers
    n -> do
      let (key@(time, name), io) = M.findMin t
      now <- getClockTime
      if (time <= now)
        then do
          putMVar timers $ M.delete key t
          unblock io
        else do
          putMVar timers t
          unblock $ threadDelay timeout
  checkTimers

-- http://wagerlabs.com/

On 14.12 23:07, Joel Reymont wrote:
Something like this? Comments are welcome!
timeout :: Int
timeout = 5000000 -- 1 second
Is that correct?
{-# NOINLINE timers #-}
timers :: MVar Timers
timers = unsafePerformIO $ newMVar M.empty
--- Call this first
initTimers :: IO ()
initTimers = do
  forkIO $ block checkTimers
  return ()
Here is a nice trick for you:

{-# NOINLINE timers #-}
timers :: MVar Timers
timers = unsafePerformIO $ do
  mv <- newMVar M.empty
  forkIO $ block checkTimers
  return mv

initTimers thus goes away.
--- Not sure if this is the most efficient way to do it
startTimer :: String -> Int -> (IO ()) -> IO ()
startTimer name delay io = do
  stopTimer name
  now <- getClockTime
  let plus = TimeDiff 0 0 0 0 0 delay 0
      future = addToClockTime plus now
  block $ do
    t <- takeMVar timers
    putMVar timers $ M.insert (future, name) io t
I had code which used a global IORef containing the current time. It was updated once a second by a dedicated thread, but reading it was practically free. Depends how common getClockTime calls are. (See the sketch after this message.)
--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name = block $ do
  t <- takeMVar timers
  putMVar timers $ M.filterWithKey (\(_, k) _ -> k /= name) t
And slow. This is O(size_of_map)
--- Tried to take care of exceptions here --- but the code looks kind of ugly
Is there a reason you need block for checkTimers? What you certainly want to do is ignore exceptions from the timer actions. - Einar Karttunen
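A sketch of the global current-time IORef mentioned in Einar's message above (an illustration, not his original code), using the same NOINLINE/unsafePerformIO trick already seen in this thread:

import Data.IORef
import System.IO.Unsafe
import Control.Concurrent
import System.Time

{-# NOINLINE currentTime #-}
currentTime :: IORef ClockTime
currentTime = unsafePerformIO $ do
  now <- getClockTime
  ref <- newIORef now
  -- a dedicated thread refreshes the cached time once a second
  forkIO $ let loop = do threadDelay 1000000
                         t <- getClockTime
                         writeIORef ref t
                         loop
           in loop
  return ref

-- readIORef currentTime is now practically free compared to getClockTime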

Well, my understanding is that once I do a takeMVar I must do a putMVar under any circumstances. This is why I was blocking checkTimers. On Dec 15, 2005, at 12:08 AM, Einar Karttunen wrote:
Is there a reason you need block for checkTimers? What you certainly want to do is ignore exceptions from the timer actions.

On Thu, Dec 15, 2005 at 09:32:38AM +0000, Joel Reymont wrote:
Well, my understanding is that once I do a takeMVar I must do a putMVar under any circumstances. This is why I was blocking checkTimers.
Perhaps you could use modifyMVar: http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurren...

modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
A safe wrapper for modifying the contents of an MVar. Like withMVar, modifyMVar will replace the original contents of the MVar if an exception is raised during the operation.

modifyMVar :: MVar a -> (a -> IO (a, b)) -> IO b
A slight variation on modifyMVar_ that allows a value to be returned (b) in addition to the modified value of the MVar.

Best regards Tomasz -- I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland

Something like this. If someone inserts a timer while we are doing our checking we can always catch it on the next iteration of the loop.

--- Now runs unblocked
checkTimers :: IO ()
checkTimers = do
  t <- readMVar timers -- takes it and puts it back
  case M.size t of
    -- no timers
    0 -> threadDelay timeout
    -- some timers
    n -> do
      let (key@(time, name), io) = M.findMin t
      now <- getClockTime
      if (time <= now)
        then do
          modifyMVar_ timers $ \a -> return $! M.delete key a
          try $ io -- don't think we care
          return ()
        else threadDelay timeout
  checkTimers

On Dec 15, 2005, at 9:39 AM, Tomasz Zielonka wrote:
Perhaps you could use modifyMVar:
http://www.haskell.org/ghc/docs/latest/html/libraries/base/Control-Concurrent-MVar.html#v%3AmodifyMVar
modifyMVar_ :: MVar a -> (a -> IO a) -> IO ()
A safe wrapper for modifying the contents of an MVar. Like withMVar, modifyMVar will replace the original contents of the MVar if an exception is raised during the operation.

On Dec 15, 2005, at 12:08 AM, Einar Karttunen wrote:
timeout = 5000000 -- 1 second
Is that correct?
I think so. threadDelay takes microseconds.
Here is a nice trick for you:
Thanks!
--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name = block $ do
  t <- takeMVar timers
  putMVar timers $ M.filterWithKey (\(_, k) _ -> k /= name) t
And slow. This is O(size_of_map)
Any way to optimize it? I need timer ids so that I can remove a timer before it expires. And I need ClockTime as the key so that I don't have to wake up every second, etc. Joel -- http://wagerlabs.com/
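One conventional answer, not proposed in the thread but sketched here for illustration: keep a second map from name to firing time, so stopTimer can reconstruct the exact key and delete in O(log n) instead of filtering the whole map:

import qualified Data.Map as M
import System.Time

-- Sketch: pair the timer map with a name index so lookups by name are cheap.
data TimerState = TimerState
  { byKey  :: M.Map (ClockTime, String) (IO ())  -- ordered by firing time
  , byName :: M.Map String ClockTime             -- name -> its firing time
  }

stopTimer' :: String -> TimerState -> TimerState
stopTimer' name st =
  case M.lookup name (byName st) of
    Nothing   -> st
    Just time -> TimerState
      { byKey  = M.delete (time, name) (byKey st)
      , byName = M.delete name (byName st)
      }

startTimer would then insert into both maps, and checkTimers would still use findMin on byKey exactly as before.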

After a chat with Einar on #haskell I realized that I would have, say, 4k expiring timers and maybe 12k timers that are started and then killed. That would make a 16k element map on which 3/4 of the operations are O(n=16k) (Einar). I need a better abstraction, I guess. I also need to be able to find timers by id instead of by name like now, since each bot will use the same timer name for the same operation. I should have startTimer return X and then kill the timer using the same X. I'm looking for suggestions. Here's the improved code:

---
{-# OPTIONS_GHC -fglasgow-exts -fno-cse #-}

module Timer ( startTimer, stopTimer ) where

import qualified Data.Map as M
import System.Time
import System.IO.Unsafe
import Control.Exception
import Control.Concurrent

--- Map timer name and kick-off time to action
type Timers = M.Map (ClockTime, String) (IO ())

timeout :: Int
timeout = 5000000 -- 1 second

{-# NOINLINE timers #-}
timers :: MVar Timers
timers = unsafePerformIO $ do
  mv <- newMVar M.empty
  forkIO $ checkTimers
  return mv

--- Not sure if this is the most efficient way to do it
startTimer :: String -> Int -> (IO ()) -> IO ()
startTimer name delay io = do
  stopTimer name
  now <- getClockTime
  let plus = TimeDiff 0 0 0 0 0 delay 0
      future = addToClockTime plus now
  block $ do
    t <- takeMVar timers
    putMVar timers $ M.insert (future, name) io t

--- The filter expression is kind of long...
stopTimer :: String -> IO ()
stopTimer name = block $ do
  t <- takeMVar timers
  putMVar timers $ M.filterWithKey (\(_, k) _ -> k /= name) t

--- Now runs unblocked
checkTimers :: IO ()
checkTimers = do
  t <- readMVar timers -- takes it and puts it back
  case M.size t of
    -- no timers
    0 -> threadDelay timeout
    -- some timers
    _ -> do
      let (key@(time, _), io) = M.findMin t
      now <- getClockTime
      if (time <= now)
        then do
          modifyMVar_ timers $ \a -> return $! M.delete key a
          try $ io -- don't think we care
          return ()
        else threadDelay timeout
  checkTimers

-- http://wagerlabs.com/

One idea would be to index the timer on ThreadId and name and stick Nothing into the timer action once the timer has been fired/stopped. Since timers are restarted with the same name quite often, this would just keep one relatively big map in memory. The additional ThreadId would help distinguish the timers and avoid clashes. (A sketch of this keying scheme follows below.) On Dec 15, 2005, at 10:41 AM, Joel Reymont wrote:
After a chat with Einar on #haskell I realized that I would have, say, 4k expiring timers and maybe 12k timers that are started and then killed. That would make a 16k element map on which 3/4 of the operations are O(n=16k) (Einar).
I need a better abstraction, I guess. I also need to be able to find timers by id instead of by name like now, since each bot will use the same timer name for the same operation. I should have startTimer return X and then kill the timer using the same X.
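A sketch of the (ThreadId, name) keying scheme Einar describes above, assuming ThreadId's Ord instance; the expiration ordering that checkTimers needs is left out here, this only shows how the map's shape stays stable:

import qualified Data.Map as M
import Control.Concurrent

-- Firing or stopping a timer overwrites its slot with Nothing instead of
-- shrinking the map, so restarting a timer with the same name reuses the slot.
type Timers = M.Map (ThreadId, String) (Maybe (IO ()))

startTimer' :: ThreadId -> String -> IO () -> Timers -> Timers
startTimer' tid name io = M.insert (tid, name) (Just io)

stopTimer' :: ThreadId -> String -> Timers -> Timers
stopTimer' tid name = M.insert (tid, name) Nothing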

Does anyone have priority queue Haskell code that they would be willing to share or point me to? Thanks, Joel -- http://wagerlabs.com/

On Thursday 15 December 2005 11:50, Joel Reymont wrote:
Does anyone have priority queue Haskell code that they would be willing to share or point me to?
In fact I have one. It is based on 2-3 finger trees as described in a paper by Ralf Hinze. It is even better because it is an ordered sequence data type, rather than just a priority queue: all operations are constant time at both ends (max resp. min end). It is also quite memory efficient (last time I checked, it used about half the memory compared to Data.Set). I can send you the sources (today, evening, can't access it at work). Ben

I would appreciate it, thank you Ben! Is this the paper? http://www.informatik.uni-bonn.de/~ralf/publications/UU-CS-2001-09.pdf On Dec 15, 2005, at 12:26 PM, Benjamin Franksen wrote:
I can send you the sources (today, evening, can't access it at work).

On 12/15/05, Joel Reymont wrote:
Does anyone have priority queue Haskell code that they would be willing to share or point me to?
Thanks, Joel
Here's one I wrote some time ago: http://www.dtek.chalmers.se/~sylvan/PriorityQueue/ It's implemented as skewed binomial trees. The operations insert, findMin and meld are all O(1), deleteMin is O(log n). That's optimal (well, the theoretical optimum is that all operations are O(1) except one which is O(log n); it could theoretically be some other function that's O(log n), but for this implementation it's deleteMin). In practice, for fewer elements, it may be faster to use other structures though (like lazy pairing heaps). The constant term is kinda high (though not *that* bad). /S -- Sebastian Sylvan +46(0)736-818655 UIN: 44640862
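Purely to illustrate the interface being discussed (this is not Sebastian's library, just a generic sketch), a minimal pairing heap looks like this:

module PQueue (PQueue, empty, insert, findMin, deleteMin, meld) where

data PQueue k v = Empty | Node k v [PQueue k v]

empty :: PQueue k v
empty = Empty

insert :: Ord k => k -> v -> PQueue k v -> PQueue k v
insert k v q = meld (Node k v []) q

-- O(1) meld: attach the heap with the larger root under the smaller one
meld :: Ord k => PQueue k v -> PQueue k v -> PQueue k v
meld Empty q = q
meld q Empty = q
meld q1@(Node k1 v1 hs1) q2@(Node k2 v2 hs2)
  | k1 <= k2  = Node k1 v1 (q2 : hs1)
  | otherwise = Node k2 v2 (q1 : hs2)

findMin :: PQueue k v -> Maybe (k, v)
findMin Empty        = Nothing
findMin (Node k v _) = Just (k, v)

-- O(log n) amortised: pair up the children, then meld the pairs together
deleteMin :: Ord k => PQueue k v -> PQueue k v
deleteMin Empty         = Empty
deleteMin (Node _ _ hs) = mergePairs hs
  where
    mergePairs []           = Empty
    mergePairs [h]          = h
    mergePairs (h1:h2:rest) = meld (meld h1 h2) (mergePairs rest)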

On Thu, Dec 15, 2005 at 10:46:55AM +0000, Joel Reymont wrote:
One idea would be to index the timer on ThreadId and name and stick Nothing into the timer action once the timer has been fired/stopped. Since timers are restarted with the same name quite often this would just keep one relatively big map in memory. The additional ThreadId would help distinguish the timers and avoid clashes.
I don't know how you use your timers, but perhaps startTimer could return a cancel action? Its type would be

startTimer :: Int -> (IO ()) -> IO (IO ())

and you would use it like this

cancel <- startTimer delay action
...
cancel

How cancelling was implemented would be entirely startTimer's business. Best regards Tomasz -- I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland
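A sketch of what that could look like on top of the MVar-of-Map machinery used elsewhere in this thread; the map and counter are passed in explicitly only to keep the sketch self-contained, and this is not Tomasz's code:

import qualified Data.Map as M
import Data.IORef
import Control.Concurrent.MVar
import System.Time

type Key    = (Integer, Int)        -- (expiration in seconds, unique id)
type Timers = M.Map Key (IO ())

-- The caller never sees the key, only an opaque cancel action.
startTimer :: MVar Timers -> IORef Int -> Integer -> IO () -> IO (IO ())
startTimer timers counter seconds io = do
  TOD now _ <- getClockTime
  n <- atomicModifyIORef counter $ \x -> (x + 1, x)
  let key = (now + seconds, n)
  modifyMVar_ timers $ \t -> return $! M.insert key io t
  return (modifyMVar_ timers $ \t -> return $! M.delete key t)

Usage would then be: cancel <- startTimer timers counter 9 sendKeepAlive; ...; cancel.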

Here's the latest and greatest version, put together with Einar's help. The seconds portion of ClockTime and a counter are used as the key now and the counter wraps around. This would make two distinct timers even if their expiration times were the same.

{-# OPTIONS_GHC -fglasgow-exts -fno-cse #-}

module Timer ( startTimer, stopTimer ) where

import qualified Data.Map as M
import Data.IORef
import System.Time
import System.IO.Unsafe
import Control.Exception
import Control.Concurrent

type Timers = M.Map (Integer, Int) (IO ())

timeout :: Int
timeout = 5000000 -- 1 second

{-# NOINLINE timers #-}
{-# NOINLINE counter #-}
timers :: MVar Timers
counter :: IORef Int
(timers, counter) = unsafePerformIO $ do
  mv <- newMVar M.empty
  c <- newIORef 0
  forkIO $ checkTimers
  return (mv, c)

startTimer :: Integer -> (IO ()) -> IO (Integer, Int)
startTimer seconds io = do
  TOD now _ <- getClockTime
  let expiration = now + seconds
  id <- atomicModifyIORef counter $ \x -> (x + 1, x)
  modifyMVar_ timers $ \a -> return $! M.insert (expiration, id) io a
  return (expiration, id)

stopTimer :: (Integer, Int) -> IO ()
stopTimer key = modifyMVar_ timers $ \a -> return $! M.delete key a

checkTimers :: IO ()
checkTimers = do
  t <- readMVar timers -- takes it and puts it back
  case M.size t of
    -- no timers
    0 -> threadDelay timeout
    -- some timers
    _ -> do
      let (key@(time, _), io) = M.findMin t
      TOD now _ <- getClockTime
      if (time <= now)
        then do
          stopTimer key
          try $ io -- don't think we care
          return ()
        else threadDelay timeout
  checkTimers

-- http://wagerlabs.com/

Hello Joel, Thursday, December 15, 2005, 2:42:03 PM, you wrote: JR> Here's the latest and greatest version put together with Einar's help.

let's analyze execution of this thread. it has 2000-6000 events in its Map, with expiration times in the range 0-60 sec. it sleeps half a second, then wakes and finds/deletes minimal values from the map until all events that fall within this half second have been performed, and then sleeps again.

if half-second precision of performing events is appropriate for you, why not use a solution which holds all events for a given second in one list? you can use an array of such lists, or a map of lists, or even an ordered list of lists - it will contain only 60 elements at any time. the most advanced solution would be an array used as a round buffer, whose size == maximal event timing.

i still don't understand why the timings of your events may be different. you always tell us that in each run the timing is constant - 9 min, 1 min

-- Best regards, Bulat mailto:bulatz@HotPOP.com
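A minimal sketch of the per-second bucket / round-buffer idea (a simple timer wheel), assuming one-second resolution and that no delay exceeds the wheel size; the names and sizes are illustrative, not Bulat's code:

import Data.Array
import Data.IORef
import Control.Concurrent

wheelSize :: Int
wheelSize = 60   -- assumed maximum delay in seconds

-- the wheel: one list of actions per slot, plus the index of the current slot
newWheel :: IO (IORef (Array Int [IO ()], Int))
newWheel = newIORef (listArray (0, wheelSize - 1) (replicate wheelSize []), 0)

-- schedule an action 'delay' seconds from the current slot (delay < wheelSize)
schedule :: IORef (Array Int [IO ()], Int) -> Int -> IO () -> IO ()
schedule ref delay io =
  atomicModifyIORef ref $ \(arr, cur) ->
    let slot = (cur + delay) `mod` wheelSize
    in ((arr // [(slot, io : arr ! slot)], cur), ())

-- the single timer thread: tick once a second, run and clear one slot
runWheel :: IORef (Array Int [IO ()], Int) -> IO ()
runWheel ref = loop
  where
    loop = do
      threadDelay 1000000
      ios <- atomicModifyIORef ref $ \(arr, cur) ->
               ((arr // [(cur, [])], (cur + 1) `mod` wheelSize), arr ! cur)
      sequence_ ios
      loop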

On Dec 15, 2005, at 9:51 PM, Bulat Ziganshin wrote:
if half-second precision of performing events is appropriate for you, why not use a solution which holds all events for a given second in one list? you can use an array of such lists, or a map of lists, or even an ordered list of lists - it will contain only 60 elements at any time
I think the thread wakes up every 5 seconds. Do I have 0.5s in the code? Thanks, Joel -- http://wagerlabs.com/

Hello Joel, Wednesday, December 14, 2005, 7:55:36 PM, you wrote: JR> With a 1-minute keep-alive timeout the system starts to get stressed JR> almost right away. There's verbose logging going on and almost every JR> event/packet sent and received is traced. The extra logging of the JR> timeout events probably adds to the stress and so, I assume, do the JR> extra packets. oh, yes, i forgot to say that you can speed up logging by using a large buffer on the logger handle, say use: hSetBuffering logger (BlockBuffering (Just 4096)) and of course avoid logging to the screen -- Best regards, Bulat mailto:bulatz@HotPOP.com
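For instance, a block-buffered file logger might be set up like this (a sketch; the file name and function name are illustrative):

import System.IO

openLogger :: IO Handle
openLogger = do
  h <- openFile "bots.log" AppendMode
  hSetBuffering h (BlockBuffering (Just 4096))  -- flush in 4 KB chunks
  return h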

Here are statistics that I gathered. I'm almost done modifying the program to use 1 timer thread instead of 1 per bot, as well as writing to the socket from the writer thread. This should reduce the number of threads from 6k (2k x 3) to 2k plus change.

It appears that +RTS -k3k does make a difference. As per Simon, 2-4k avoids the thread being garbage collected because each thread gets its own block in the storage manager. Simon, did I get that right? BTW, how does garbage-collecting a thread work in this scenario? My threads are very long-running.

Total is the number of bots launched, lobby is how many bots connected to the lobby. Failed is mostly due to connection reset by peer errors. The Windows C++ server uses IOCP and running a firewall was apparently interfering with that somehow. I hate Windows :-(.

--- Test #1: +RTS -k3k as per Simon. Keep-alive timeout of 9 minutes.

Total: 1961, Lobby: 1961, Failed: 0
Total: 2000, Lobby: 2000, Failed: 1

This test went smoothly and got to 2k connections very quickly, maybe within 30 minutes or so. I did not gather CPU usage, etc. statistics.

--- Test #2: No thread stack increase, 1-minute keep-alive timeout, more network traffic.

With a 1-minute timeout things run veeery slow. 86Mb physical and 158Mb of VM with 1k bots, CPU 50-60%. Data sent/received is 60-70 packets and 6-7kb/sec. Killed after a while. The statistics are phys/VM, CPU usage in % and #packets/transfer speed.

Total: 1345, Lobby: 1326, Failed: 0, 102/184, 50%, 90/8kb
Total: 1395, Lobby: 1367, Failed: 2
Total: 1421, Lobby: 1394, Failed: 4
Total: 1490, Lobby: 1463, Failed: 4, 108/194, 50%, 110/11Kb
Total: 1574, Lobby: 1546, Failed: 4, 113/202, 50%, 116/11kb

--- Test #3: Rebuilding app with basic logging only (level 10). Still veeery slow. Started ~6pm.

Total: 121, Lobby: 118, Failed: 1
Total: 521, Lobby: 509, Failed: 13, 46/104, 20-30%, 35/3kb
Total: 1055, Lobby: 1044, Failed: 13, 94/168, 50%
Total: 1325, Lobby: 1313, Failed: 13
Total: 1566, Lobby: 1553, Failed: 13, 126/215, 70-80%
Total: 1692, Lobby: 1680, Failed: 13, 136/228, 80%
Total: 1728, Lobby: 1715, Failed: 13, 140/234, 85%
Total: 1746, Lobby: 1733, Failed: 13, 140/235, 50-85%, 6:39pm
Total: 1818, Lobby: 1805, Failed: 13, 145/240, 60-85%
Total: 1896, Lobby: 1883, Failed: 13, 153/250, 60-85%, 7:01pm
Total: 1933, Lobby: 1919, Failed: 13, 155/255, 70-85%, 7:12pm

The system has 216Mb of spare physical memory at this point but the app seems to spend most of the time collecting garbage.

Total: 1999, Lobby: 1986, Failed: 13, 162/262, 65-86%, 7:41pm

-- http://wagerlabs.com/
participants (6):
- Benjamin Franksen
- Bulat Ziganshin
- Einar Karttunen
- Joel Reymont
- Sebastian Sylvan
- Tomasz Zielonka