
Folks,

I have a huge space leak someplace and I suspect this code. The
SrvServerInfo data structure is something like 50K of compressed or
uncompressed byte data before unpickling. My thousands of bots issue
this request at least once, and I almost run out of memory with 100
bots on a 1Gb machine on FreeBSD. Do I need deepSeq somewhere below?

This is the read:

    read :: Handle -> (SSL, BIO, BIO) -> IO Command
    read h _ = do
        sa <- emptyByteArray 4
        hGetArray h sa 4
        (size', _) <- unpickle endian32 sa 0
        let size = fromIntegral $ size' - 4
        packet <- emptyByteArray size
        hGetArray h packet size
        unstuff packet 0

I suspect that I need to deepSeq cmd'' instead of return $! cmd''.

    unstuff :: MutByteArray -> Index -> IO Command
    unstuff array ix = do
        (kind, ix1) <- unpickle puCmdType array ix
        (cmd', _) <- unpickle (puCommand kind) array ix1
        case cmd' of
          InvalidCommand -> do
              fail $ "unstuff: Cannot parse " ++ show array
          SrvCompressedCommands sz bytes -> do
              bytes' <- uncompress bytes (fromIntegral sz)
              cmd'' <- unstuff bytes' 4
              return $! cmd''
          _ -> return cmd'

This is where the list of active tables is converted to a table id
list of [Word32]:

    pickTable _ filters (Cmd cmd@(SrvServerInfo {})) = do
        let tables = filter (tableMatches filters) $ activeTables cmd
            ids = map tiTableID tables
        case tables of
          [] -> fail $ "pickTable: No tables found: " ++ show filters
          _ -> do
              pop stoptimer "pickTable"
              return $! Eat $! Just $! Custom $! Tables $! ids

This is where the table id list of [Word32] is consumed:

    takeEmptySeat _ aff_id _ (Custom (Tables ids@(table:rest))) = do
        trace 85 $ "takeEmptySeat: " ++ show (length ids) ++ " tables found"
        trace 100 $ "takeEmptySeat: tables: " ++ showTables ids
        trace 85 $ "takeEmptySeat: trying table# " ++ show table
        w <- get
        put_ $ w { tables_to_try = rest }
        push "goToTable" $ goToTable table aff_id -- kick off goToTable
        return $ Eat $ Just Go

This is the SrvServerInfo structure:

    | SrvServerInfo
        { activeTables :: ![TableInfo], -- Word16/
          removedTables :: ![Word32], -- Word16/
          version :: !Int32
        }

And this is the table info itself:

    data TableInfo = TableInfo
        { tiAvgPot :: !Word64,
          tiNumPlayers :: !Word16,
          tiWaiting :: !Word16,
          tiPlayersFlop :: !Word8,
          tiTableName :: !String,
          tiTableID :: !Word32,
          tiGameType :: !GameType,
          tiInfoMaxPlayers :: !Word16,
          tiIsRealMoneyTable :: !Bool,
          tiLowBet :: !Word64,
          tiHighBet :: !Word64,
          tiMinStartMoney :: !Word64,
          tiMaxStartMoney :: !Word64,
          tiGamesPerHour :: !Word16,
          tiTourType :: !TourType,
          tiTourID :: !Word32,
          tiBetType :: !BetType,
          tiCantReturnLess :: !Word32,
          tiAffiliateID :: ![Word8],
          tiLangID :: !Word32
        } deriving (Show, Typeable)

Thanks, Joel

--
http://wagerlabs.com/
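A minimal sketch of what deepSeq-style forcing could look like for unstuff,
assuming the Control.DeepSeq API (NFData, force) and the types from the post.
The instances below are illustrative only; the real Command type has more
constructors than shown, and unstuffDeep is a hypothetical wrapper name:

    import Control.DeepSeq (NFData (..), force)

    -- Illustrative instance: the other fields are strict scalars, so they
    -- are evaluated as soon as the record itself is; only the list-valued
    -- fields need a deep walk.
    instance NFData TableInfo where
        rnf ti = rnf (tiTableName ti) `seq` rnf (tiAffiliateID ti)

    instance NFData Command where
        rnf (SrvServerInfo act removed ver) =
            rnf act `seq` rnf removed `seq` rnf ver
        rnf _ = ()   -- other constructors left at WHNF in this sketch

    -- return $! only evaluates to WHNF (the outermost constructor); force
    -- walks the whole structure, so no thunk keeps the 50K unpickling
    -- buffer alive after parsing.
    unstuffDeep :: MutByteArray -> Index -> IO Command
    unstuffDeep array ix = do
        cmd <- unstuff array ix
        return $! force cmd

The DeepSeq module Joel mentions would serve the same purpose; the point
either way is that $! alone stops at the SrvServerInfo constructor, leaving
the [TableInfo] spine and the String fields as thunks into the parse buffer.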

Hello Joel,

Friday, December 16, 2005, 2:44:00 PM, you wrote:

JR> I have a huge space leak someplace and I suspect this code. The
JR> SrvServerInfo data structure is something like 50K compressed or
JR> uncompressed byte data before unpickling. My thousands of bots issue
JR> this request at least once and I almost run out of memory with 100
JR> bots on a 1Gb machine on FreeBSD. Do I need deepSeq somewhere below?

1. Try using a 3-generation GC; this may greatly reduce GC times.

2. Manually add {-# UNPACK #-} to all simple fields (Ints, Words,
   Chars). Don't use "-funbox-strict-fields", because it can unbox
   whole structures instead of sharing them.

3. In my experience it's enough to mark all fields of heavily used
   structures as strict and then evaluate the top level of such a
   structure (using "return $! x"); after that the whole structure is
   fully evaluated. But when you use a list, you must either manually
   evaluate the whole list (using "return $! length xs") or use
   DeepSeq, as you suggest, because lists remain unevaluated despite
   all these strictness annotations.

4. You can try packed strings or unboxed arrays instead of lists. In
   my experience this can greatly reduce GC time, simply because such
   arrays don't need to be scanned on each GC.

5. What is the "uncompress" function here? Can I see its code?

6. Why does EACH bot receive and process this 50K structure itself?
   Can't that be done only once for all of them?

JR> do let tables = filter (tableMatches filters) $ activeTables cmd
JR>        ids = map tiTableID tables
JR>    return $! Eat $! Just $! Custom $! Tables $! ids

Here `ids` will definitely be unevaluated, except for the first
element. Add "return $! length ids" before the last line.

PS: last week I also fought the memory requirements of my own
program; as a result, they were reduced 3-4 times :)

--
Best regards,
 Bulat                            mailto:bulatz@HotPOP.com
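A sketch of that last suggestion applied to pickTable, reusing the
definitions from Joel's post (tableMatches, Eat, Custom, Tables, pop and
stoptimer are assumed from there):

    pickTable _ filters (Cmd cmd@(SrvServerInfo {})) = do
        let tables = filter (tableMatches filters) $ activeTables cmd
            ids    = map tiTableID tables
        case tables of
          [] -> fail $ "pickTable: No tables found: " ++ show filters
          _ -> do
              pop stoptimer "pickTable"
              -- Force the spine of ids so the whole [Word32] is built
              -- before cmd can be dropped. length only evaluates the
              -- spine; fully forcing the elements as well would take a
              -- strict fold or DeepSeq.
              _ <- return $! length ids
              return $! Eat $! Just $! Custom $! Tables $! ids

For point 2, the scalar fields of TableInfo would become, for example,
tiAvgPot :: {-# UNPACK #-} !Word64, which stores the Word64 inline in the
record instead of behind a pointer.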