Memoized game-tree evaluation

I'm working on defining a data type for representing game trees for chess or other two-person board games. My plan is to memoize the evaluations of a game state at each depth so that values don't need to be recomputed for future moves. Here's the definition I'm using so far:

    data GameState = GameState { color :: Color, board :: Board }

    data GameTree = GameTree
      { state    :: GameState
      , values   :: [Int]
      , branches :: [GameTree]
      }

    genGameTree :: GameState -> GameTree
    genGameTree st
      | finalState st = GameTree st (repeat (evalState st)) []
      | otherwise     = GameTree st vs bs
      where
        bs = map genGameTree (nextStates st)
        vs = evalState st
           : map (\d -> optimize (color st) (map ((!! d) . values) bs)) [0..]

    optimize :: Color -> ([Int] -> Int)
    optimize Black = minimum
    optimize White = maximum

I'd like to rewrite the generation of "vs" to evaluate using alpha-beta cutoffs, but I'm having trouble coming up with a way to do so without losing memoization. I don't know whether my failures so far are because alpha-beta pruning is necessarily incompatible with memoization, or whether I simply haven't found the right insight yet. Can anyone suggest a solution, or convince me there isn't one? Thanks in advance.
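For comparison, here is a minimal sketch of plain, non-memoized fail-hard alpha-beta over a standalone rose tree of static leaf values. The Tree type and the example below are illustrative stand-ins, not the GameTree type from the post:

```haskell
-- A standalone rose tree of static leaf scores (an illustrative type,
-- not the poster's GameTree).
data Tree = Leaf Int | Node [Tree]

-- alphabeta maximizing alpha beta tree: fail-hard alpha-beta search.
alphabeta :: Bool -> Int -> Int -> Tree -> Int
alphabeta _ _ _ (Leaf v) = v
alphabeta True a b (Node ts) = goMax a ts
  where
    goMax a' []         = a'
    goMax a' (t : rest)
      | a'' >= b        = a''     -- beta cutoff: remaining siblings pruned
      | otherwise       = goMax a'' rest
      where a'' = max a' (alphabeta False a' b t)
alphabeta False a b (Node ts) = goMin b ts
  where
    goMin b' []         = b'
    goMin b' (t : rest)
      | b'' <= a        = b''     -- alpha cutoff: remaining siblings pruned
      | otherwise       = goMin b'' rest
      where b'' = min b' (alphabeta True a b' t)
```

Note where the difficulty comes from: the value computed at each node depends on the alpha and beta bounds flowing in from its ancestors, so the same subtree can yield different (truncated) results in different contexts. That context-dependence is exactly what clashes with caching a single per-depth value per node.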

On Sat, Jul 25, 2009 at 10:08:20AM -0400, Matthew wrote:
> I'm working on defining a data type for representing game-trees for chess or other two-person board games. My plan is to memoize the evaluations of a game-state at each depth so that values don't need to be recomputed for future moves.
My suggestion is that it probably isn't worth it. You'll consume so much memory that your performance will actually get a lot worse than if you recomputed everything every time. Well, at least for chess this is true.

-- Felipe.
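To put rough numbers behind this, here is a hypothetical back-of-envelope calculation assuming chess's commonly cited average branching factor of about 35 (the factor is an assumption, not a figure from the thread):

```haskell
-- Assumed average branching factor for chess middlegame positions.
branching :: Integer
branching = 35

-- Total number of nodes in a full game tree retained down to the
-- given depth (in plies): sum of 35^d for d from 0 to depth.
nodes :: Integer -> Integer
nodes depth = sum [branching ^ d | d <- [0 .. depth]]
-- nodes 2 == 1261; nodes 6 == 1892332261
```

At only 6 plies the retained tree already approaches two billion nodes, which is why keeping every evaluated state resident tends to hurt far more than re-searching.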