
I'm working on defining a data type for representing game trees for chess and other two-player board games. My plan is to memoize the evaluation of a game state at each search depth, so that values don't need to be recomputed for future moves. Here's the definition I'm using so far:

    data GameState = GameState { color :: Color, board :: Board }

    data GameTree = GameTree { state :: GameState, values :: [Int], branches :: [GameTree] }

    genGameTree :: GameState -> GameTree
    genGameTree st
        -- a terminal position has the same value at every search depth
        | finalState st = GameTree st (repeat (evalState st)) []
        | otherwise     = GameTree st vs bs
      where
        bs = map genGameTree (nextStates st)
        -- values !! d is the minimax value searched d plies deep:
        -- depth 0 is the static evaluation, and depth d + 1 combines the
        -- children's depth-d values with min or max as appropriate
        vs = evalState st
           : map (\d -> optimize (color st) (map ((!! d) . values) bs)) [0..]

    optimize :: Color -> [Int] -> Int
    optimize Black = minimum
    optimize White = maximum

I'd like to rewrite the generation of "vs" to evaluate with alpha-beta cutoffs, but I'm having trouble coming up with a way to do so without losing the memoization. I don't know whether my failures so far are because alpha-beta pruning is necessarily incompatible with memoization, or whether I simply haven't found the right insight yet. Can anyone suggest a solution, or convince me there isn't one? Thanks in advance.
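
For concreteness, this is roughly the kind of depth-limited, fail-hard alpha-beta search I have in mind, written as a standalone function over the same tree type (the name alphaBeta and the exact bound handling are just a sketch to explain what I mean, not code I've committed to):

    -- alphaBeta depth alpha beta tree: value of tree searched depth plies
    -- deep, kept within the window [alpha, beta] (fail-hard)
    alphaBeta :: Int -> Int -> Int -> GameTree -> Int
    alphaBeta 0 _     _    t                  = evalState (state t)
    alphaBeta _ _     _    (GameTree st _ []) = evalState st
    alphaBeta d alpha beta (GameTree st _ bs) =
        case color st of
          White -> goMax alpha bs
          Black -> goMin beta  bs
      where
        -- maximising: raise alpha over the children and stop as soon as it
        -- reaches beta, so the remaining branches are never forced
        goMax a []       = a
        goMax a (t:rest)
            | a' >= beta = a'
            | otherwise  = goMax a' rest
          where a' = max a (alphaBeta (d - 1) a beta t)
        -- minimising: lower beta over the children and stop at alpha
        goMin b []       = b
        goMin b (t:rest)
            | b' <= alpha = b'
            | otherwise   = goMin b' rest
          where b' = min b (alphaBeta (d - 1) alpha b t)

As far as I can tell, the trouble is that with cutoffs the number computed for a node depends on the (alpha, beta) window it happened to be searched with, so it's no longer the single per-depth value I'm storing in "values" — which is what makes me suspect the two ideas may be at odds.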