
I've been reading the classic "Why Functional Programming Matters" paper [1] lately, looking in particular at the alpha-beta material. I've ported all of his code to Haskell, but I have a question. His algorithm takes a board position, builds a game tree from it, maps a static evaluation function over the tree, and then uses alpha-beta pruning to extract the true evaluation from the resulting Tree Int.

But in an actual application, what you really want is the best MOVE from the current position (or, better still, the so-called "principal variation": the best sequence of moves from the current position). Is there a clean way to recover this, short of mapping some function over the tree that annotates every node with a list of moves as well? Hughes seems to ignore the issue entirely, and I wonder whether that's because it gets ugly to implement.
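
To make the setup concrete, here is a rough sketch of the pipeline plus the only alternative I can see, namely choosing the move at the root only. The gametree/prune/maximise/minimise names follow the paper, but bestMove and the movesOf move generator are my own inventions, and I've written plain minimax rather than alpha-beta to keep it short:

{-# LANGUAGE DeriveFunctor #-}
module BestMove where

import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Rose tree, as in the paper.
data Tree a = Node a [Tree a] deriving Functor

-- Build the (potentially infinite) game tree from a position, given a
-- move generator that pairs each legal move with the resulting position.
gametree :: (pos -> [(move, pos)]) -> pos -> Tree pos
gametree movesOf p = Node p [gametree movesOf p' | (_, p') <- movesOf p]

-- Cut the tree off at a fixed depth, as in the paper.
prune :: Int -> Tree a -> Tree a
prune 0 (Node a _)  = Node a []
prune n (Node a ts) = Node a (map (prune (n - 1)) ts)

-- Plain minimax over a statically evaluated tree; Hughes's alpha-beta
-- versions of maximise/minimise could be dropped in here instead.
maximise, minimise :: Tree Int -> Int
maximise (Node x []) = x
maximise (Node _ ts) = maximum (map minimise ts)
minimise (Node x []) = x
minimise (Node _ ts) = minimum (map maximise ts)

-- Choose the move at the root only: pair each legal move with the
-- evaluation of the subtree it leads to (scored from the opponent's
-- point of view) and keep the highest-scoring one.  No node below the
-- root ever carries a move annotation.
bestMove :: Int                     -- search depth
         -> (pos -> [(move, pos)])  -- legal moves from a position
         -> (pos -> Int)            -- static evaluation
         -> pos
         -> Maybe (move, Int)
bestMove depth movesOf static p =
  case [ (m, minimise (fmap static (prune (depth - 1) (gametree movesOf p'))))
       | (m, p') <- movesOf p ] of
    []     -> Nothing
    scored -> Just (maximumBy (comparing snd) scored)

That avoids annotating the tree, but it only yields the first move; recovering the whole principal variation still seems to force the evaluator to return the line of play alongside the score, which is exactly the clutter I was hoping to avoid.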