
On Mon, 2007-09-03 at 15:35 -0400, Andrew Wagner wrote:
> I've been reading the classic "Why Functional Programming Matters" paper [1] lately, particularly looking at the alpha-beta stuff. I've ported all his code to Haskell, but I have a question.
> His algorithm takes a board position, builds a game tree from it, maps a static evaluation function over the tree, and then uses alpha-beta to find the real evaluation from a Tree Int. But in an actual application, what you really want is the best MOVE from the current position (or, better still, the so-called "principal variation": the best series of moves from the current position). Is there a good way to collect this, short of mapping some function over the tree that attaches a list of moves to every node as well?
> Hughes seems to ignore this completely, and I wonder if it's because it gets ugly to implement.
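
For reference, the pipeline you describe, written out with the names from the paper, looks roughly like this (just a sketch, with placeholder game-specific pieces filled in so it compiles; plain minimax here, since the paper later refines maximise/minimise into the alpha-beta versions):

-- Game-specific pieces: placeholders only, not from the paper.
type Position = ()

moves :: Position -> [Position]
moves _ = []            -- placeholder: the legal successor positions

static :: Position -> Int
static _ = 0            -- placeholder: static evaluation of a position

-- The tree machinery, roughly as in the paper:
data Tree a = Node a [Tree a]

reptree :: (a -> [a]) -> a -> Tree a
reptree f p = Node p (map (reptree f) (f p))

gametree :: Position -> Tree Position
gametree = reptree moves

maptree :: (a -> b) -> Tree a -> Tree b
maptree f (Node x ts) = Node (f x) (map (maptree f) ts)

prune :: Int -> Tree a -> Tree a   -- depth cut-off so minimax terminates
prune 0 (Node x _)  = Node x []
prune d (Node x ts) = Node x (map (prune (d - 1)) ts)

maximise, minimise :: Tree Int -> Int
maximise (Node n []) = n
maximise (Node _ ts) = maximum (map minimise ts)
minimise (Node n []) = n
minimise (Node _ ts) = minimum (map maximise ts)

evaluate :: Position -> Int
evaluate = maximise . maptree static . prune 5 . gametree
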
You have the current position, and you have all the moves you can make from it (conveniently provided by moves, no less). Evaluate gametree on the position that results from each of those moves and pick the best, as in the sketch below. There are issues with this, but it should be straightforward to label the "edges" and keep track of the path as you explore the tree.
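
Concretely, building on the sketch above, something like this (each successor is scored with minimise, on the assumption that static evaluates from the mover's point of view and it is the opponent's turn in the successor; and since moves returns positions rather than labelled moves, the "best move" comes back as the best successor position):

import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Score each successor position and keep the best-scoring one.
-- Assumes there is at least one legal move.  With a moves variant
-- returning (Move, Position) pairs, the same shape would return the
-- move itself, or could accumulate a principal variation.
bestSuccessor :: Position -> Position
bestSuccessor p =
  fst (maximumBy (comparing snd)
        [ (p', minimise (maptree static (prune 5 (gametree p'))))
        | p' <- moves p ])
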