lazy data structure for best-first search

I am looking for a good (preferably lazy) way to implement some kind of best-first search.

The problem is that expanding the 'best' node in the search space forces other 'inferior' nodes to expand too. So my function

    expand :: Node -> ([Node], Node -> [Node])

does not only return some successor nodes, but also a function to expand other nodes with certain characteristics. In effect, after one expansion I need to fold over my complete search space, possibly updating other nodes and recomputing the value of the cost function for the affected nodes.

At the moment I am using StateT monad transformers and retrieving the affected nodes from a map (to avoid walking the whole space), but this is not very elegant, and I doubt whether it is efficient either.

I have already read Spivey's paper "Algebras for combinatorial search". Unfortunately, he only proposes the use of heuristics and best-first search in combination with his framework as future work. Does anybody know some further papers on this topic?

Thanks a lot in advance,
Martin

On Wed, Jun 24, 2009 at 7:53 PM, Martin Hofmann <martin.hofmann@uni-bamberg.de> wrote:
I am looking for a good (preferably lazy) way to implement some kind of best-first search.
Here's what I came up with.

    bestFirst :: (Ord o) => (a -> o) -> (a -> [a]) -> [a] -> [a]
    bestFirst rate edges = go
      where
        go []     = []
        go (x:xs) = x : go (mergeOn rate (edges x) xs)

    mergeOn :: (Ord o) => (a -> o) -> [a] -> [a] -> [a]
    mergeOn f [] ys = ys
    mergeOn f xs [] = xs
    mergeOn f (x:xs) (y:ys)
      | f x <= f y = x : mergeOn f xs (y:ys)
      | otherwise  = y : mergeOn f (x:xs) ys

The preconditions for bestFirst rate edges xs are: map rate xs must be nondecreasing, and for all x, map rate (edges x) must be nondecreasing, with every element no less than rate x. Then I believe this is A*. At least in spirit it is, though there might be "too many merges" making the complexity worse. I'm not terribly good at analyzing those things.

Luke
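To see the sketch above in action, here is a self-contained version with a toy rating and successor function (the toy graph, toyEdges, and firstTen are my own illustration, not from the thread): rate is the identity and each node n expands to n+2 and n+3, so both preconditions hold.

```haskell
-- Luke's bestFirst/mergeOn, plus a toy example.
bestFirst :: Ord o => (a -> o) -> (a -> [a]) -> [a] -> [a]
bestFirst rate edges = go
  where
    go []     = []
    go (x:xs) = x : go (mergeOn rate (edges x) xs)

-- Merge two rate-sorted lists, preserving the sort order.
mergeOn :: Ord o => (a -> o) -> [a] -> [a] -> [a]
mergeOn _ [] ys = ys
mergeOn _ xs [] = xs
mergeOn f (x:xs) (y:ys)
  | f x <= f y = x : mergeOn f xs (y:ys)
  | otherwise  = y : mergeOn f (x:xs) ys

-- Toy successor function: successor ratings are nondecreasing and
-- no less than the node's own rating, as the preconditions require.
toyEdges :: Int -> [Int]
toyEdges n = [n + 2, n + 3]

-- The search space under 0 is infinite, but laziness lets us take a prefix:
firstTen :: [Int]
firstTen = take 10 (bestFirst id toyEdges [0])
```

Even though the tree under 0 is infinite, firstTen terminates and comes out nondecreasing, which is exactly what the lazy merge buys you.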

Thanks for the quick and concise answer. Maybe I am already overcomplicating things. However, your given preconditions are exactly what I cannot satisfy.
The preconditions for bestFirst rate edges xs are: map rate xs must be nondecreasing,
Here lies my problem: edges must also be applied to xs. So a quick fix of bestFirst would be the following (sortBy from Data.List, comparing from Data.Ord):

    bestFirst' :: (Ord o) => (a -> o) -> (a -> [a]) -> [a] -> [a]
    bestFirst' rate edges = go
      where
        go [] = []
        -- go (x:xs) = x : go (mergeOn rate (edges x) xs)
        go (x:xs) = x : go (mergeOn rate (edges x) (sortBy (comparing rate) (concatMap edges xs)))

And here I am afraid that my efficiency gets worse. Or do I worry too much?

Thanks
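For comparison, when the lazy sortedness preconditions cannot be met, the usual fallback is an explicit priority queue rather than re-sorting the whole frontier at every step. Here is a minimal sketch using Data.Map from containers as the queue (the name bestFirstQ and the queue representation are my own, not from the thread); each step costs O(log n) per queue operation:

```haskell
import qualified Data.Map as M

-- Best-first enumeration over an explicit priority queue, keyed by rating.
-- Note: duplicates are not detected; graphs with shared nodes would also
-- need a visited set.
bestFirstQ :: Ord o => (a -> o) -> (a -> [a]) -> [a] -> [a]
bestFirstQ rate edges start = go (foldr insertNode M.empty start)
  where
    insertNode n = M.insertWith (++) (rate n) [n]
    go q = case M.minViewWithKey q of
      Nothing            -> []
      Just ((_, ns), q') ->
        -- Emit all minimum-rated nodes, then enqueue their successors.
        ns ++ go (foldr insertNode q' (concatMap edges ns))
```

Because go is productive (it emits ns before touching the successors), this still streams lazily over an infinite search space, while keeping the frontier ordered without any per-step sort.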

Martin Hofmann wrote:
Thanks for the quick and concise answer. Maybe I am already overcomplicating things.
However, your given preconditions are exactly what I cannot satisfy.
The preconditions for bestFirst rate edges xs are: map rate xs must be nondecreasing,
Here lies my problem: edges must also be applied to xs. So a quick fix of bestFirst would be the following:

    bestFirst' :: (Ord o) => (a -> o) -> (a -> [a]) -> [a] -> [a]
    bestFirst' rate edges = go
      where
        go [] = []
        -- go (x:xs) = x : go (mergeOn rate (edges x) xs)
        go (x:xs) = x : go (mergeOn rate (edges x) (sortBy (comparing rate) (concatMap edges xs)))
And here I am afraid, that my efficiency gets worse.
Or do I worry too much?
If you have to re-sort the data at each step, then you will have to explore/generate all options (or near enough) in order to select the best first. This is a natural consequence of not having any "control" over the order of generation.

If you can maintain that the results of each step are stored in an order such that the priority function is monotone w.r.t. the input sequences, then you can start doing things like lazy cube pruning [1], which allows you to defer or prune sub-optimal options to the next layer when generating the best-first list. As the expectation of monotonicity falls, you end up exploring more suboptimal combinations unnecessarily.

[1] http://www.cis.upenn.edu/~lhuang3/huang-iwpt-correct.pdf

--
Live well,
~wren

On Wed, Jun 24, 2009 at 6:53 PM, Martin Hofmann wrote:
I am looking for a good (preferably lazy) way to implement some kind of best-first search.
So in fact, after one expansion, I need to fold over my complete search space, maybe updating other nodes and recompute the value of the cost function for the affected nodes.
A bit late, but it just dawned on me that, if I understand you correctly, I may have stumbled on the same problem when I was implementing the code here: http://blog.sigfpe.com/2009/07/monad-for-combinatorial-search-with.html

I tried to modify that code so that I could assign costs of type Float. But I found that it was computing costs for more of the graph than I wanted, and that for infinite search spaces I'd never get termination. The code I ended up writing in that post solves your problem, in effect by storing costs lazily. That way you can compare the cost of A and B without having to force a full computation of both costs, making it safe to fold over infinite search trees. In fact, my zipm function is essentially a fold with a lazy version of the mergeOn function that Luke Palmer suggests in another reply.

That code is best when the costs are all small non-negative integers, which may or may not be adaptable to your problem.

--
Dan
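The lazy-cost idea Dan describes can be illustrated with unary (Peano) naturals; this is my own sketch of the trick, not the actual code from the blog post. Comparing two costs forces only constructors up to the smaller one, so a finite cost can be compared against the cost of a non-terminating branch without diverging:

```haskell
-- Unary naturals: comparison is lazy, forcing only min(a,b)+1 constructors.
data Nat = Zero | Succ Nat deriving (Eq, Ord)

fromInt :: Int -> Nat
fromInt 0 = Zero
fromInt n = Succ (fromInt (n - 1))

-- The cost of a branch that never succeeds: an infinite tower of Succs.
infinity :: Nat
infinity = Succ infinity

-- Terminates even though infinity is infinite: the derived Ord peels one
-- Succ from each side until it reaches the Zero inside fromInt 3.
finiteBeatsInfinite :: Bool
finiteBeatsInfinite = fromInt 3 < infinity
```

The hedge Dan states still applies: comparing two infinite (or two very large) costs this way diverges or is slow, which is why the approach works best when costs are small non-negative integers.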

Thanks Dan, that gave me some new input I can continue working on.

Cheers,
Martin
participants (4)
- Dan Piponi
- Luke Palmer
- Martin Hofmann
- wren ng thornton