
The topic of HANSEI in Haskell does come up from time to time. In fact, there was a Haskell edition of the first version of HANSEI: http://okmij.org/ftp/kakuritu/probM.hs It was written to see how the code would look in Haskell, and whether memoization (inherent in GHC's lazy evaluation) might help. The code contains some of the tests from the HANSEI distribution.

The experience showed that lazy evaluation hurts -- a lot. The problem is not a lack of seq; adding seq would make things even worse. There is a bit of explanation at the end of probM.hs. The gist of the problem is that approximate inference (sampling, to be precise) trades space for time. The search tree is far too large to fit into memory, so it is better to recompute the values (the branches of the tree) than to keep storing them. That is precisely the opposite of the trade-off made by lazy evaluation. The end result is attempting to allocate gigabytes (really!), even for toy problems.

We can get around the problem by using thunks explicitly. But then we lose the remaining benefits of Haskell syntax (the ones that are left after we have to switch to monadic notation). OCaml becomes quite compelling then.

It must be stressed that laziness in general is _crucial_ for solving even moderately complex problems. However, the needed laziness is NOT of the kind provided by GHC. We need memoization specific to a possible world, rather than memoization that affects all possible worlds at once; GHC gives only the latter. The sort of laziness needed for non-deterministic programming calls for first-class store. It can be emulated, albeit imperfectly, for example as described in the ICFP09 paper; HANSEI uses a quite similar idea. Ideally, first-class memory would be provided as a built-in, since it requires interaction with the garbage collector.
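To make the explicit-thunk workaround concrete, here is a minimal sketch (my own illustration, not code from probM.hs): branches of the search tree are represented as functions of type `() -> Tree a`, so a branch is recomputed every time it is entered instead of being retained by GHC's sharing. The `flips` example space is hypothetical.

```haskell
-- A search tree whose branches are explicit thunks (() -> Tree a):
-- each time a branch is explored it is recomputed, not retained,
-- trading time for space.
data Tree a = Leaf a | Node (() -> Tree a) (() -> Tree a)

fmapTree :: (a -> b) -> Tree a -> Tree b
fmapTree f (Leaf x)   = Leaf (f x)
fmapTree f (Node l r) = Node (\() -> fmapTree f (l ()))
                             (\() -> fmapTree f (r ()))

-- Depth-first enumeration forces each branch on demand; once a
-- branch has been traversed, nothing pins it in memory.
enumerate :: Tree a -> [a]
enumerate (Leaf x)   = [x]
enumerate (Node l r) = enumerate (l ()) ++ enumerate (r ())

-- A toy space of n coin flips: 2^n outcomes, built via thunks.
flips :: Int -> Tree [Bool]
flips 0 = Leaf []
flips n = Node (\() -> fmapTree (False:) (flips (n - 1)))
               (\() -> fmapTree (True:)  (flips (n - 1)))

main :: IO ()
main = print (length (enumerate (flips 10)))  -- 1024 outcomes
```

Note the cost of this encoding: every constructor and combinator must thread the `() ->` wrapper by hand, which is exactly the loss of Haskell's syntactic convenience mentioned above, and at that point OCaml's `fun () -> ...` is no worse.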
Greg Morrisett has written extensively on this topic and done a few implementations; see, for example, http://www.eecs.harvard.edu/~greg/papers/jgmorris-callcs.ps Sadly, the topic has since been forgotten.