
On Thursday 08 July 2004 20:40, paolo veronelli wrote:
> Most of my imperative pieces of software find their answers by searching
> around in some space of solutions, and my favourite approximation
> algorithms use random distributions.
> Is Haskell the wrong language for these, since I'm obliged to code them
> inside monads, losing the benefits of laziness?
No, there's no reason to lose laziness.

First off, suppose your algorithm is a loop that needs one random number per
iteration. An obvious structure would be:

loop :: Solution -> IO Solution
loop old_solution = do
  a <- randomIO
  let new_solution = next a old_solution
  if goodEnough new_solution
    then return new_solution
    else loop new_solution

With this structure, the loop is strict and imperative, but the 'next'
function which improves the solution is pure, lazy code. Except in trivial
cases, the next function is where all the action is, so you still get most
of the benefit of laziness.

We can do better though. Using two functions from System.Random, it's easy
to get an infinite list of random numbers:

randomRsIO :: IO [Int]
randomRsIO = do
  g <- getStdGen
  return (randoms g)

Let's rewrite the above code to use a list of random numbers and an initial
solution to search for a better solution:

genSolutions :: [Int] -> Solution -> [Solution]
genSolutions (r:rs) s0 = s0 : genSolutions rs (next r s0)
-- a slightly more cryptic implementation using foldl and map is possible
-- and probably preferable

findGoodSolution :: [Int] -> Solution -> Solution
findGoodSolution rs s0 = head (dropWhile (not . goodEnough) solutions)
  where
    solutions = genSolutions rs s0

main = do
  rs <- randomRsIO
  print (findGoodSolution rs initial_solution)

Hope this helps,

--
Alastair Reid
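
For what it's worth, here is one way the pieces above might fit together in
a single runnable file. The Solution type, next, and goodEnough below are
toy placeholders invented for illustration (the post leaves them abstract),
and genSolutions is written with scanl, which is one possible reading of the
"more cryptic" list-combinator version hinted at above:

import System.Random (getStdGen, randoms)

-- Toy problem (assumed for illustration only): a Solution is just an Int,
-- 'next' climbs towards a target by a random step, and 'goodEnough'
-- checks whether the target has been reached.
type Solution = Int

next :: Int -> Solution -> Solution
next r s = s + 1 + (r `mod` 3)     -- random step of 1, 2 or 3

goodEnough :: Solution -> Bool
goodEnough s = s >= 100

-- Lazy stream of candidate solutions: scanl folds one random number into
-- the current solution at each step, so only as many candidates are
-- computed as findGoodSolution actually inspects.
genSolutions :: [Int] -> Solution -> [Solution]
genSolutions rs s0 = scanl (flip next) s0 rs

findGoodSolution :: [Int] -> Solution -> Solution
findGoodSolution rs s0 = head (dropWhile (not . goodEnough) (genSolutions rs s0))

main :: IO ()
main = do
  g <- getStdGen
  print (findGoodSolution (randoms g) 0)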