
The old article Preventing memoization in (AI) search problems
http://okmij.org/ftp/Haskell/index.html#memo-off
deals with the problem, explaining the trick to deliberately confuse GHC so that it won't perform memoization (sharing). Yes, I know how bad this confusing of GHC sounds, which is part of my argument that lazy evaluation by default was a mistake.
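A minimal sketch of the kind of unwanted sharing the article discusses; the module, names, and sizes below are hypothetical illustrations, not code taken from the article:

    module Main where

    -- A large, lazily produced "search space", hidden behind a dummy
    -- argument in the hope that each caller rebuilds it instead of
    -- sharing one copy across the whole run.
    searchSpace :: Int -> [Int]
    searchSpace _ = [1 .. 10000000]
    {-# NOINLINE searchSpace #-}

    main :: IO ()
    main = do
      -- Two traversals, standing in for two passes of an
      -- iterative-deepening search.
      print (length (searchSpace 1))
      print (sum    (searchSpace 2))

    -- Compiled with -O, GHC's full-laziness pass floats the constant
    -- [1 .. 10000000] out of searchSpace to the top level, so the list
    -- is shared between the two traversals and is retained in full
    -- after the first one.  That sharing is exactly the memoization
    -- the article sets out to prevent.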

On Mon, Dec 21, 2015 at 08:17:10PM +0900, Oleg wrote:
> The old article Preventing memoization in (AI) search problems
> http://okmij.org/ftp/Haskell/index.html#memo-off
> deals with the problem, explaining the trick to deliberately confuse GHC
> so that it won't perform memoization (sharing). Yes, I know how bad this
> confusing of GHC sounds, which is part of my argument that lazy
> evaluation by default was a mistake.

Hi Oleg,

As I explained here
https://mail.haskell.org/pipermail/haskell-cafe/2013-February/106673.html
-fno-full-laziness fixes the space leak in your iterative deepening
example. This isn't a problem with laziness; it's a problem with
performing a time optimization that is a space pessimization. In the
absence of the "optimization" there is no problem.

Tom
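Applied to the hypothetical sketch above, the flag Tom mentions can also be restricted to a single module with an OPTIONS_GHC pragma (again a sketch, not code from either post):

    {-# OPTIONS_GHC -fno-full-laziness #-}
    -- With full laziness disabled for this module, the constant list
    -- stays inside searchSpace, so each call rebuilds it and the two
    -- traversals in main no longer retain a shared copy.
    module Main where

    searchSpace :: Int -> [Int]
    searchSpace _ = [1 .. 10000000]
    {-# NOINLINE searchSpace #-}

    main :: IO ()
    main = do
      print (length (searchSpace 1))
      print (sum    (searchSpace 2))

The same effect can be had for a whole build by passing the flag on the command line, e.g. ghc -O -fno-full-laziness Main.hs.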
participants (2)
- Oleg
- Tom Ellis