
This is an opportunity cost minimization problem:
http://www.haskell.org/pipermail/haskell-cafe/2009-November/068435.html

One of the worst (most unoptimized and conflated) solutions is to force some determinism at the low-level language architecture, specifically targeted at achieving determinism in some domain at the higher level, which doesn't actually achieve it, since the aliasing error just gets pushed around anyway. Given the power that lazy evaluation and pure referential transparency add to algorithm expression, composability, OOP, and optimization opportunities (in many domains, e.g. speed, algorithmic resonance, concurrency, etc.), it is analogous to pulling all your teeth out so you won't get cavities:
http://www.haskell.org/pipermail/haskell-cafe/2009-November/068432.html

Thus my analysis so far is that Haskell has it correct, and I am suggesting that the missing optimization is to let us automatically put an upper bound on the space non-determinism (the topic of this thread); beyond that bound, the programmer can optimize with profiling and strategic placement of seq and type constraints [1].

[1] Hudak, Hughes, Peyton Jones, Wadler (2007). "A History of Haskell: Being Lazy with Class" (¶32: §10.3, "Controlling evaluation order", and §10.2, "Space profiling").
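To make the composability claim concrete, here is a minimal Haskell sketch (my own illustration, not code from the linked threads; the names are made up). Because evaluation is driven by demand, independently written producers and transformations compose over an infinite structure without either side knowing how much of it will be consumed:

  -- An infinite, lazily generated list of naturals.
  naturals :: [Integer]
  naturals = [0 ..]

  -- Independently written transformations, composed pointwise.
  squaresOfEvens :: [Integer] -> [Integer]
  squaresOfEvens = map (^ 2) . filter even

  main :: IO ()
  main =
    -- Only the first five elements are ever computed, because
    -- demand, not the producer, drives evaluation.
    print (take 5 (squaresOfEvens naturals))
    -- prints [0,4,16,36,64]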
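And here is the space non-determinism itself, again as my own minimal sketch rather than anything from the thread: a lazy left fold accumulates a chain of unevaluated thunks whose size grows with the input, while a strategically placed seq (packaged up as Data.List.foldl', which forces the accumulator at each step) restores a constant-space bound:

  import Data.List (foldl')

  -- Builds the thunk chain (((0+1)+2)+3)... before any addition
  -- happens, so peak space grows with the length of the list;
  -- whether GHC's strictness analysis rescues it depends on
  -- optimization flags, which is exactly the unpredictability
  -- at issue in this thread.
  lazySum :: [Integer] -> Integer
  lazySum = foldl (+) 0

  -- foldl' forces the accumulator at each step (seq under the
  -- hood), so the fold runs in constant space for any input.
  strictSum :: [Integer] -> Integer
  strictSum = foldl' (+) 0

  main :: IO ()
  main = print (strictSum [1 .. 1000000])  -- 500000500000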