
On Wed, Jun 4, 2008 at 9:48 AM, Loup Vaillant wrote:
> I see a problem with this particular fusion, though: it changes the space complexity of the program from linear to constant. Therefore, with some programs, relying on this "optimization" can be a matter of correctness, not just performance. So if it is not clear when the compiler will perform it, I'd rather not use it. (A counterexample is tail calls, which are rather easily deducible, at least in a strict context.)
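For concreteness, here is a sketch of the kind of program at stake; it is an illustrative example assuming GHC's foldr/build fusion, not code from the original message. If the fusion fires, the pipeline compiles to a single loop; if it does not, the run can need space linear in n.

    -- sumSquares is an invented name for this illustration.
    -- With fusion (e.g. GHC at -O), sum, map, and the enumeration
    -- typically fuse into one constant-space loop. Without it, the
    -- run can take linear space, for instance as a chain of
    -- unevaluated thunks in sum's accumulator.
    sumSquares :: Int -> Int
    sumSquares n = sum (map (^ 2) [1 .. n])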
Haskell is not required to be lazy, only non-strict. That is, an implementation of the language is free to re-evaluate expressions bound in let clauses, which can change the time and space complexity of a program. To me, time and space complexity is a matter of performance, not correctness: given unbounded time and space, you still arrive at the same result regardless of the complexity. What makes the asymptotic class more blessed than the associated constants?

Still, I see your point. If optimizations cannot be guaranteed (that is, if the conditions under which they fire are brittle), then the language becomes yet harder to predict, which is not something Haskell needs! It is hard to turn down an optimization that accidentally improves your program asymptotically, though. I wonder what can be said about "stable" optimizations, ones that are insensitive to their environment in some sense.

Luke
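An illustrative sketch of the non-strict versus lazy point above (the names are invented for this example, not taken from the thread): under call-by-need the let-bound value is computed at most once, while a merely non-strict (call-by-name) implementation is free to recompute it at each use, changing the running time but never the result.

    -- 'expensive' is bound once in a let. Call-by-need (lazy)
    -- evaluation shares it, computing sum [1 .. n] at most once;
    -- call-by-name, which is also non-strict, may recompute it for
    -- each of the two uses below. The result is the same either way.
    twice :: Int -> Int
    twice n =
      let expensive = sum [1 .. n]
      in  expensive + expensive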