
On Tue, Jan 31, 2012 at 1:22 PM, Gregory Collins wrote:
> I completely agree on the first part, but deepseq is not a panacea either. It's a big hammer, and overuse can sometimes cause wasteful O(n) no-op traversals of already-forced data structures. I also definitely wouldn't go so far as to say that you can't do serious parallel development without it!
I agree. The only time I ever use deepseq is in Criterion benchmarks, as it's a convenient way to make sure that the input data is evaluated before the benchmark starts. If you want a data structure to be fully evaluated, evaluate it as it's created, not after the fact.
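A minimal sketch of the "evaluate it as it's created" advice (the `mean` function and its names are hypothetical, chosen just for illustration): instead of building a lazy structure and running deepseq over it afterwards, force the accumulator at each step of the fold, so the result is already evaluated by the time the fold finishes.

```haskell
{-# LANGUAGE BangPatterns #-}
import Data.List (foldl')

-- Compute a mean in one pass. foldl' forces each intermediate pair
-- to WHNF, and the bang patterns force its components, so no chain
-- of thunks is ever built -- and no after-the-fact deepseq is needed.
mean :: [Double] -> Double
mean xs = s / fromIntegral n
  where
    (s, n) = foldl' step (0, 0 :: Int) xs
    step (!acc, !len) x = (acc + x, len + 1)

main :: IO ()
main = print (mean [1 .. 1000000])  -- prints 500000.5
```

Without the bang patterns, `foldl'` would only force the outer pair constructor, and the sums inside it would still accumulate as thunks.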
The only real solution to problems like these is a thorough understanding of Haskell's evaluation order, and of how and why call-by-need differs from call-by-value. This is partly a pedagogical problem, but it's also genuinely hard -- even Haskell experts like the guys at GHC HQ sometimes spend a lot of time chasing down space leaks. Haskell makes a trade-off here: reasoning about denotational semantics is much easier than in most other languages because of purity, but non-strict evaluation makes reasoning about operational semantics a little bit harder.
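The denotational/operational split above can be seen in the textbook example (a sketch, not from the original thread): two folds that denote the same number but behave very differently at runtime under call-by-need.

```haskell
import Data.List (foldl')

-- Under call-by-need, `foldl (+) 0 [1..n]` first builds the whole
-- thunk chain (((0 + 1) + 2) + ...) of size O(n), and only starts
-- adding when the result is demanded -- the classic space leak.
-- foldl' forces the accumulator at each step and runs in constant
-- space. Denotationally the two are equal; operationally they differ.
leaky, strict :: Integer
leaky  = foldl  (+) 0 [1 .. 100000]
strict = foldl' (+) 0 [1 .. 100000]

main :: IO ()
main = print (leaky == strict)  -- prints True
```

Purity guarantees `leaky == strict`; understanding why one of them can blow the heap is exactly the operational reasoning the paragraph is talking about.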
+1. We can do a much better job of teaching how to reason about performance. A few rules of thumb get you a long way. I'm (slowly) working on improving the state of affairs here. -- Johan