
Hello Louis, Sunday, February 22, 2009, 2:59:05 AM, you wrote:
Sebastian, that's not Bulat's point. He's saying that if we make that optimization in Haskell, we should at least make the same optimization in GCC for a fair comparison. (Though I'm not entirely sure that optimization would be of any use to GCC; that's a linguistic concern, no more.)
:))) It was *ghc* that replaced the 64 additions with just one: sum += [n..n+k] ==> sum += n*k+const. My first program had this optimization by mistake. I've found a way to avoid it completely; Don found a way to use it only in Haskell :)
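The rewrite Bulat describes can be sketched like this (a minimal illustration; the function names are mine, and the exact closed form GHC emits may differ in detail): the 64 additions over a statically known range collapse into one closed-form expression.

```haskell
-- sumLoop is the literal range sum (64 additions when k = 63);
-- sumClosed is the closed-form equivalent the optimizer can emit:
--   n + (n+1) + ... + (n+k)  =  (k+1)*n + k*(k+1)/2
sumLoop :: Int -> Int -> Int
sumLoop n k = sum [n .. n + k]

sumClosed :: Int -> Int -> Int
sumClosed n k = (k + 1) * n + k * (k + 1) `div` 2

main :: IO ()
main = print (all (\(n, k) -> sumLoop n k == sumClosed n k)
                  [(0, 63), (3, 63), (100, 7)])
-- prints True
```

This is exactly why the transformation matters for the benchmark: once the loop is gone, the program's running time no longer measures what the benchmark claims to measure.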
His point is valid. But the results Don obtained *without* this optimization are valid comparisons, and the results obtained with it are useful for other reasons.
Of course these results are useful! My own goal was just to make a fair comparison. It bothers me when people say that ghc should be used for things like video codecs based on these "let's optimize only for Haskell" pseudo-benchmarks. If Don had omitted the unoptimized gcc results from his chart, and you hadn't written those "conclusions" based on the chart, I would never have made this comment.
Louis Wasserman wasserman.louis@gmail.com
On Sat, Feb 21, 2009 at 5:55 PM, Sebastian Sylvan wrote:
On Sat, Feb 21, 2009 at 11:35 PM, Bulat Ziganshin wrote:
Hello Louis,
Sunday, February 22, 2009, 2:30:23 AM, you wrote:
Yes, you are right. Don also compared the results of the 64x-reduced computation with the full one. Do you think those results are more fair?
Yes. Clearly so. It still computes the result from scratch - it just uses a trick which generates better code. This is clearly a useful and worthwhile exercise, as it shows (A) a neat trick with TH, (B) a reasonably practical way to produce fast code for the critical parts of a Haskell app, and (C) a motivating example for implementing a compiler optimization to do it automatically.
Just outputting the precomputed result means nothing.
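The TH route mentioned above can be sketched roughly as follows (a minimal illustration under my own assumptions, not Don's actual code; `unrolledSum` is a name I've made up): build the fully unrolled expression at compile time and let GHC's simplifier fold it.

```haskell
{-# LANGUAGE TemplateHaskell #-}
import Language.Haskell.TH

-- Generate the expression  n + (n+1) + ... + (n+k)  as plain literals.
-- Spliced as $(unrolledSum n k) from another module, the whole chain of
-- additions is visible to GHC at compile time and can be folded away.
unrolledSum :: Integer -> Integer -> Q Exp
unrolledSum n k =
  foldr (\i e -> [| $(litE (integerL i)) + $e |]) [| 0 |] [n .. n + k]

main :: IO ()
main = do
  -- TH splices must live in a separate module from their definition,
  -- so here we only render the generated expression to show its shape
  e <- runQ (unrolledSum 1 4)
  putStrLn (pprint e)
```

The point of the trick is that the generated code still describes the full computation; it is the compiler, not the programmer, that reduces it.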
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com