Hi Janek,

[snip]
On 18/10/12 10:23, Janek S. wrote:
> during the past few days I spent a lot of time trying to figure out how to write
> Criterion benchmarks so that results don't get skewed by lazy evaluation. I want to
> benchmark different versions of an algorithm doing numerical computations on a vector.
> For that I need to create an input vector containing a few thousand elements. I decided
> to create random data, but that doesn't really matter - I could just as well have used
> infinite lists instead of random ones. The question is how to generate the data so that
> its evaluation won't be included in the benchmark.

Something like this might work; I'm not sure what the canonical way is.
---8<---
import Control.DeepSeq (rnf)
import Control.Exception (evaluate)
import Criterion.Main

main = do
  ...
  let input = L.dataBuild gen
  -- force the input to normal form before the benchmarks run,
  -- so its construction cost is not measured
  evaluate (rnf input)
  defaultMain
    [ ...
    , bench "Lists" $ nf L.benchThisFunction input
    , ...
    ]
---8<---
I did use something like this in practice here:
https://gitorious.org/bitwise/bitwise/blobs/master/extra/benchmark.hs#line155
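To see the pattern in isolation, here is a minimal, self-contained sketch of the same idea without Criterion. `dataBuild` is a hypothetical stand-in for your generator (the real one builds random data), and the final `print` stands in for `defaultMain`; the key line is `evaluate (rnf input)`, which forces the whole list to normal form before anything that consumes it runs.

```haskell
import Control.DeepSeq (rnf)
import Control.Exception (evaluate)

-- Hypothetical stand-in for the expensive input generator (L.dataBuild gen).
dataBuild :: Int -> [Double]
dataBuild n = map fromIntegral [1 .. n]

main :: IO ()
main = do
  let input = dataBuild 1000
  -- Fully evaluate the list here, outside the timed region,
  -- so lazy construction cost cannot leak into the measurements.
  evaluate (rnf input)
  -- Stand-in for defaultMain [...]; by this point 'input' is
  -- already in normal form.
  print (sum input)
```

`rnf` comes from the deepseq package (a GHC boot library), and `evaluate` from base; for unboxed `Data.Vector.Unboxed` vectors the forcing is largely redundant, since constructing the vector already evaluates the elements.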
Thanks,
Claude
--
http://mathr.co.uk
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe