
On Thu, Oct 18, 2012 at 4:23 AM, Janek S. wrote:
Dear list,
During the past few days I have spent a lot of time trying to figure out how to write Criterion benchmarks so that the results don't get skewed by lazy evaluation. I want to benchmark different versions of an algorithm doing numerical computations on a vector. For that I need to create an input vector containing a few thousand elements. I decided to create random data, but that really doesn't matter - I could just as well have used infinite lists instead of random ones.
My problem is that I am not certain if I am creating my benchmark correctly. I wrote a function that creates data like this:
dataBuild :: RandomGen g => g -> ([Double], [Double])
dataBuild gen = (take 6 $ randoms gen, take 2048 $ randoms gen)
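
(The message doesn't show where gen comes from; one minimal way to supply it, assuming System.Random and an arbitrary seed:)

import System.Random (StdGen, mkStdGen)

-- An arbitrary fixed seed, so the benchmark input is reproducible.
gen :: StdGen
gen = mkStdGen 42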
And I create the benchmark like this:
bench "Lists" $ nf L.benchThisFunction (L.dataBuild gen)
The argument value will be evaluated by the first run of the benchmark, and then laziness will keep the value around for the next few hundred runs that the "bench" function performs. So the evaluation will be included in the benchmark, but if "bench" is doing enough trials it will be statistical noise.

Antoine
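
(One common way to take the input's construction out of the measurement entirely is to force the value to normal form before handing it to "bench"; a minimal sketch using deepseq's force and Control.Exception's evaluate, reusing the hypothetical definitions from the sketch above:)

import Control.DeepSeq (force)
import Control.Exception (evaluate)
import Criterion.Main (bench, defaultMain, nf)
import System.Random (mkStdGen)

main :: IO ()
main = do
  -- Fully evaluate the input pair first, so none of its construction
  -- cost is attributed to the benchmarked function.
  input <- evaluate (force (dataBuild (mkStdGen 42)))
  defaultMain [ bench "Lists" $ nf benchThisFunction input ]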