
On 23.10.2017 at 23:49, Florian Weimer wrote:
> It is possible that HotSpot optimizes away most of the code, perhaps even the entire array allocation. Meaningful benchmarking is quite difficult due to such effects.
> Does the execution time even change if you increase the number of iterations (inner or outer)?
I tried that in Java, with this code:

    private static final int SIZE = 4_000_000;
    private static final int ITERATIONS = 20;

    public static void main(String[] args) {
        for (int i = 1; i < ITERATIONS; i++) {
            long start = System.currentTimeMillis();
            Long[] a = new Long[SIZE];
            for (int j = 0; j < SIZE; j++) {
                a[j] = 1L;
            }
            long end = System.currentTimeMillis();
            System.out.println((end - start) + " ms");
        }
    }

Fiddling with SIZE and ITERATIONS led me to the following conclusions:

1) The first two iterations are warm-up; afterwards the times are pretty constant (about 7-8 ms for SIZE = 100_000).

2) Cranking ITERATIONS up gave semi-regular slow iterations. I blame GC.

3) Per-loop run time seems to be roughly linear in array size, which is a strong indicator that the assignments are not optimized away. (In fact, a running time of 1-2 ms is a strong indicator that the software really is burning real CPU cycles.)

It is actually possible that the standard HotSpot JVM is faster than Haskell for this type of task - i.e. plain array manipulation with primitive values, and a working set small enough to make GC irrelevant. It's the kind of workload where HotSpot has all the information needed to fully optimize everything short of eliminating the loop altogether (which is usually not worth it for a just-in-time compiler targeting a mutable language).

Make things a bit more complicated and real-world, and this advantage will shrink considerably: throw in a few classes, subclasses that override methods (the Haskell equivalent would be unevaluated expressions in the list), iterate over a list instead of an array (which will actually make the Haskell program faster if the list spine can be optimized away), that kind of stuff.

That said, HotSpot is a pretty tough opponent to beat. It has person-decades if not person-centuries of optimization under its belt, after all.
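As a side note, the benchmark above fills a boxed Long[] rather than a primitive long[], so part of what it measures is autoboxing. A minimal, self-contained sketch of the same loop over a primitive array is below (the class name and the checksum trick are my own additions, not from the original post; the checksum merely keeps the array observable so the JIT has less room to treat the loop as dead code):

```java
public class FillBenchmark {
    private static final int SIZE = 4_000_000;
    private static final int ITERATIONS = 20;

    public static void main(String[] args) {
        for (int i = 0; i < ITERATIONS; i++) {
            long start = System.nanoTime();
            // primitive array: one flat memory block, no per-element boxing
            long[] a = new long[SIZE];
            for (int j = 0; j < SIZE; j++) {
                a[j] = 1L;
            }
            long end = System.nanoTime();
            // print a value derived from the array so the work stays observable
            System.out.println(((end - start) / 1_000_000) + " ms (checksum "
                    + a[SIZE - 1] + ")");
        }
    }
}
```

Whether this version is measurably faster than the Long[] one is not obvious in advance: `Long.valueOf(1L)` returns a cached object, so the boxed loop allocates no new Longs either, but it still writes references instead of raw longs. Measuring both variants would be the way to settle it.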
So does GHC, so maybe the comparison isn't so unfair after all - but declaring "Haskell is slow" on the basis of a single microbenchmark simply carries no weight.

Still, it is possible that the JVM is indeed faster for some kinds of workload. If such a workload is actually relevant, it might point to a valid optimization opportunity.

Just my 2 cents :-)

Regards,
Jo