
Hello,

IMO, the conclusion that the slowdown comes from cache misses, caused by several threads sharing memory and/or consuming a lot of memory, is very probably right, especially on Intel CPUs with a shared L2 cache. I have several examples where adding threads increases total time consumption significantly (new time = number of threads * old time).

My personal conclusion: use only linear (tail-)recursive functions, so that they can be optimized; use Int instead of Integer where possible; and avoid traversing data structures unless they are very small (L2 caches are only a few MB). That way, cache misses are minimized for all threads. (A minimal sketch of the kind of loop I mean is at the end of this mail.) Moreover, the OS itself needs some CPU time from time to time (causing cache refills/misses), so on a quad core I devote one core to the OS and the rest to the computation, which brings a noticeable improvement and more accurate measurements.

Regards,
Dusan

Bulat Ziganshin wrote:
Hello Andrew,
Tuesday, March 3, 2009, 9:21:42 PM, you wrote:
I just tried it with GHC 6.10.1. Two capabilities is still slower. (See attachments. Compiled with -O2 -threaded.)
I don't think so:
Total time 4.88s ( 5.14s elapsed)
Total time 7.08s ( 4.69s elapsed)
So with 1 thread the wall-clock time is about 5 seconds, and with 2 threads it is about 4.7 seconds.
The CPU time spent increased with 2 threads; this indicates that you are either using a hyperthreaded/SMT-capable CPU or that speed is limited by memory access operations.
So, my conclusion: this benchmark is limited by memory latency, so it cannot be efficiently multithreaded.
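
P.S. To make the advice above a bit more concrete, here is a minimal
sketch (my own illustration, not the benchmark discussed in this thread;
"sumSquares" is just a made-up example). It shows a strict, linear
tail-recursive loop over Int with no data structure to traverse, so the
working set stays far below the size of the L2 cache:

    {-# LANGUAGE BangPatterns #-}
    module Main (main) where

    -- Sum of squares of 1..n as a strict, linear tail recursion over Int.
    -- Int instead of Integer keeps the arithmetic unboxed with -O2, and
    -- no list is built, so the loop touches only a couple of machine words.
    sumSquares :: Int -> Int
    sumSquares n = go 0 1
      where
        go !acc !i
          | i > n     = acc
          | otherwise = go (acc + i * i) (i + 1)

    main :: IO ()
    main = print (sumSquares 1000000)

Compile with "ghc -O2 -threaded" and, on a quad core, run with
"+RTS -N3" so that one core stays free for the OS, as described above.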