
Dear List,

I have parallelized RSA encryption and decryption using the second-generation strategies of the Haskell programming language. What I measured was nearly a 10x speed-up (on a quad-core CPU with 8 MB of cache) when evaluating a 125K plaintext with a 180-bit encryption key in parallel, compared with the serial evaluation, which seems abnormal. My first thought is that my serial timings must simply be wrong.

My question is: is there any difference between the serial and parallel evaluation of an arbitrary computation in terms of cache usage in the design of the Haskell compiler? If not, what other reasons could explain it? I would greatly appreciate it if somebody could pinpoint the issue.
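To make the setup concrete, the parallel part follows roughly the pattern sketched below; the helper names and the chunk size of 512 are placeholders rather than my exact code (decryption is the same with the private exponent). The second-generation strategies live in Control.Parallel.Strategies:

  import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

  -- Square-and-multiply modular exponentiation: powMod b e m == b^e `mod` m.
  powMod :: Integer -> Integer -> Integer -> Integer
  powMod _ 0 _ = 1
  powMod b e m
    | even e    = half
    | otherwise = ((b `mod` m) * half) `mod` m
    where half = powMod ((b * b) `mod` m) (e `div` 2) m

  -- Textbook RSA on one block: c = m^e mod n (no padding, benchmarking only).
  encryptBlock :: Integer -> Integer -> Integer -> Integer
  encryptBlock e n m = powMod m e n

  -- Serial reference version.
  encryptSerial :: Integer -> Integer -> [Integer] -> [Integer]
  encryptSerial e n = map (encryptBlock e n)

  -- Parallel version: one spark per chunk of blocks, each chunk forced
  -- to normal form with rdeepseq.
  encryptParallel :: Integer -> Integer -> [Integer] -> [Integer]
  encryptParallel e n blocks =
    map (encryptBlock e n) blocks `using` parListChunk 512 rdeepseq

Many thanks in advance.

Kind regards,
Burak.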

On 03/05/2012 04:58 PM, burak ekici wrote:
Dear List,
I have parallelized RSA encryption and decryption using the second-generation strategies of the Haskell programming language.
What I measured was nearly a 10x speed-up (on a quad-core CPU with 8 MB of cache) when evaluating a 125K plaintext with a 180-bit encryption key in parallel, compared with the serial evaluation, which seems abnormal.
The explanation for this kind of thing is usually that all the working data suddenly fits within the per-CPU L2 cache when split up.
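One way to check is to run the same -threaded build of the parallel code on a single capability and compare that against the serial numbers, e.g. (the module name here is just an illustration):

  $ ghc -O2 -threaded -rtsopts RSA.hs
  $ ./RSA +RTS -N1 -s    # parallel code path, one core
  $ ./RSA +RTS -N4 -s    # parallel code path, four cores

If the -N1 run is already much faster than the original serial program, the gain comes from the chunked evaluation pattern and its cache behaviour rather than from parallelism itself; +RTS -s prints the run-time and GC statistics for comparison.

Regards,

Bardur Arantsson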