
On Tue, Oct 28, 2008 at 08:55:59PM +0100, Henning Thielemann wrote:
> > For example, is integer arithmetic faster or slower than
> > floating-point?
> In principle integer arithmetic is simpler and faster. But your
> processor may well do both in the same amount of time.
Indeed. Usually there are more integer arithmetic units than floating
point units, so more integer operations can be done in parallel.
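
Of course, the honest answer is to measure on your own machine. Here
is a minimal micro-benchmark sketch, assuming the criterion package is
available (the function names are just illustrative):

  import Criterion.Main (bench, defaultMain, whnf)
  import Data.List (foldl')

  -- Strict sums over a million values: one integer, one floating-point.
  sumInts :: Int -> Int
  sumInts n = foldl' (+) 0 [1 .. n]

  sumDoubles :: Int -> Double
  sumDoubles n = foldl' (+) 0 [1 .. fromIntegral n]

  main :: IO ()
  main = defaultMain
    [ bench "Int sum"    (whnf sumInts    1000000)
    , bench "Double sum" (whnf sumDoubles 1000000)
    ]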
> > Is addition faster or slower than multiplication?
> Multiplication can be done almost as fast as addition. This is so
> because a multiplier essentially has to sum its partial products, and
> a sum of n numbers can be computed in hardware much faster than n
> sequential additions.
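
The same idea shows up at the source level: summing by pairing
neighbours gives an addition chain of depth O(log n) instead of O(n).
A toy sketch (treeSum is an illustrative name, not a library function):

  -- Sum by pairing neighbours; the additions at each level are
  -- independent, so the critical path is only O(log n) additions deep.
  treeSum :: Num a => [a] -> a
  treeSum []  = 0
  treeSum [x] = x
  treeSum xs  = treeSum (pairUp xs)
    where
      pairUp (a:b:rest) = (a + b) : pairUp rest
      pairUp rest       = rest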
> > How much slower are the trigonometric functions?
> Division and square root are similar in complexity. exp, log and
> arctan can be implemented with a digit-by-digit algorithm that works
> like schoolbook division (CORDIC), and are similar in performance.
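
For the curious, here is a toy CORDIC sketch (vectoring mode, which
computes arctan (y/x) for x > 0). It cheats by using Double throughout;
real implementations use fixed-point integers, so each scaling by
2^(-i) is just a shift and the atan values come from a small table:

  -- Each step uses only additions, a scaling by 2^(-i) and one table
  -- constant; y is driven to zero while z accumulates the angle.
  cordicAtan :: Double -> Double -> Double
  cordicAtan x0 y0 = go x0 y0 0 0
    where
      steps = 40 :: Int
      go x y z i
        | i >= steps = z
        | y > 0      = go (x + y*t) (y - x*t) (z + a) (i + 1)
        | otherwise  = go (x - y*t) (y + x*t) (z - a) (i + 1)
        where
          t = 2 ^^ negate i   -- 2^(-i): a right shift in fixed point
          a = atan t          -- a precomputed table entry in hardware

  -- e.g. cordicAtan 1 1 ~> 0.7853981..., i.e. pi/4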
Last I looked (which was quite a while back, but considerably more
recent than the 68k or Z80...), floating point division was the
surprise slow operation (close to the cost of a transcendental), with
square root being 5-10 times faster. Generally, floating point
multiplies and adds have a throughput of 1 or 2 per clock cycle, with
most modern processors having a fused multiply-add. It does pay to
reduce the number of divides and other expensive floating point
operations... but probably not if you're writing in Haskell. :)

David
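
P.S. A concrete instance of the divide-reduction point, in Haskell for
once. A sketch with illustrative names; note that GHC won't rewrite
the first form into the second by itself, because the two can round
differently:

  scaleSlow, scaleFast :: Double -> [Double] -> [Double]
  scaleSlow d = map (/ d)                      -- one divide per element
  scaleFast d = let r = recip d in map (* r)   -- one divide in total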