Hi,

I discovered that in base, logBase, the function representing arbitrary logarithms, is defined in terms of two applications of log. log, the function representing the natural logarithm, just passes responsibility on to some primitives. That is fine, mathematically speaking, but from an implementation standpoint I wonder why one would do that.
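
For reference, the definition in base (the Floating class in GHC.Float) is, as far as I can tell, essentially

    logBase x y = log y / log x

with log for Double and Float handed off to primitives.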

The logarithm that should be the fastest and most precise one for a CPU to approximate is the one to base two. In fact, if one already has a floating-point representation, it should be almost ridiculously easy to compute from the exponent.
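
To illustrate what I mean, here is a small sketch of my own (untested, just using the RealFloat methods exponent and significand) of how a base-two logarithm falls out of the representation:

    -- For positive, normalised x we have  x == significand x * 2^(exponent x)
    -- with significand x in [0.5, 1), so the integer part of log2 is free:
    floorLog2 :: Double -> Int
    floorLog2 x = exponent x - 1              -- floor (logBase 2 x) for x > 0

    -- The fractional part only needs a logarithm of a value in [1, 2):
    log2 :: Double -> Double
    log2 x = fromIntegral (exponent x - 1)
           + log (2 * significand x) / log 2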

I suppose it's probably a good idea to use primitive functions for the harder cases to get some speed-up. What I don't see is why the choice seems to have been an xor instead of an and: why rely on the primitives exclusively rather than also special-casing base two.
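
Concretely, what I imagine is a hypothetical variant along these lines (made-up name, purely a sketch):

    -- Keep the generic definition, but short-circuit base two using the
    -- decomposition sketched above instead of two primitive calls:
    logBase' :: Double -> Double -> Double
    logBase' 2 y = fromIntegral (exponent y - 1)
                 + log (2 * significand y) / log 2
    logBase' b y = log y / log b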

I am by absolutely no means an expert on these things, so I am probably missing something here. Is there actually a good reason for this choice? For example, will GHC optimize such logarithms anyway?

Cheers,

MarLinn