
On 01/13/2014 05:21 PM, Kyle Van Berendonck wrote:
> Hi,
> I'd like to work on the primitives first; they are relatively easy to implement. Here's how I figure it:
> The internal representation of the floats in Cmm is a Rational (a ratio of Integers), so they have "infinite precision". I can implement all the constant folding by writing my own operations on these Rationals; e.g., ** takes the power of the numerator and denominator and reconstructs a new Rational, log takes the difference between the logs of the numerator and denominator, etc. This is all very easy to fold.
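A minimal sketch of the exact folding described above, for the one case that actually stays within Rational, an integer exponent (powRat is a hypothetical name, not anything in GHC):

    import Data.Ratio (numerator, denominator, (%))

    -- Fold an integer power of a Rational constant exactly: raise the
    -- numerator and denominator separately, so nothing is rounded.
    powRat :: Rational -> Integer -> Rational
    powRat r n
      | n >= 0    = (numerator r ^ n) % (denominator r ^ n)
      | otherwise = recip (powRat r (negate n))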
What about sin(), etc.? I don't think identities will get you out of computing at least some irrational numbers. (Maybe I'm missing your point?)
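To make that concrete: sin of a non-zero rational is irrational, so any Rational "result" must already embed a rounding decision. A hypothetical fallback that detours through Double shows the problem, since it bakes one target's 53-bit rounding into the supposedly exact representation:

    -- There is no exact answer to fold to; going through Double fixes the
    -- precision (and one target's rounding) before the "final stage":
    sinRat :: Rational -> Rational
    sinRat = toRational . (sin :: Double -> Double) . fromRational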
> Since the size of floating point constants is more of an architecture-specific thing
IEEE 754 is becoming more and more ubiquitous. As far as I know, Haskell Float is always IEEE 754 32-bit binary floating point and Double is IEEE 754 64-bit binary floating point, on machines that support this (including x86_64, ARM, and sometimes x86). Let's not undermine this progress.
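This is easy to confirm from the RealFloat class in the Prelude; on an IEEE target, GHC reports the binary32/binary64 parameters:

    -- Prints (True,2,24) for Float and (True,2,53) for Double on IEEE
    -- hardware: IEEE conformance, radix 2, and 24/53 mantissa digits.
    main :: IO ()
    main = do
      print (isIEEE (0 :: Float),  floatRadix (0 :: Float),  floatDigits (0 :: Float))
      print (isIEEE (0 :: Double), floatRadix (0 :: Double), floatDigits (0 :: Double))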
> and floats don't wrap around like integers do, it would make more sense (in my opinion) to only reduce the value to the architecture-specific precision (or clip it to a NaN or such) in the **final** stage, as opposed to trying to emulate the behavior of a double native to the architecture (which is a very hard thing to do, and results in precision errors).
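In Haskell terms, the final-stage reduction quoted above amounts to a single fromRational at the very end, which (as far as I know) GHC implements with correct round-to-nearest:

    -- Keep the constant exact through all the folding; round exactly once
    -- when the target type is finally known.
    finalStage :: Rational -> Double
    finalStage = fromRational

    -- That single step is where precision drops: finalStage (1 % 10)
    -- prints as 0.1, but its exact value is
    -- 3602879701896397 % 36028797018963968.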
GCC uses MPFR to exactly emulate the target machine's rounding behaviour.
> the real question is: do people want precision errors when they write literals in code?
Yes. Look at GCC: if you don't pass -ffast-math (which says you don't care whether floating-point rounding behaves as specified), you get the same floating-point behaviour with and without optimizations. This is IMHO even more important for Haskell, where we tend to believe in deterministic pure code.

-Isaac
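A concrete case of the difference, assuming GHC's usual Double semantics: folding exactly in Rational and rounding once disagrees with what the unoptimized program computes, so optimization would change observable results.

    import Data.Ratio ((%))

    -- At run time each literal is rounded to a Double first, and the
    -- Double addition rounds a second time:
    runtime :: Double
    runtime = 0.1 + 0.2                      -- 0.30000000000000004

    -- Folding exactly (1/10 + 2/10 = 3/10) and rounding once at the end
    -- gives a different Double:
    folded :: Double
    folded = fromRational (1 % 10 + 2 % 10)  -- 0.3

    main :: IO ()
    main = print (runtime == folded)         -- False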