
On Tue, Sep 21, 2010 at 12:08, Daniel Fischer wrote:
Endianness only matters when marshaling bytes into a single value -- Data.Binary.Get/Put handles that. Once the data is encoded as a Word, endianness is no longer relevant.
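A minimal sketch of that point, using Put/Get from the binary package: the byte order is fixed by which primitive you pick, so the same Word64 round-trips on any host.

```haskell
import qualified Data.ByteString.Lazy as BL
import Data.Binary.Get (runGet, getWord64be, getWord64le)
import Data.Binary.Put (runPut, putWord64be, putWord64le)
import Data.Word (Word64)

main :: IO ()
main = do
  let x = 2 ^ 62 :: Word64
      be = runPut (putWord64be x)  -- bytes 40 00 00 00 00 00 00 00
      le = runPut (putWord64le x)  -- bytes 00 00 00 00 00 00 00 40
  print (BL.unpack be)
  print (BL.unpack le)
  -- decoding with the matching endianness recovers x on any host:
  print (runGet getWord64be be == x && runGet getWord64le le == x)
```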
I mean, take e.g. 2^62 :: Word64. If you poke that to memory on a big-endian machine, you'd get the byte sequence 40 00 00 00 00 00 00 00, while on a little-endian one you'd get 00 00 00 00 00 00 00 40, right?
If both bit patterns are interpreted the same as doubles: sign bit = 0, exponent bits = 0x400 = 1024, mantissa = 0, thus yielding 1.0*2^(1024 - 1023) = 2.0, fine. But if on a little-endian machine the floating-point handling is not little-endian, the number is interpreted as sign bit = 0, exponent bits = 0, mantissa = 0x40, i.e. a subnormal with value 0x40 * 2^(-52) * 2^(-1022) = 2^(-1068), havoc.
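The two readings can be checked directly; here is a sketch using castWord64ToDouble from GHC.Float (available in base since 4.10) to reinterpret the bits:

```haskell
import GHC.Float (castWord64ToDouble)

main :: IO ()
main = do
  -- the intended reading of 2^62 :: Word64 as an IEEE754 double:
  print (castWord64ToDouble 0x4000000000000000)  -- 2.0
  -- the byte-swapped reading: exponent bits all zero, mantissa 0x40,
  -- a subnormal equal to 0x40 * 2^(-52) * 2^(-1022) = 2^(-1068)
  print (castWord64ToDouble 0x0000000000000040 == encodeFloat 1 (-1068))
```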
I simply didn't know whether that could happen. According to http://en.wikipedia.org/wiki/Endianness#Floating-point_and_endianness it could. On the other hand, "no standard for transferring floating point values has been made. This means that floating point data written on one machine may not be readable on another", so if it breaks on weird machines, it's at least a general problem (and not Haskell's).
Oh, I misunderstood the question -- you're asking about architectures on which floating-point and fixed-point numbers use a different endianness? I don't think it's worth worrying about, unless you want to use Haskell for number crunching on a PDP-11. If you do need to implement IEEE754 parsing for unusual endians (like 3-4-1-2), parse the word yourself and then use 'wordToFloat' and friends to convert it.
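As a sketch of that last suggestion: assemble the Word64 by hand from the bytes in whatever order the wire format uses, then reinterpret the bits. The 3-4-1-2-style layout below (16-bit words in reversed order, each word big-endian internally) and the names fromBytesBE and decodeMiddleEndian are hypothetical, purely for illustration; castWord64ToDouble stands in for the wordToFloat family.

```haskell
import Data.Bits (shiftL, (.|.))
import Data.Word (Word8, Word64)
import GHC.Float (castWord64ToDouble)

-- Fold bytes into a Word64, most significant byte first.
fromBytesBE :: [Word8] -> Word64
fromBytesBE = foldl (\acc b -> acc `shiftL` 8 .|. fromIntegral b) 0

-- Hypothetical middle-endian layout: reorder the four 16-bit
-- words back to big-endian, then reinterpret the bit pattern.
decodeMiddleEndian :: [Word8] -> Double
decodeMiddleEndian [a,b,c,d,e,f,g,h] =
  castWord64ToDouble (fromBytesBE [g,h,e,f,c,d,a,b])
decodeMiddleEndian _ = error "need exactly 8 bytes"

main :: IO ()
main =
  -- 2.0 is 40 00 00 00 00 00 00 00 big-endian; in this layout the
  -- 40 00 word lands at the end:
  print (decodeMiddleEndian [0,0,0,0,0,0,0x40,0])  -- 2.0
```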