Hi Matt,
On 2019-02-27 4:31 PM, Matt Peddie wrote:
> For me [gcc] returns 32767 if I enable optimizations and 0 if I don't.
Yes, I get 0 without optimizations. Looking at the generated
assembly (with gcc -S), I can see that the non-optimized code
converts the double to a 32-bit integer using a hardware
instruction (cvttsd2si), but the optimized code uses a
precomputed value. It's strange that the precomputed value doesn't
match, but as Ian said, all bets are off anyway.
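For concreteness, here is a minimal sketch of the kind of program I
assume we're talking about (the exact source and input value are my
guesses, not taken from the thread). Converting an out-of-range double
directly to int16_t is undefined behaviour in C, which is why gcc is
free to answer differently at -O0 and -O2:

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        double d = 1e10;          /* hypothetical value, far outside int16_t's range */
        int16_t n = (int16_t)d;   /* undefined behaviour (C11 6.3.1.4p1) */
        printf("%d\n", n);        /* anything goes: 0, 32767, ... */
        return 0;
    }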
According to one authority, the conversion produces the 32-bit value
0x80000000. When truncated to int16_t, this is 0, since the low 16
bits of 0x80000000 are all zero.
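A tiny check of that arithmetic (my sketch; the narrowing itself is
implementation-defined, but this is what typical two's-complement
targets do):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        int32_t indefinite = INT32_MIN;          /* 0x80000000, the "integer indefinite" */
        int16_t narrowed = (int16_t)indefinite;  /* keeps the low 16 bits, which are all zero */
        printf("%d\n", narrowed);                /* prints 0 */
        return 0;
    }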
> Clang returns 0 independent of any flag changes I tried.
Clang's code uses a precomputed value even without optimization.
Looking at GHC's generated Core, I can see that it converts the Double
to an Int and then narrows that down to Int16.
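In C terms, that two-step conversion would look roughly like the
sketch below (my analogy, assuming a 64-bit Int; it isn't GHC's actual
code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        double d = 1e10;                 /* hypothetical input: fits in 64 bits, not in 16 */
        int64_t wide = (int64_t)d;       /* step 1: Double -> Int (well-defined for this value) */
        int16_t narrow = (int16_t)wide;  /* step 2: narrow to Int16 (typically keeps the low 16 bits) */
        printf("%d\n", narrow);
        return 0;
    }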
—Neil