
Hi cafe,

It's been my general impression that when neither Haskell nor C
defines behavior in a particular situation, the behavior nonetheless
matches. I was surprised to observe

Prelude Data.Int> round (4294967295 :: Double) :: Int16
-1

when

#include <stdio.h>
#include <inttypes.h>

int main(void) {
    double d = 4294967295;
    int16_t r = (int16_t) d;
    printf("%"PRId16"\n", r);
    return 0;
}

yields 0 when compiled and run.

As far as I can tell, neither language defines what this result should
be, so neither is doing anything wrong here. But I was surprised that
they differ; does anyone know why Haskell's rounding operation behaves
the way it does (maybe there is some historical reason)? Or can someone
perhaps point me to a standards document I missed that states how the
language must round out-of-bounds inputs?

Regards

Matt Peddie

The relevant bit of the Haskell report:
18.1 Signed integer types
This module provides signed integer types of unspecified width (Int) and fixed widths (Int8, Int16, Int32 and Int64). All arithmetic is performed modulo 2^n, where n is the number of bits in the type.
Link: https://www.haskell.org/onlinereport/haskell2010/haskellch18.html#x26-223000...

The "modulo 2^n" bit is ambiguous to me, but it seems it's just doing
2's complement, which is what the underlying machine does:

(maxBound :: Int8) + 1
-128

The C standard, by contrast, states that the behavior of
signed-arithmetic overflow is *undefined*, which is to say there are
absolutely *NO* constraints on what the compiler can do. Modern
compilers use this fact to assume this will never happen when
optimizing. I actually get different numbers with your C example
depending on what optimization flags I pass to the compiler. Worth
noting, this is a huge source of security vulnerabilities in C & C++.

-Ian

Quoting Matt Peddie (2019-02-27 17:41:43):

> does anyone know why Haskell's rounding operation behaves the way it
> does (maybe there is some historical reason)? Or can someone perhaps
> point me to a standards document I missed that states how the
> language must round out-of-bounds inputs?
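To make the wrap-around Ian describes concrete, here is a short GHCi
sketch; it assumes GHC's fixed-width integer types are two's
complement, as they are on common platforms:

-- 127 + 1 wraps modulo 2^8
Prelude Data.Int> (maxBound :: Int8) + 1
-128

-- 4294967295 mod 2^16 is 65535, which reads back as -1 in a
-- two's-complement Int16 (the same -1 that round prints above)
Prelude Data.Int> fromIntegral (4294967295 :: Integer) :: Int16
-1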

On my machine, your program outputs 32767 with GCC (7.4.0) and 0 with Clang (5.0.2).

Hi Neil,
Did you call GCC with optimizations enabled? For me it returns 32767
if I enable optimizations and 0 if I don't. Clang returns 0
independent of any flag changes I tried. Bummer!
Regards
Matt
On Thu, Feb 28, 2019 at 9:19 AM Neil Mayhew wrote:
> On my machine, your program outputs 32767 with GCC (7.4.0) and 0 with
> Clang (5.0.2).

Hi Matt,

On 2019-02-27 4:31 PM, Matt Peddie wrote:
> For me [gcc] returns 32767 if I enable optimizations and 0 if I don't.
Yes, I get 0 without optimizations. Looking at the generated assembly
(with gcc -S) I can see that the non-optimized code converts the double
to a 32-bit integer using a hardware instruction (cvttsd2si), but in
the optimized case it uses a precomputed value. It's strange that the
precomputed value doesn't match, but as Ian said, all bets are off
anyway.

According to one authority (https://www.felixcloutier.com/x86/cvttsd2si),
the conversion produces the 32-bit value 0x80000000 when the input is
out of range. When truncated to int16_t this is 0.
> Clang returns 0 independent of any flag changes I tried.
Clang's code uses a precomputed value even without optimization.

Looking at GHC's generated Core, it converts the Double to an Int and
then narrows down to Int16 after that.

—Neil
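Both of Neil's observations come down to modular arithmetic that can be
checked from GHCi; the sketch below assumes a 64-bit Int and GHC's
usual two's-complement narrowing:

-- The hardware path: cvttsd2si produces 0x80000000 for an out-of-range
-- double, and the low 16 bits of 0x80000000 are all zero.
Prelude Data.Int Data.Word> fromIntegral (0x80000000 :: Word32) :: Int16
0

-- The GHC path: round to Int first (4294967295 fits in a 64-bit Int),
-- then narrow to Int16, which wraps modulo 2^16 to -1.
Prelude Data.Int Data.Word> round (4294967295 :: Double) :: Int
4294967295
Prelude Data.Int Data.Word> fromIntegral (round (4294967295 :: Double) :: Int) :: Int16
-1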
participants (3)
- Ian Denhardt
- Matt Peddie
- Neil Mayhew