On 1 April 2010 10:53, Ivan Lazar Miljenovic <ivan.miljenovic@gmail.com> wrote:
Jens Blanck <jens.blanck@gmail.com> writes:
> I was wondering if someone could give me some references to when and why the
> choice was made to default integral numerical literals to Integer rather
> than to Int in Haskell.

My guess is precision: some numeric calculations (even doing a round on
some Double values) will be too large for Int values (at least on
32bit).  Note that unlike Python, etc. Haskell doesn't allow functions
like round to choose between Int and Integer (which is equivalent to the
long type in Python, etc.).
 
Ints have perfect precision as long as you remember that they implement modulo arithmetic for some power of 2. I was hoping that the reason would be that Integers give more users what they expect, namely integers, instead of something where you can add two positive numbers and end up with a negative one. The type of round is (RealFrac a, Integral b) => a -> b, so it can be used to give either Integer or Int as you like.
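For instance (a quick sketch; this assumes GHC's wrap-around Int and a 64-bit machine, the exact bound is platform-dependent):

    -- Int silently wraps around: modulo arithmetic for a power of 2.
    wraps :: Int
    wraps = maxBound + 1                 -- equals minBound

    -- Integer just keeps growing.
    grows :: Integer
    grows = fromIntegral (maxBound :: Int) + 1

    -- round's result is whatever Integral type you ask for.
    asInt :: Int
    asInt = round (2.5e9 :: Double)      -- too large for a 32-bit Int

    asInteger :: Integer
    asInteger = round (2.5e9 :: Double)  -- fine on any platform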

> I'd like to use this information to make an analogous case for defaulting
> real numerical literals (well, the literals are likely to be in scientific
> notation, i.e., floating point) to some data type of computable reals rather
> than to floating point Double.

The difference here is performance: under the hood, Integer values which
can be expressed as an Int _are_ stored as an Int (IIUC anyway); however
computable reals are almost always inefficient.
 
Yes, the cost of computable reals will be an order of magnitude, possibly two, for well-behaved computations. For problems that are not well behaved it will be much worse, but it won't return nonsense either. Consider also that the difference between Integer and unboxed Int is quite big as well. I'll happily take the hit.
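As an illustration of the kind of nonsense I mean, here is what IEEE Doubles do even on tiny, well-behaved inputs, where an exact representation would lose nothing (a GHCi session):

    ghci> 0.1 + 0.2 == (0.3 :: Double)
    False
    ghci> sum (replicate 10 0.1 :: [Double])
    0.9999999999999999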