
Jens Blanck wrote:

> I was wondering if someone could give me some references to when and why the choice was made to default integral numeric literals to Integer rather than to Int in Haskell.
My guess is precision: the results of some numeric calculations (even just rounding certain Double values) are too large for Int, at least on 32-bit platforms. Note that, unlike Python and similar languages, Haskell doesn't let functions like round choose between Int and Integer at runtime (Integer being roughly equivalent to Python's long type).
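A concrete sketch of the kind of thing I mean (this assumes a 32-bit Int, where maxBound is 2^31 - 1; with a 64-bit Int this particular value doesn't overflow):

    main :: IO ()
    main = do
      let x = 1.0e15 :: Double
      print (round x :: Integer)  -- 1000000000000000, exact
      print (round x :: Int)      -- overflows a 32-bit Int; the result is meaningless
      print (maxBound :: Int)     -- 2147483647 on a 32-bit platform

The result type of round is fixed statically by the annotation (or by the defaulting rules); there is no run-time promotion to Integer the way Python promotes int to long.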
> I'd like to use this information to make an analogous case for defaulting real numeric literals (well, the literals will typically be written in scientific, i.e. floating-point, notation) to some data type of computable reals rather than to the floating-point type Double.
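The mechanism you'd be changing is the defaulting rule: the Report specifies default (Integer, Double) when a module gives no default declaration, so on a per-module basis you can already try this out. A rough sketch, assuming the CReal computable-real type from the "numbers" package (if I remember the module name correctly; any Fractional instance would do):

    -- Sketch only: CReal stands in for whatever exact-real type you prefer.
    module Main where

    import Data.Number.CReal (CReal)

    default (Integer, CReal)   -- fractional literals now default to CReal, not Double

    main :: IO ()
    main = print (sqrt 2 + 0.1)  -- resolves to CReal; its Show instance prints many digits

So changing the default in your own modules is easy enough; making it the language-wide default is the harder sell.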
The difference here is performance: under the hood, Integer values that fit in an Int _are_ stored as an Int (IIUC, anyway), whereas computable reals are almost always inefficient.

-- 
Ivan Lazar Miljenovic
Ivan.Miljenovic@gmail.com
IvanMiljenovic.wordpress.com