Setting Default Integer and Real Types

Dear All,

I am quite new to Haskell and plan to see where it will lead me in scientific computing (I will be looking into hmatrix soon). Unless there are real memory problems, I would like to make sure that all real numbers are Double and all integers are Integer. Now, I understand that Haskell can in most cases infer the type of a variable (integer, real, etc.), but can I also set its default precision once and for all? That is, if I write a = 5.2, I want a to be treated as a double-precision real everywhere in the code.

Cheers

Lorenzo

On Wed, Sep 08, 2010 at 12:58:48PM +0200, Lorenzo Isella wrote:
> Dear All, I am quite new to Haskell and plan to see where it will lead me in scientific computing (I will be looking into hmatrix soon). Unless there are real memory problems, I would like to make sure that all real numbers are Double and all integers are Integer. Now, I understand that Haskell can in most cases infer the type of a variable (integer, real, etc.), but can I also set its default precision once and for all? That is, if I write a = 5.2, I want a to be treated as a double-precision real everywhere in the code.
Sure, just give a type signature:

    a :: Double
    a = 5.2

-Brent

On Wednesday 08 September 2010 12:58:48, Lorenzo Isella wrote:
> Dear All, I am quite new to Haskell and plan to see where it will lead me in scientific computing (I will be looking into hmatrix soon). Unless there are real memory problems, I would like to make sure that all real numbers are Double and all integers are Integer. Now, I understand that Haskell can in most cases infer the type of a variable (integer, real, etc.), but can I also set its default precision once and for all?
Haskell has default declarations
(http://www.haskell.org/onlinereport/haskell2010/haskellch4.html#x10-790004.3...)
for that. The default default is

    default (Integer, Double)

which means that, if possible, an ambiguous numeric type is instantiated as Integer; if that is not possible (because of a Fractional or Floating constraint, for example), Double is tried. If neither is possible, you get a compile error (ambiguous type variable ...).
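For instance, here is a minimal sketch of a module with the default default written out explicitly (the module and the example expressions are just illustrations, not code from the thread):

    module Main where

    -- The built-in default, written out explicitly; a default declaration
    -- applies to the module it appears in.
    default (Integer, Double)

    main :: IO ()
    main = do
      -- 5 and 100 are only constrained by Num/Integral (and Show),
      -- so they default to Integer; no overflow here.
      print (5 ^ 100)
      -- 5.2 carries a Fractional constraint, so Integer is skipped
      -- and the type defaults to Double.
      print (5.2 * 2)

You could replace the list with, say, default (Int, Float) if you preferred those types, or default () to turn off numeric defaulting altogether.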
> That is, if I write a = 5.2, I want a to be treated as a double-precision real everywhere in the code.
There's a slight catch there. The inferred type of the binding

    a = 5.2

is

    a :: Fractional n => n

and the binding really means

    a = fromRational (26 % 5)

Depending on whether the monomorphism restriction applies and on the optimisation level, it can happen that, without a type signature, the call to fromRational is evaluated at each use, which can hurt performance. But basically, you get your desired behaviour automatically. Still, it is good practice to use plenty of type signatures even though the compiler's type inference makes them unnecessary: apart from their value as documentation, specifying a monomorphic type instead of the inferred polymorphic type can give significant performance gains.
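To make the contrast concrete, a small sketch (the names aPoly and aMono are made up for illustration):

    -- Polymorphic: this is the type the compiler infers for "a = 5.2"
    -- when the monomorphism restriction does not apply; each use site
    -- may rebuild the value at its own type via fromRational.
    aPoly :: Fractional n => n
    aPoly = 5.2

    -- Monomorphic: converted to a Double once and shared by every use.
    aMono :: Double
    aMono = 5.2

    main :: IO ()
    main = do
      print (aPoly :: Double)  -- the use site fixes the type
      print (aMono + 1)        -- already a Double

Either signature removes the ambiguity; the monomorphic one gives exactly the behaviour asked for above.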
participants (3)
- Brent Yorgey
- Daniel Fischer
- Lorenzo Isella