Why are the types of the results different?

testWhere1 = a
  where b = 1
        a = b+1

ghci> testWhere1
2

testWhere2 = a
  where b = 1
        c = 1/(b - b)
        a = b+1

ghci> testWhere2
2.0
The same thing happens with let:

testLet1 = let a = b+1
               b = 1
           in a

ghci> testLet1
2

testLet2 = let a = b+1
               c = 1/(b - b)
               b = 1
           in a

ghci> testLet2
2.0
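For reference, querying the types in GHCi shows the same difference directly. This is only a sketch: it assumes the definitions are loaded from a source file (so the usual defaulting applies), and the ghci> prompt is illustrative.

ghci> :type testWhere1
testWhere1 :: Integer
ghci> :type testWhere2
testWhere2 :: Double
ghci> :type testLet1
testLet1 :: Integer
ghci> :type testLet2
testLet2 :: Double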
--
Russ Abbott
______________________________________
Professor, Computer Science
California State University, Los Angeles
Google voice: 424-242-USA0 (last character is zero)
blog: http://russabbott.blogspot.com/
vita: http://sites.google.com/site/russabbott/
______________________________________

On Wed, Sep 29, 2010 at 06:19:33PM -0700, Russ Abbott wrote:
> testWhere1 = a
>   where b = 1
>         a = b+1
>
> ghci> testWhere1
> 2
>
> testWhere2 = a
>   where b = 1
>         c = 1/(b - b)
>         a = b+1
>
> ghci> testWhere2
> 2.0
Haskell infers the types of a, b, and c *statically*, at compile time.

In the first example, a and b are simply constrained to be any numeric type. By the type defaulting mechanism, their type is defaulted to Integer.

In the second example, the division operator (/) only operates on fractional numeric types, so (b - b) must also have a fractional type. Since the arguments and result of subtraction must all have the same type, b must also have a fractional type, and hence so must a, since it is defined by addition on b. Types which are constrained to be fractional but not otherwise constrained are defaulted to Double.

Note that this all happens statically at compile time, so the fact that c is never evaluated is not relevant.

-Brent
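To make the defaulting concrete, here is a small sketch of the two where-examples with the types spelled out. The explicit signatures and the main action are additions for illustration; they simply name the types GHC infers by defaulting, so removing them does not change the results.

-- Minimal sketch: the signatures match what GHC infers by defaulting.
testWhere1 :: Integer
testWhere1 = a
  where b = 1          -- Num constraint only, defaulted to Integer
        a = b + 1

testWhere2 :: Double
testWhere2 = a
  where b = 1            -- (/) forces Fractional b, defaulted to Double
        c = 1 / (b - b)  -- never evaluated, but still type-checked
        a = b + 1        -- so a, and hence the result, is Double as well

main :: IO ()
main = do
  print testWhere1   -- prints 2
  print testWhere2   -- prints 2.0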