
Fri, 9 Feb 2001 17:29:09 +1300, Tom Pledger wrote:
|     (x + y) + z
|
| we know from the explicit type signature (in your question that I was
| responding to) that x, y :: Int and z :: Double. Type inference does not
| need to coerce x or y up, because it can take the first (+) to be Int
| addition. However, it must coerce the result (x + y) up to the most
| specific supertype which can be added to a Double.
Approach it differently: z is a Double, and (x + y) is added to it, so (x + y) must have type Double. This means that x and y must have type Double. That is fine, because here they are Ints, which can be converted to Double. Why is your approach better than mine?
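In standard Haskell, which has no implicit subtyping, this reading corresponds to inserting an explicit conversion at the point where the Int result meets the Double. A minimal sketch (the function name f and its signature are my own, for illustration):

```haskell
-- x and y are Ints; the first (+) is Int addition.
-- The sum is converted explicitly before the Double addition.
f :: Int -> Int -> Double -> Double
f x y z = fromIntegral (x + y) + z

main :: IO ()
main = print (f 1 2 0.5)  -- prints 3.5
```

The hypothetical subtyping extension under discussion would insert this fromIntegral-style coercion automatically.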
| h f = f (1::Int) == (2::Int)
|
| Can I apply f h?

Sure, sorry:
    h :: (Subtype a b, Subtype Int b, Eq b) => (Int -> a) -> Bool
This type is ambiguous: the type variable b is needed in the context but does not appear in the type itself, so it can never be determined from any usage of h.
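The same kind of ambiguity already exists in standard Haskell with read and show: the intermediate type never appears in the overall type, so no use site can ever determine it. A sketch (the function name roundTrip and the choice of Int are my own):

```haskell
-- Rejected by the compiler as ambiguous: the type variable in
-- (Read a, Show a) is not determined by String -> String.
-- roundTrip :: String -> String
-- roundTrip s = show (read s)

-- The only fix is to pin the intermediate type explicitly:
roundTrip :: String -> String
roundTrip s = show (read s :: Int)

main :: IO ()
main = putStrLn (roundTrip "42")  -- prints "42"
```

The type of h above is ambiguous in exactly this way, except that no annotation inside h can fix it, because b must unify with the caller's choice of supertype.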
| That can be inferred by following the structure of the term. Function
| terms do seem prone to an accumulation of deferred subtype constraints.
If function application generates a constraint, the language gets ambiguous as hell. Applications appear everywhere in a program! Very often the type of the argument or result of an internal application does not appear in the type of the whole function being defined, which makes that function's type ambiguous. Not to mention that there would be *LOTS* of these constraints. Application is used everywhere, so it is important that its typing rule be simple and cheap; generating a constraint for every application is not an option.

--
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^    SYGNATURA ZASTĘPCZA
QRCZAK