
Mon, 12 Feb 2001 10:58:40 +1300, Tom Pledger writes:
| | Approach it differently. z is Double, (x+y) is added to it, so
| | (x+y) must have type Double.
| That's a restriction I'd like to avoid. Instead: ...so the most
| specific common supertype of Double and (x+y)'s type must support
| addition.
In general there is no such thing as (x+y)'s type considered
separately from this usage. The use of (x+y) as one of the arguments
of this addition influences the type determined for it. Suppose x and
y are lambda-bound variables: then you don't know their types yet.
Currently this addition determines their types: they must be the same
as the type of z.

With your rules the type of

    \x y -> x + y

is not

    (some context) => a -> a -> a

but

    (some context) => a -> b -> c

which leads to horrible ambiguities unless the context is able to
determine some types exactly (which is currently true only for
fundeps).
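For concreteness, this is how it works out in today's Haskell
(standard Prelude, no subtyping; the names below are only for
illustration):

    -- Both arguments and the result of (+) share a single type variable:
    add :: Num a => a -> a -> a
    add = \x y -> x + y

    -- Using the sum as an argument of another addition with z :: Double
    -- therefore forces x and y themselves to be Double:
    useWithZ :: Double -> Double -> Double -> Double
    useWithZ z x y = z + add x y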
| | Why is your approach better than mine?
| It used a definition of (+) which was a closer fit for the types of
| x and y.
But it used a worse definition of the outer (+): mine was
Double -> Double -> Double and yours was Int -> Double -> Double with
an implicit conversion of Int to Double.
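For concreteness, assuming x, y :: Int and z :: Double (the concrete
types are my assumption, they were not fixed above), the two readings
look like this in today's Haskell, with explicit conversions standing
in for the implicit ones under discussion:

    -- Reading 1: convert x and y first, so both additions use
    --   (+) :: Double -> Double -> Double
    sumAllAtDouble :: Double -> Int -> Int -> Double
    sumAllAtDouble z x y = z + (fromIntegral x + fromIntegral y)

    -- Reading 2: keep the inner addition at Int and let the outer (+)
    -- act as Int -> Double -> Double; standard Haskell has no such
    -- heterogeneous (+), so the conversion is written explicitly:
    sumInnerAtInt :: Double -> Int -> Int -> Double
    sumInnerAtInt z x y = z + fromIntegral (x + y)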
| Yes, I rashly glossed over the importance of having well-defined
| most specific common supertype (MSCS) and least specific common
| subtype (LSCS) operators in a subtype lattice.
They are not always defined. Suppose the following holds:

    Word32 `Subtype` Double
    Word32 `Subtype` Integer
    Int32  `Subtype` Double
    Int32  `Subtype` Integer

What is the MSCS of Word32 and Int32? What is the LSCS of Double and
Integer?
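To see why, the diamond can be written down as a made-up
multi-parameter class (Subtype is not a real class, just the relation
named above):

    {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

    import Data.Int  (Int32)
    import Data.Word (Word32)

    -- A hypothetical Subtype relation with exactly the four facts above:
    class Subtype sub super

    instance Subtype Word32 Double
    instance Subtype Word32 Integer
    instance Subtype Int32  Double
    instance Subtype Int32  Integer

    -- Double and Integer are both common supertypes of Word32 and Int32,
    -- but neither is a subtype of the other, so no candidate is "most
    -- specific".  Dually, Word32 and Int32 are both common subtypes of
    -- Double and Integer with neither below the other, so no least
    -- specific common subtype exists either.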
| Anyway, since neither of us is about to have a change of mind, and
| nobody else is showing an interest in this branch of the discussion,
| it appears that the most constructive thing for me to do is return to
| try-to-keep-quiet-about-subtyping-until-I've-done-it-in-THIH mode.
IMHO it's impossible to do.

-- 
 __("<  Marcin Kowalczyk * qrczak@knm.org.pl http://qrczak.ids.net.pl/
 \__/
  ^^                          SYGNATURA ZASTĘPCZA
QRCZAK