
On Sun, Mar 23, 2014 at 4:50 AM, Bart Massey wrote:
Even worse, the same sort of thing happens when trying to add a `Data.Word.Word` to an `Integer`. This is a totally safe conversion if you just let the result be `Integer`.
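For concreteness, the explicit widening you have to write today is a one-liner (a minimal sketch; the conversion itself is lossless, since every Word fits in an Integer):

    import Data.Word (Word)

    -- Widen the Word explicitly so both operands share the type Integer.
    addWordInteger :: Word -> Integer -> Integer
    addWordInteger w i = fromIntegral w + i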
Because of that, type inference works better. It can flow both forwards and backwards through (+), whereas the approach you advocate is strictly less powerful. You have to give up overloading of numeric literals, and in essence this gives up the flexibility of the numerical tower to handle open sets of new numerical types.
You obviously know far more about this than me, but I'm not seeing it? AFAICT all I am asking for is numeric subtyping using the normal typeclass mechanism, but with some kind of preference rules that get the subtyping right in "normal" cases? I can certainly agree that I don't want to go toward C's morass of "widening to unsigned" (???) or explicitly-typed numeric literals. I just want a set of type rules that agrees with grade-school mathematics most of the time. I'm sure I'm missing something, and it really is that hard, but if so it makes me sad.
Let's consider what directions type information flows through (+).

    class Foo a where
      (+) :: a -> a -> a

Given that kind of class you get

    (+) :: Foo a => a -> a -> a

Now, given the result type of an expression, you know the types of both of its arguments. If either argument is determined, you know the type of the other argument and the result as well. Given any one of the arguments or the result type, you know the other two.

Consider now the kind of constraint you'd need for widening:

    class Foo a b c | a b -> c where
      (+) :: a -> b -> c

Now, given both a and b you know the type of c. But given just the type of c, you know nothing about the type of either of its arguments. a + b :: Int used to tell you that a and b both had to be Int, but now you'd learn nothing! Worse, given the type of one argument, you don't get to know the type of the other argument or the result.

We went from getting 2 other types from 1, in all directions, to getting 1 type from 2 others, in only 1 of 3 cases. The very cases you complained about, where you needed defaulting, now happen virtually everywhere!
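To make the contrast concrete, here is a minimal sketch of the two class shapes; the names AddSame and AddWiden are invented for illustration and not part of any library:

    {-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies #-}
    module AddSketch where

    -- The Num-style shape: one type for both arguments and the result.
    class AddSame a where
      plus :: a -> a -> a

    -- The widening shape: the argument types determine the result type,
    -- but nothing flows in the other direction.
    class AddWiden a b c | a b -> c where
      widenPlus :: a -> b -> c

    instance AddSame Int where
      plus = (+)

    instance AddWiden Int Integer Integer where
      widenPlus x y = fromIntegral x + y

    -- Annotating the result is enough here: a ~ Int fixes both literals.
    example1 :: Int
    example1 = plus 2 3

    -- Here the result annotation tells the checker nothing about the
    -- arguments, so each needs its own annotation.
    example2 :: Integer
    example2 = widenPlus (2 :: Int) (3 :: Integer)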
* The multiplicity of exponentiation functions, one of which looks exactly like C's XOR operator, which I've watched trip up newbies a bunch of times. (Indeed, NumericPrelude seems to have added more of these, including the IMHO poorly-named (^-), which has nothing to do with numeric negation as far as I can tell. See "unary negation" above.)

It is unfortunate, but there really is a distinction being made.
I get that. I even get that statically typing exponentiation is hard. (You should see how we did it in Nickle (http://nickle.org) -- not because it's good, but because it calls out a lot of the problems.) What I don't get is why the names seem so terrible to me, nor why the typechecker can't do more to help reduce the number of needed operators, ideally to one. It might mean extra conversion operators around exponentiation once in a while, I guess?
I do think there were at least two genuinely bad ideas in the original design of (^), (^^) and (**). Notably, the choice in (^) and (^^) to overload on the power's Integral type. More often than not this simply leads to an ambiguous choice for simple cases like x^2. Even if that had been monomorphized, and MPTCs had existed when they were defined, AND we wanted to use fundeps in the numerical tower, you'd still need at least two operators. You can't overload cleanly between (^) and (**) because instance selection would overlap. Both of their right-hand sides unify:

    (^)  :: ... => a -> Int -> a
    (**) :: ... => a -> a -> a
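For reference, the actual Prelude types keep the three operators apart roughly like this; the sketch below is illustrative, and the ambiguity Edward mentions is exactly what defaulting papers over for x^2:

    -- (^)  :: (Num a, Integral b)        => a -> b -> a   -- non-negative integral exponent
    -- (^^) :: (Fractional a, Integral b) => a -> b -> a   -- integral exponent, may be negative
    -- (**) :: Floating a                 => a -> a -> a   -- real-valued exponent

    squareInt :: Int
    squareInt = 3 ^ (2 :: Int)   -- pinning the exponent's type avoids relying on defaulting

    halfPower :: Double
    halfPower = 2 ** 0.5         -- only (**) accepts a non-integral exponent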
You can of course use

    :set -Wall -fno-warn-type-defaults
Of course. I have no idea what warnings I might be turning off that would actually be useful?
The warnings that turns off are just the ones like your discard and the 3 == 3 case, where the compiler had to fall back to defaulting for an under-determined type.

-Edward
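As a concrete illustration (a minimal sketch; the exact warning text varies between GHC versions), this module trips the type-defaults warning under -Wall and compiles quietly again once -fno-warn-type-defaults is added:

    {-# OPTIONS_GHC -Wall #-}
    module Main where

    main :: IO ()
    main = print (3 == 3)   -- the literals are under-determined, so GHC defaults them to Integer and warns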