
This proposal is somewhat tongue in cheek, but at least it's amusing, and who knows, it might be good for something. The idea is that one could, in theory, allow both prefix unary minus and right sections of subtraction, with the type-checker deciding which is meant based on the context. Has this been noticed before? Consider the following code snippet, which runs in GHC 6.6 (with the new ability to define postfix operators).
{-# OPTIONS -fglasgow-exts -fallow-undecidable-instances -fallow-overlapping-instances #-}

module Minus where

import TypeCast

class Minus a b where
  (<->) :: a -> b

instance (Num a, TypeCast a b) => Minus a b where
  (<->) = typeCast . negate

instance (TypeCast a b, Num b, TypeCast b c) => Minus a (b -> c) where
  (<->) x y = typeCast ((typeCast x) - y)
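For reference, here is roughly what the imported TypeCast module looks like -- the standard three-class encoding from the HList/OOHaskell work, reconstructed from memory, so treat the exact flags and details as a sketch rather than gospel:

{-# OPTIONS -fglasgow-exts -fallow-undecidable-instances #-}

module TypeCast where

-- The detour through TypeCast' and TypeCast'' delays unifying a with b
-- until instance selection has committed, which is what lets the Minus
-- instances above pick an instance first and equate the types second.
class TypeCast a b | a -> b, b -> a where
  typeCast :: a -> b

class TypeCast' t a b | t a -> b, t b -> a where
  typeCast' :: t -> a -> b

class TypeCast'' t a b | t a -> b, t b -> a where
  typeCast'' :: t -> a -> b

instance TypeCast' () a b => TypeCast a b where
  typeCast x = typeCast' () x

instance TypeCast'' t a b => TypeCast' t a b where
  typeCast' = typeCast''

instance TypeCast'' () a a where
  typeCast'' _ x = x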
(TypeCast is the usual one from HList etc., sketched above.) Loading this in the interpreter, we now have:

*Minus> (1 <->) :: Int
-1
*Minus> 3 <-> 1 :: Int
2
*Minus> map (<-> 1) [2..5]
[1,2,3,4]
*Minus> map (1 <->) [2..5]
[-1,-2,-3,-4]
*Minus> map (<->) [2..5] :: [Int]
[-2,-3,-4,-5]
*Minus> zipWith (<->) [1,3,5] [1,2,3]
[0,1,2]

In short, the operator <-> can be used as *either* an infix subtraction operator (which can be sectioned on both sides) or a postfix unary negation operator (which can also be sectioned once). The trick is the same one used for variadic arguments: the type-checker can infer from context whether (1 <->) should return a number or a function, and resolves it accordingly.

Thus, if we could figure out a way to define prefix operators, analogous to the way we can now define postfix operators, we could in theory allow prefix unary minus and right sections of subtraction to coexist peacefully. But I haven't had much luck coming up with a non-ugly suggestion for how to do this. I'd be interested to hear if anyone else has ideas, although I doubt any such solution would ever make it into any Haskell standard. (-:

It's also unfortunate that we need the explicit type signatures in the interpreter above, but otherwise there would be no context for type inference based on the return value. As usual, though, this problem would probably hardly ever arise in an actual program, where most values have a known type. (Actually, I still don't understand why a type signature is necessary on 3 <-> 1; anyone care to enlighten me?)

Comments?

Mike
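As a small, purely hypothetical illustration of that last point (not from the original message): once an enclosing type signature pins the types down, both readings of <-> resolve without any local annotations.

-- Hypothetical example, assuming the Minus module above is in scope and
-- compiled with the same flags (the postfix section (d <->) relies on the
-- same extension that made (1 <->) work in the GHCi session above).
absDiff :: Int -> Int -> Int
absDiff x y = if d >= 0 then d else (d <->)  -- here <-> acts as postfix negation
  where d = x <-> y                          -- here <-> acts as infix subtraction

Both constraints that arise, Minus Int Int and Minus Int (Int -> Int), are fully determined by the signature, so instance selection goes through exactly as in the annotated GHCi examples.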