
Hello,
On Fri, 28 Jan 2005 10:01:33 -0500, Jacques Carette wrote:
> The previous post on record syntax reminded me of some 'problems' I had
> noticed where Haskell and mathematics have a (deep) usage mismatch.

It is interesting that my sentiment is exactly the opposite. Mathematicians (the applied kind, not the pure) tend to be quite sloppy about their notation, relying _a lot_ on context and on the fact that people know what they are talking about. I find that this makes their explanations harder to understand, rather than easier.
> First, consider a syntax for other component-wise function application?
> For example, it would be convenient to have (f,g) @ (x,y) be (f x, g y).
> In some languages [with dynamic typing], one can even do (f,g) (x,y) :-)

I am not sure what you mean, but you can define exactly that (except for the @ sign, which is already special syntax):

(f, g) `ap2` (x, y) = (f x, g y)
> Yes, I am aware that this is untypeable in Haskell, because polymorphism
> is straight-jacketed by structural rules.

What do you mean by this? The definition above has a perfectly fine type:

ap2 :: (a -> b, c -> d) -> (a, c) -> (b, d)
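For concreteness, here is a self-contained sketch of that combinator (the name `ap2` is just the one used in this thread, not a standard library function); it compiles and runs in plain Haskell:

```haskell
-- Componentwise application of a pair of functions to a pair of
-- arguments; this typechecks fine, no extensions needed.
ap2 :: (a -> b, c -> d) -> (a, c) -> (b, d)
(f, g) `ap2` (x, y) = (f x, g y)

main :: IO ()
main = print ((show, (+ 1)) `ap2` (True, 2 :: Int))
```

The only thing Haskell refuses is spelling this as bare juxtaposition (f,g) (x,y); the combinator itself is ordinary.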
> But in mathematics, it is convenient and extremely common to:
> 1) look at the type a -> b as being a *subtype* of b (though I have
> never seen it phrased that way in print)

I don't think subtyping is what you are thinking of. Perhaps you are referring to the correspondence between b-expressions with a free a-variable and functions a -> b. In Haskell we don't have expressions with free variables as first-class values. There might be some interesting things to work out for such beasts.
> 2) to consider every 'natural transformation' available and use it
> (tuples of functions <-> obvious function on tuples)

Are you suggesting that natural transformations should be applied automatically? There are many problems with this: which operations should happen automatically? Type inference (if it works at all) will be hard to do, but perhaps these are technical details. Fundamentally, the "sloppy" math notation assumes that what the author wrote is correct and should simply be given "the obvious" meaning. This is pretty much the idea of dynamic typing. I find the restrictions of static typing very convenient (well, most of the time :-) as they catch "stupid" errors.
> Seeing a -> b as a subtype of b allows one to take any function
> f : b -> b -> c (like +) and 'lift' it to
> ff : (a -> b) -> (a -> b) -> (a -> c)
> via the obvious definition ff g h = \x -> f (g x) (h x).
> This is done routinely in mathematics. The meaning of the expression
> (sin + cos) should be immediately obvious. Such 'lifting' is used even
> in first-year calculus courses, but not in Haskell.
Apart from the fact that this does not seem like a good idea, you can already do things like this in Haskell:

liftOp op f g x = f x `op` g x

This is all you need, especially if you are prepared to stick to the "usual" mathematical convention of not using curried functions, but instead using tuples to pass multiple arguments.
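A runnable version of that one-liner, with the calculus-style (sin + cos) recovered as a derived definition (sinPlusCos is a name invented here purely for illustration):

```haskell
-- Lift a binary operation on b pointwise to functions returning b.
liftOp :: (b -> b -> c) -> (a -> b) -> (a -> b) -> a -> c
liftOp op f g x = f x `op` g x

-- The first-year-calculus reading of (sin + cos):
sinPlusCos :: Double -> Double
sinPlusCos = liftOp (+) sin cos

main :: IO ()
main = print (sinPlusCos 0)  -- sin 0 + cos 0 = 1.0
```

The lifting is explicit rather than implicit, but it is one combinator, not a language change.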
> The "same" phenomenon is what allows one to 'see' that there is an
> obvious function apply_tuple :: (a -> b, c -> d) -> (a, c) -> (b, d)
> so that (f,g) (a,b) only has one possible meaning, if the built-in
> apply function were to be overloaded.
There are many interesting things one could do if application were overloaded, but this hardly seems like a motivating example: you can already use the 'ap2' combinator from above. I can see that it is annoying to have ap2, ap3, etc. You can define a generic "ap_n" using dependent types, or using the encoding Tomasz posted, but that hardly seems to be a "deep" problem.
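To make the annoyance concrete, here is the hand-written family (ap3 is a hypothetical sibling of ap2, spelled out only to show the boilerplate a generic ap_n would eliminate):

```haskell
ap2 :: (a -> b, c -> d) -> (a, c) -> (b, d)
ap2 (f, g) (x, y) = (f x, g y)

-- Each arity needs its own copy; nothing in plain Haskell abstracts
-- over tuple width, hence the appeal to dependent types or encodings.
ap3 :: (a -> b, c -> d, e -> f) -> (a, c, e) -> (b, d, f)
ap3 (f, g, h) (x, y, z) = (f x, g y, h z)

main :: IO ()
main = print (ap3 ((+ 1), show, not) (1 :: Int, 'a', True))
```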
> Similarly, there are "obvious" maps apply_T :: T (a->b) -> a -> T b
> for any datatype constructor T (as has been noticed by all who do
> 'generic' programming).
Is this really true? Consider:

data T a = T (a -> Int)

What should apply_T do? Its type would (essentially) have to be:

applyT :: ((a -> b) -> Int) -> a -> (b -> Int)

Perhaps:

applyT (T f) a = T (\b -> f (const b))

but that probably does not behave as you would like. Of course, you could restrict the datatypes for which apply_T can be derived.

> This means that
> [f, g, h] x == [f x, g x, h x]
> but also (in mathematics at least)
> {f, g, h} x == {f x, g x, h x}
> where {} denotes a set, and the equality above is extensional equality,
> and so on.

I agree that it would be nice to automatically derive functions such as apply_T, etc. This (as you point out) is the idea of generic programming. This, however, does not suggest that we should replace application...
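One such restriction is immediate: for covariant T (any Functor), apply_T is derivable uniformly, and the [f, g, h] x notation falls out for lists. This is only a sketch, and applyF is a name invented here, not a standard function:

```haskell
-- apply_T for any covariant T: push the argument under the functor.
-- The contravariant counterexample T a = T (a -> Int) is exactly a
-- type with no Functor instance, so it is excluded by construction.
applyF :: Functor t => t (a -> b) -> a -> t b
applyF t x = fmap ($ x) t

main :: IO ()
main = do
  print (applyF [(+ 1), (* 2), negate] (3 :: Int))  -- the [f, g, h] x reading
  print (applyF (Just (+ 1)) (10 :: Int))
```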
> Note that some computer algebra systems use #1 and #2 above all the time
> to define the operational meaning of a lot of syntactic constructs
> [where I personally extended the use of such rules in the implementation
> & operational semantics of Maple]. What hope is there to be able to do
> something 'similar' in a Haskell-like language?

I think it is already possible to do similar things. I am sure there is room for improvement, but it is important that the consequences of changing a language are worked out in detail.

It is time for me to go to school :-)

bye
Iavor