
(Resurrecting a somewhat old thread...) On Fri, Jan 28, 2005 at 08:16:59PM +0100, Henning Thielemann wrote:
On Fri, 28 Jan 2005, Chung-chieh Shan wrote:
But I would hesitate with some of your examples, because they may simply illustrate that mathematical notation is a language with side effects -- see the third and fifth examples below.

I can't imagine mathematics with side effects, because there is no order of execution.
Not all side effects require an order of execution. For instance, dependence on the environment is a side effect (in the sense that it is related to a monad), but it does not depend on the order of execution. There are many other examples too, like random variables.
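For concreteness, here is a minimal Haskell sketch (the names and the pair-shaped environment are mine, not from the thread): the function monad ((->) env) acts as a Reader, so both computations below depend on the environment, yet sequencing the reads in either order gives the same result.

  -- Environment dependence without an observable order of execution.
  -- ((->) env) is the Reader-like monad available from the Prelude.
  width, height :: (Int, Int) -> Int
  width  = fst
  height = snd

  -- The two do-blocks read the environment in opposite orders,
  -- but denote the same function of the environment.
  areaWH, areaHW :: (Int, Int) -> Int
  areaWH = do { w <- width;  h <- height; return (w * h) }
  areaHW = do { h <- height; w <- width;  return (w * h) }

  main :: IO ()
  main = print (areaWH (3, 4), areaHW (3, 4))   -- prints (12,12)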
O(n), which should be O(\n -> n) (a remark by Simon Thompson in The Craft of Functional Programming)
I don't think this can be right. Ken argued around this point, but here's a more direct argument: in f(x) = x + 1 + O(1/x) all the 'x's refer to the same variable; so you shouldn't go and capture the one inside the 'O'. This is established mathematical notation, it's very useful, and can be explained almost coherently. The one deficiency is that we should interpret 'O' as "an asymptotically bounded function of..." but that doesn't say what it is a function of and where we should take the asymptotics. But the patch you suggest doesn't really help.
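To spell that argument out (this is one standard unfolding, with constants of my own naming):

  \[
    f(x) = x + 1 + O(1/x)
    \iff
    \exists\, C > 0,\ x_0 \ .\ \forall x > x_0 .\ \bigl| f(x) - x - 1 \bigr| \le C/x .
  \]

On this reading every x, including the one written inside the O, is governed by the same quantifier, which is why capturing it under a fresh lambda would change the meaning.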
But what do you mean by 1/O(n^2)? O(f) is defined as the set of functions bounded from above by f. So 1/O(f) has no meaning at first glance. I could interpret it as lifting (1/) to (\f x -> 1 / f x) (i.e. lifting from the reciprocal of a scalar to the reciprocal of a function), and then as lifting from the reciprocal of a function to the reciprocal of each function in a set. Do you mean that?
I think this is the only reasonable generalization from the established usage of, e.g., 2^(O(n)). In practice, this means that 1/O(n^2) is the set of functions asymptotically bounded below by 1/(k n^2) for some k.
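One way to make that lifting precise (my notation, mirroring the elementwise reading of 2^(O(n))):

  \[
    \frac{1}{O(n^2)} \;=\; \Bigl\{\, h \;\Bigm|\; h = 1/g \ \text{for some } g \in O(n^2) \,\Bigr\},
  \]

so, for eventually positive g, a function h lies in 1/O(n^2) exactly when there is a constant C > 0 with h(n) >= 1/(C n^2) for all sufficiently large n -- the "bounded below" reading above.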
Hm, (3+) is partial application, a reordered notation for ((+) 3), which is only possible if the omitted value is needed only once. But I see people writing f(.) + f(.-t), and they don't say whether this means
(\x -> f x) + (\x -> f (x-t))
or
(\x -> f x + f (x-t))
Have you really seen people use that notation with either of those meanings? That's really horrible and inconsistent. I would have interpreted f(.) + f(.-t) as \x y -> f(x) + f(y-t) to be consistent with notation like .*. , which seems to mean \x y -> x*y in my experience.
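For concreteness, here is a small Haskell sketch of the candidate readings (the helper plusFn and the names readingA/B/C are mine); with (+) lifted pointwise the first two coincide, while the last one takes two arguments:

  -- Pointwise addition of functions, so a "sum of functions" makes sense.
  plusFn :: Num a => (b -> a) -> (b -> a) -> (b -> a)
  plusFn g h = \v -> g v + h v

  -- Reading (a): a sum of two one-argument functions.
  readingA :: Num a => (a -> a) -> a -> (a -> a)
  readingA f t = (\x -> f x) `plusFn` (\x -> f (x - t))

  -- Reading (b): one lambda whose body adds the two applications.
  -- Pointwise, (a) and (b) denote the same function.
  readingB :: Num a => (a -> a) -> a -> (a -> a)
  readingB f t = \x -> f x + f (x - t)

  -- Reading (c): each dot binds its own argument, as with .*. above.
  readingC :: Num a => (a -> a) -> a -> (a -> a -> a)
  readingC f t = \x y -> f x + f (y - t)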
It seems to me that the dot is somehow more variable than variables, and a dot-containing expression represents a function where the function arguments are inserted where the dots are.
Right. I don't know how to formalize this, but that doesn't mean it can't be done.

Peace,
Dylan