
Just surveying this thread, it appears a bunch of issues are being
mixed up:

(1) the distinction between continuous and discrete functions, and the
extent to which the latter serves as an approximation of the former.
Derivative IS-TO antiderivative IS-TO integral AS (finite) difference
IS-TO "anti-difference" IS-TO (discrete) summation.

(2) the FP notion of closure.

I'll respond to a small slice of (1) and most of (2).

(1)

> It is still somewhat strange. For a discrete function f(i) I can
> compute the definite integral F(b) - F(a) but I cannot compute F(a)
> or F(b) themselves. Right?

In the continuous case, the antiderivative is defined /up to an
additive constant/. In calculating the definite integral, the constant
gets cancelled out because it is the same on both sides of the
subtraction.

> What is striking me is that in calculus I can often symbolically
> compute the antiderivative and I get a simple function, and I can
> get the value of F for a given x and I get a simple number. Why is
> that so?

So no, you don't get a simple number. It is ambiguous to evaluate the
antiderivative at a point, unless you set down an arbitrary rule such
as: the antiderivative must pass through the origin, i.e. F(0) = 0.

In the discrete case, you must first fully define which of forward /
backward / central difference you're adopting, AND adopt some
arbitrary rule to deal with the additive constant. Finally, you can
define the anti-difference F(x) of f(x) appropriately to obtain the
equation you desire, e.g. F(b+1) - F(a) = the sum of f from a to b
inclusive, under the forward-difference convention.
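One way to pin down such an anti-difference is sketched below. The
forward-difference convention, the F(0) = 0 rule, and the name antidiff
are all choices made here for illustration, not anything fixed by the
thread:

```haskell
-- Anti-difference of f, normalized so that antidiff f 0 == 0,
-- under the forward-difference convention:
--   antidiff f (x + 1) - antidiff f x == f x
antidiff :: (Int -> Int) -> Int -> Int
antidiff f x
  | x >= 0    = sum [f i | i <- [0 .. x - 1]]          -- summing "about the origin"
  | otherwise = negate (sum [f i | i <- [x .. (-1)]])  -- works below 0 too

-- Discrete fundamental theorem, with b+1 on the left so the sum on
-- the right runs over [a .. b] inclusive:
--   antidiff f (b + 1) - antidiff f a == sum [f i | i <- [a .. b]]
```

For example, with f i = i * i we get antidiff f 4 == 0 + 1 + 4 + 9 == 14,
and antidiff f 4 - antidiff f 1 == 14 is exactly the sum of f over
[1 .. 3] inclusive.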
> however for the antiderivative I have to look at all values between
> the lowest possible x and the running x. If the function is discrete
> but has no lower bound for x, then I cannot compute an antiderivative
> at all, at least not one which will be correct for any x.

Using the F(0) = 0 rule, you'll be summing /about the origin/. So
you'd avoid nastiness like having to sum f(x) starting from "the
lowest possible x".

(2)

> The antiderivative F of a function f::Int->Int needs to have the
> property that F(b) - F(a) must be the sum of f within [a,b]. To do
> this I must know all values within [a,b]. But at the time I compute
> the antiderivative I do not know this interval yet.

It's easy to write an integrator :: (Int -> Int) -> Int -> Int -> Int,
where integrator takes a function f and the bounds a and b of the
interval.

By partially applying it to a particular function f1, we get a
function Int -> Int -> Int which integrates f1 given whatever bounds.
The latter can be further partially applied with the lower bound fixed
at 0 to obtain a function Int -> Int, which sums f1 from 0 to the
given number.

On the other hand, we can fix the bounds [a,b] to obtain a specialized
integrator :: (Int -> Int) -> Int, one that varies over the /function/
rather than over the /interval/.

Partial application (Schemers, read "closure") makes this all
possible.

p.s. Everyone, please "Reply to All" to make sure your email gets to
the list reflector at haskell.org. Otherwise your responses are
private to Martin and you lose out on the aspect of community.

-- Kim-Ee
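The chain of partial applications described above can be made concrete
as follows. Only the integrator type signature appears in the email;
the inclusive-bounds summation and the helper names (sumSquares and
friends) are illustrative choices:

```haskell
-- integrator f a b sums f over the inclusive interval [a .. b]
-- (this summation convention is an assumption, not fixed by the email).
integrator :: (Int -> Int) -> Int -> Int -> Int
integrator f a b = sum [f i | i <- [a .. b]]

-- Partially applied to one particular function, f1 = \i -> i * i:
sumSquares :: Int -> Int -> Int
sumSquares = integrator (\i -> i * i)

-- Further partially applied with the lower bound fixed at 0:
sumSquaresFromZero :: Int -> Int
sumSquaresFromZero = sumSquares 0

-- Or fix the interval instead, varying over the /function/:
onOneToTen :: (Int -> Int) -> Int
onOneToTen g = integrator g 1 10
```

For instance, sumSquaresFromZero 3 sums the squares of 0 through 3,
and onOneToTen id sums 1 through 10, giving 55.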