is proof by testing possible?

Is it possible to prove correctness of a function by testing it? I think the tests would have to be constructed by inspecting the shape of the function definition.

In general? No. If we had an implementation of the `sin` function, how could testing a finite number of points along it determine whether that implementation is correct for every point?

For specific functions (particularly those with finite domain), it is possible. If you know the 'correct' output for every input, then testing each input and ensuring it produces the correct output will work. Consider the definition of the `not` function on booleans. The domain has only two elements (True and False) and the range has only two outputs (True and False), so if I test every input and ensure it maps to the specified output, we're all set.

Basically, if you can write your function as a big case statement that covers the whole domain, and that domain is finite, then the function can be tested to prove its correctness.

Now, I should think the Muad'Dib would know that; perhaps you should go back to studying with the Mentats. :)

/Joe
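
To make the finite-domain case concrete, here is a minimal sketch of such an exhaustive test (the name notIsCorrect is made up for illustration):

-- The whole domain of Bool is [False, True], so checking the specification
-- at both points is a complete test, and hence a proof of correctness.
notIsCorrect :: Bool
notIsCorrect = and [ not x == expected x | x <- [False, True] ]
  where
    expected False = True
    expected True  = False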

It is possible for functions with compact domain, not just finite.
-- Eugene Kirpichov Web IR developer, market.yandex.ru

For example, it is possible to prove correctness of a function `negatedHead :: [Bool] -> Bool` by testing it on `True:undefined` and `False:undefined`.
-- Eugene Kirpichov Web IR developer, market.yandex.ru
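
A minimal sketch of what such a function and its two tests might look like; the implementation below is just one plausible definition (the thread doesn't give one), and the test name is made up:

negatedHead :: [Bool] -> Bool
negatedHead (b:_) = not b

-- Laziness means neither test forces the undefined tail, and by monotonicity a
-- correct result on True:undefined fixes the result on every list starting
-- with True (likewise for False).
negatedHeadTests :: Bool
negatedHeadTests = negatedHead (True  : undefined) == False
                && negatedHead (False : undefined) == True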

Also google "seemingly impossible functional programs".
-- Eugene Kirpichov Web IR developer, market.yandex.ru

Oh, thanks for the example; I suppose you can disregard my other message. I suppose this is a bit like short-circuiting, no?

What do you mean by short-circuiting here, and what is the connection? The property that allows us to deduce global correctness from correctness on under-defined inputs is just continuity in the topological sense.
-- Eugene Kirpichov Web IR developer, market.yandex.ru

I mean that, like in the definition of `||`:

True  || _ = True
False || x = x

If you generalize this to `or`-ing a list of inputs, e.g.

foldr (||) False list_of_bools

you can 'short-circuit' the test as soon as you find a True value. This is actually not the greatest example, since you can't test it in a finite number of tests, but you can test "half" the function by testing that lists like True:undefined or False:False:False:...:True:undefined return True. It's "short-circuiting" in the sense that it (like the `||` function) doesn't necessarily need to see all of its arguments to return a correct result.

/Joe
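
A minimal sketch of that kind of partial-input test (the names or' and partialOrTests are made up for illustration):

or' :: [Bool] -> Bool
or' = foldr (||) False

-- (||) ignores its second argument when the first is True, so a True at any
-- finite position decides the result without forcing the rest of the list.
partialOrTests :: Bool
partialOrTests = or' (True : undefined)
              && or' (False : False : False : True : undefined)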

Really? How? That sounds very interesting. I've got a fair knowledge of basic topology, and I'd love to see an application to programming...

Joe Fredette wrote:
Really? How? That sounds very interesting, I've got a fair knowledge of basic topology, I'd love to see an application to programming...
Compactness is one of the most powerful concepts in mathematics: on the one hand it makes it possible to reduce many infinite problems to finite ones (inherent in its definition: for every open cover there is a finite subcover); on the other hand it is often easy to prove compactness, thanks to Tychonoff's theorem (any product of compact spaces is compact). The connection to computing science is very nicely explained in

http://math.andrej.com/2007/09/28/seemingly-impossible-functional-programs/

I found this paragraph particularly enlightening:

"""
modulus :: (Cantor -> Integer) -> Natural
modulus f = least (\n -> forevery (\a -> forevery (\b -> eq n a b --> (f a == f b))))

This [...] finds the modulus of uniform continuity, defined as the least natural number `n` such that `forall alpha, beta. alpha =_n beta implies f(alpha) = f(beta)`, where `alpha =_n beta iff forall i < n. alpha_i = beta_i`. What is going on here is that computable functionals are continuous, which amounts to saying that finite amounts of the output depend only on finite amounts of the input. But the Cantor space is compact, and in analysis and topology there is a theorem that says that continuous functions defined on a compact space are uniformly continuous. In this context, this amounts to the existence of a single `n` such that for all inputs it is enough to look at depth `n` to get the answer (which in this case is always finite, because it is an integer).
"""

Cheers
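
For readers who want to run the quoted code, here is a hedged sketch of the supporting definitions it relies on, reconstructed along the lines of Escardó's post linked above (the post's own code differs in detail, e.g. it uses a dedicated Bit type rather than Bool); with these definitions the quoted modulus typechecks, though the naive search is very slow:

type Natural = Integer
type Cantor  = Natural -> Bool             -- infinite sequences of bits

(#) :: Bool -> Cantor -> Cantor            -- prepend a bit to a sequence
x # a = \i -> if i == 0 then x else a (i - 1)

-- Exhaustive search over the (compact!) Cantor space, relying on laziness;
-- it terminates whenever the predicate p is total.
find :: (Cantor -> Bool) -> Cantor
find p = if forsome (\a -> p (True # a))
         then True  # find (\a -> p (True  # a))
         else False # find (\a -> p (False # a))

forsome, forevery :: (Cantor -> Bool) -> Bool
forsome p  = p (find p)
forevery p = not (forsome (not . p))

eq :: Natural -> Cantor -> Cantor -> Bool  -- agree on the first n bits
eq n a b = all (\i -> a i == b i) [0 .. n - 1]

(-->) :: Bool -> Bool -> Bool              -- logical implication
p --> q = not p || q

least :: (Natural -> Bool) -> Natural      -- least n satisfying p
least p = head [n | n <- [0 ..], p n]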

That has got to be the single awesomest thing I have ever seen ever... The first time I tried to read through the "Seemingly Impossible Functional Programs" post, I understood none of it. Now it seems obvious. I Love Math... Thanks for the explanation!

/Joe

Joe Fredette wrote:
That has got to be the single awesomest thing I have ever seen ever...
I was dumbfounded, too, when I first read about this.

BTW, and completely off-topic: if you like this, you might have fun too with Conor McBride's discovery that data types can be differentiated, and the result is even useful: it corresponds to what is known (to some) as a Zipper:

http://www.cs.nott.ac.uk/~ctm/diff.pdf

Moreover, we can also give a sensible and useful interpretation to finite difference quotients of types:

http://blog.sigfpe.com/2009/09/finite-differences-of-types.html

which McBride calls "dissections" and discusses in some depth here:

http://strictlypositive.org/CJ.pdf

What is most astonishing to me is that these constructions work even though there is neither subtraction nor division defined on data types, only addition and multiplication (there are neutral elements for both, but not necessarily inverses).

Cheers
Ben
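
As a tiny, hedged illustration of the "derivative of a type is its type of one-hole contexts" idea from the first paper (the names below are mine, not McBride's):

-- Pairs: F x = (x, x) behaves like x^2, so F' x behaves like 2*x: a hole in
-- one of the two slots, with the other slot's value remembered.
data PairHole x = HoleFst x   -- hole in the first slot, second value kept
                | HoleSnd x   -- hole in the second slot, first value kept

plugPair :: PairHole x -> x -> (x, x)
plugPair (HoleFst y) x = (x, y)
plugPair (HoleSnd y) x = (y, x)

-- Lists: L x = 1 + x * L x differentiates to L' x = L x * L x, i.e. the
-- elements before the hole and the elements after it: the classic zipper.
data ListHole x = ListHole [x] [x]   -- (reversed prefix, suffix)

plugList :: ListHole x -> x -> [x]
plugList (ListHole before after) x = reverse before ++ [x] ++ after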

I read about differentiating types and finite differences for types; that was awesome. But when I read the seemingly-impossible post the first time, I didn't have enough background in topology to understand what was going on.

Differentiating types is usually how I introduce the power of Haskell's type system to the other students in my Math department. I actually showed my algebra professor how non-recursive datatypes in n variables are just multinomials over a commutative (up to isomorphism) ring, and how you can use that to reason about equivalences between types based on the polynomial they're isomorphic to. My algebra professor, who does not program and knows very little about it, sat back in his chair and said, "So that's what all the fuss is about? Programming is easy!"

Really, all this goes to show that a rich, algebraic type system like Haskell's is invaluable because it harnesses the power of mathematics that we've been working at for years. Free Theorems indeed!
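
A small, hedged example of that polynomial reasoning (my own, not from the thread): as polynomials a + a = 2*a, so Either a a should be isomorphic to (Bool, a), and the two witnesses below are mutually inverse:

toPair :: Either a a -> (Bool, a)
toPair (Left  x) = (False, x)
toPair (Right x) = (True,  x)

fromPair :: (Bool, a) -> Either a a
fromPair (False, x) = Left  x
fromPair (True,  x) = Right x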

On Mon, Oct 12, 2009 at 10:42 AM, muad wrote:
Is it possible to prove correctness of a function by testing it? I think the tests would have to be constructed by inspecting the shape of the function definition.
not True == False
not False == True

Done. Tested :-)

Less trivially, consider a function of signature

swap :: (a,b) -> (b,a)

We don't need to test it at all; it can only do one thing, swap its arguments. (Assuming it terminates.)

But consider:

swap :: (a,a) -> (a,a)

If I find that swap (1,2) == (2,1), then I know that swap (x,y) == (y,x) for all x and y. We only need one test. The reason is that we have a free theorem that says that for all functions f of type (a,a) -> (a,a) this holds:

f (g a, g b) == let (x',y') = f (a,b) in (g x', g y')

For any x and y define

g 1 = x
g 2 = y

Then

f (x,y) == f (g 1, g 2)
        == let (x',y') = f (1,2) in (g x', g y')
        == let (x',y') = (2,1)   in (g x', g y')
        == (g 2, g 1)
        == (y,x)

In other words, free theorems can turn an infinite amount of testing into a finite test. (Assuming termination.)

-- Dan
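
A minimal sketch of the single-test argument, with one possible implementation of swap and made-up test names:

swap :: (a, a) -> (a, a)
swap (x, y) = (y, x)

-- the one concrete test
swapTestOnce :: Bool
swapTestOnce = swap (1 :: Int, 2) == (2, 1)

-- the general law that, by the free theorem, follows from that single test
swapLaw :: Eq a => a -> a -> Bool
swapLaw x y = swap (x, y) == (y, x)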

I completely forgot about free theorems! Do you have some links to resources? I tried learning about them a while ago, but couldn't get a grasp on them... Thanks.

/Joe

Joe,
On Mon, Oct 12, 2009 at 11:03 AM, Joe Fredette wrote:
I completely forgot about free theorems! Do you have some links to resources?
There is Wadler's paper, but I do find it tricky: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.38.9875

You can play with the generator here: http://linux.tcs.inf.tu-dresden.de/~voigt/ft/

The results can look unreadable at first, but I find that if I copy them onto paper and do all of the remaining substitutions manually (like I had to do with (a,b) -> (b,a)) you end up with something readable. If you keep doing this you'll get a good intuition for what the free theorem for a type will look like.

-- Dan

I'm making the same mistake repeatedly. In both my mails there are
places where I said (a,b) or (b,a) when I meant (a,a).
--
Dan

Do you know any category theory? What helped me finally grok free theorems is that in the simplest cases, the free theorem for a polymorphic function is just a naturality condition. For example, the free theorem for

flatten :: Tree a -> [a]

is precisely the statement that flatten is a natural transformation from the Tree functor to the list functor:

fmap_[] g . flatten == flatten . fmap_Tree g

It gets more complicated than this, of course, but that's the basic idea.

-Brent
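
A hedged sketch of that law in code, with a hypothetical Tree type (the thread doesn't fix one):

data Tree a = Leaf | Node (Tree a) a (Tree a)

instance Functor Tree where
  fmap _ Leaf         = Leaf
  fmap g (Node l x r) = Node (fmap g l) (g x) (fmap g r)

flatten :: Tree a -> [a]
flatten Leaf         = []
flatten (Node l x r) = flatten l ++ [x] ++ flatten r

-- naturality, i.e. the free theorem: fmap g . flatten == flatten . fmap g
naturalityHolds :: Eq b => (a -> b) -> Tree a -> Bool
naturalityHolds g t = map g (flatten t) == flatten (fmap g t)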

Sadly not enough; I understand the basics, but the whole "proof <=> this diagram commutes" thing still seems like voodoo to me. There is a section coming up in my Topology ISP that will be on CT, so I hope I will be able to gain some purchase on the subject, at least enough to build up a working understanding on my own.

I have a practical understanding of Functors and Natural Transformations, so working a bit with these free theorem things is kind of fun. Actually, another germane-if-random question: why isn't there a natural transformation class? Something like:

class (Functor f, Functor g) => NatTrans g f a where
    trans :: f a -> g a

So your flatten function becomes a `trans`, a la

instance NatTrans [] Tree a where
    trans = flatten

In fact, I'm going to attempt to do this now... Maybe I'll figure out why before you reply. :)

/Joe

On Mon, Oct 12, 2009 at 8:15 PM, Joe Fredette wrote:
Actually, another germane-if-random question: why isn't there a natural transformation class?
Diagrams are just a graphical depiction of systems of equations: every pair of paths with the same start and end point are equal. I don't care for diagrams that much, and that graphical depiction isn't that important for CT, though it has some mnemonic value.

As for a NatTrans class, your example is broken in several ways. Natural transformations, though, are trivial in Haskell:

type NatTrans f g = forall a. f a -> g a

flatten :: NatTrans Tree []

I.e. a natural transformation between Haskell Functors is just a polymorphic function between them. In general, a polymorphic function is a dinatural transformation, and the dinaturality conditions are the free theorems (or at least, special cases of the free theorem for the type, which I believe, but haven't proven, implies the full free theorem).
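
A self-contained, hedged version of the type-synonym approach (RankNTypes is needed for the synonym; maybeToList is just an illustrative transformation, mirroring the one in Data.Maybe):

{-# LANGUAGE RankNTypes #-}

type NatTrans f g = forall a. f a -> g a

-- any polymorphic function between Functors fits, e.g. mapping Maybe to lists
maybeToList :: NatTrans Maybe []
maybeToList Nothing  = []
maybeToList (Just x) = [x]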

I fiddled with my previous idea -- the NatTrans class -- a bit; the results are here [1]. I don't know enough really to know if I got the NT law right, or even if the class definition is right.

Any thoughts? Am I doing this right/wrong/in between? Is there any use for a class like this? I listed a couple of ideas for use cases in the paste, but I have no idea of the applicability of either of them.

/Joe

[1] http://hpaste.org/fastcgi/hpaste.fcgi/view?id=10679#a10679

Joe Fredette wrote:
I fiddled with my previous idea -- the NatTrans class -- a bit, the results are here[1], I don't know enough really to know if I got the NT law right, or even if the class defn is right.
Any thoughts? Am I doing this right/wrong/inbetween? Is there any use for a class like this? I listed a couple ideas of use-cases in the paste, but I have no idea of the applicability of either of them.
A few problems there:

(1) The |a| should be natural, i.e. universally quantified in the class methods, not an argument of the typeclass.

(2) Just because there's a natural transformation from F to G does not mean there's a "related" natural transformation back. The law you want is:

forall (X :: *) (Y :: *) (f :: X -> Y). eta_Y . fmap_F f == fmap_G f . eta_X

(3) There can be more than one natural transformation between two functors, which means a type class is the wrong way to go about things, since there can be only one instance for a given set of type parameters. Consider for instance:

type F a = [a]
type G a = [a]

identity :: F a -> G a
identity [] = []
identity (x:xs) = (x:xs)

reverse :: F a -> G a
reverse = go []
    where
    go ys []     = ys
    go ys (x:xs) = go (x:ys) xs

nil :: F a -> G a
nil = const []

...

-- Live well, ~wren

Dan Piponi wrote:
consider a function of signature
swap :: (a,b) -> (b,a)
We don't need to test it at all, it can only do one thing, swap its arguments. (Assuming it terminates.)
swap = undefined

Terminates and does not swap its arguments :-) What do free theorems say about this, exactly -- do they just implicitly exclude this possibility?

Neil.

Neil Brown wrote:
swap = undefined
Terminates and does not swap its arguments :-) What do free theorems say about this, exactly -- do they just implicitly exclude this possibility?
Normally, one presumes that "undefined" (id est, calling "error") is equivalent to looping, except that calling "error" is pragmatically speaking better: it is nicer to the caller of your function (they/you get to see a somewhat more descriptive error message instead of 100% CPU without any results).

Regards,
Jochem

-- Jochem Berndsen | jochem@functor.nl | jochem@牛在田里.com

On Mon, Oct 12, 2009 at 11:31 AM, Neil Brown wrote:
swap = undefined
Terminates and does not swap its arguments :-) What do free theorems say about this, exactly -- do they just implicitly exclude this possibility?
I'm terrible at reasoning about functions with bottoms (i.e. undefined or non-termination). But I suspect that a property like this holds: if the type signature of a function f is polymorphic in a, and f doesn't produce bottom for one particular value x of type a, for some particular type a, then f can never produce bottom for any choice of x at any choice of type a. (Which is not to say it might not fail, but that the failure will be an issue with x, not f.)

The intuition behind this is that if a function is polymorphic in a then it never examines the a, so even if the a is bottom, the function can never know it. But it could fail because f additionally accepts as an argument a polymorphic function that itself accepts a's, and that fails. But then it wouldn't be f's fault; it'd be the fault of the function you passed in.

This is very poor of me. There's probably a nice precise formulation of what I've just said :-)

-- Dan

Could that nice precise formulation simply be Scott continuity, which in turn preserves compactness through composition and under application?

Neil Brown writes:
swap :: (a,b) -> (b,a)
We don't need to test it at all, it can only do one thing, swap its arguments. (Assuming it terminates.)
swap = undefined
Terminates and does not swap its arguments :-)
I think this counts as non-termination, and that for semantic purposes, any bottom value -- infinite loops, exceptions, undefined -- is treated equally.

-k
-- If I haven't seen further, it is by standing in the footprints of giants

I have come across an example:
However, the following proof of the lovely identity

sum . map (^3) $ [1..n] = (sum $ [1..n])^2

is perfectly rigorous.
Proof: True for n = 0, 1, 2, 3, 4 (check!), hence true for all n. QED.
In order to turn this into a full-fledged proof, all you have to do is mumble the following incantation: Both sides are polynomials of degree ≤ 4, hence it is enough to check the identity at five distinct values.
from http://www.math.rutgers.edu/~zeilberg/mamarim/mamarimPDF/enquiry.pdf

Now this sort of idea surely applies to more than just number theory?
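
A quick sketch of the finite check in Haskell (names made up): both sides are polynomials in n of degree at most 4, so agreement at five points settles the identity for every natural n.

lhs, rhs :: Integer -> Integer
lhs n = sum (map (^ 3) [1 .. n])
rhs n = sum [1 .. n] ^ 2

checkFivePoints :: Bool
checkFivePoints = all (\n -> lhs n == rhs n) [0, 1, 2, 3, 4]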

On 2009-11-09 14:22 -0800 (Mon), muad wrote:
Proof: True for n = 0, 1, 2, 3, 4 (check!), hence true for all n. QED. ...
Actually, the test that it's true for 0 through 4 is not sufficient for a proof; you also need to prove in some way that you need do no further tests. Showing that particular point in this case appears to me to lie outside the realm of testing.
cjs
--
Curt Sampson

On 10 Nov 2009, at 05:52, Curt Sampson wrote:
Actually, the test that it's true for 0 through 4 is not sufficient for a proof;
It's enough testing...
you also need to prove in some way that you need do no further tests.
...and you can calculate how much testing is enough by computing an upper bound on the polynomial degree of the expression. (The summation operator increments degree, the difference operator decreases it, like in calculus.)
Showing that particular point in this case appears to me to lie outside the realm of testing.
You need to appeal to a general theorem about expressions of a particular form, but if that theorem is in the library, the only work you need do is to put the expressions in that form. This is sometimes described as the "reflective" proof method: express problem in language capturing decidable fragment; hit with big stick.

There are lots of other examples of this phenomenon. The zero-one principle for sorting networks is a favourite of mine. I'm sure there's more to be found here. It smells a bit of theorems for free: the strength comes from knowing what you don't.

All the best
Conor
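
As a hedged aside on the zero-one principle mentioned above (my own toy example, not Conor's): a comparator network sorts every input iff it sorts every 0/1 input, so a three-input network needs only 2^3 Boolean tests.

-- swap the values at positions i and j (assumed i < j) if they are out of order
comparator :: Ord a => Int -> Int -> [a] -> [a]
comparator i j xs
  | xs !! i <= xs !! j = xs
  | otherwise          = [ pick k x | (k, x) <- zip [0 ..] xs ]
  where
    pick k x
      | k == i    = xs !! j
      | k == j    = xs !! i
      | otherwise = x

-- a three-input sorting network: compare (0,1), then (0,2), then (1,2)
network3 :: Ord a => [a] -> [a]
network3 = comparator 1 2 . comparator 0 2 . comparator 0 1

-- the zero-one principle says these eight tests cover all length-3 inputs
zeroOneCheck :: Bool
zeroOneCheck = and [ sorted (network3 bs)
                   | bs <- sequence (replicate 3 [False, True]) ]
  where
    sorted ys = and (zipWith (<=) ys (tail ys))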

Conor McBride wrote:
...and you can calculate how much testing is enough by computing an upper bound on the polynomial degree of the expression. (The summation operator increments degree, the difference operator decreases it, like in calculus.)
This is sometimes described as the "reflective" proof method: express problem in language capturing decidable fragment; hit with big stick.
The fact that summation maps polynomials to polynomials needs to be proven, of course, and I guess that this proof is not much simpler than proving

sum . map (^3) $ [1..n] = (sum $ [1..n])^2

in the first place. But once proven, the former can be reused ad libitum.

Regards,
apfelmus

-- http://apfelmus.nfshost.com

On 2009-11-10 08:24 +0000 (Tue), Conor McBride wrote:
This is sometimes described as the "reflective" proof method: express problem in language capturing decidable fragment; hit with big stick.
Well, that's pretty sweet. *And* you get to use a big stick. Who can argue with that?
I'm sure there's more to be found here. It smells a bit of theorems for free: the strength comes from knowing what you don't.
Yup. But this seems to me to be heading towards keeping proofs with your code (which we're already doing in sort of a simple way with our type systems; how long can it be until we're embedding Coq or Agda proofs in our production Haskell code?). That's heading well away from testing. (And I approve.)
cjs
--
Curt Sampson
participants (15)
- Ben Franksen
- Brent Yorgey
- Conor McBride
- Curt Sampson
- Dan Piponi
- Dan Weston
- Derek Elkins
- Eugene Kirpichov
- Heinrich Apfelmus
- Jochem Berndsen
- Joe Fredette
- Ketil Malde
- muad
- Neil Brown
- wren ng thornton