Re: Proofs and commercial code -- was Haskell Weekly News: Issue 85 - September 13, 2008

Daryoush,
One of the subtle points about computation is that -- via Curry-Howard --
well-typed programs are *already* proofs. They may not be the proof you were
seeking (there are a lot of programs of type Int -> Int), or prove the theorem
you were seeking to prove (Ord a => [a] -> [a] is a lot weaker than 'this
program sorts lists of ordered types'), but they are witnesses of any type
they type-check against. In a system that has principal types, they are
proofs of their principal type. In this sense, the utility of proofs is
widespread: they are programs.
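To make the Curry-Howard reading concrete (a standard illustration, not from the original message): any inhabitant of the polymorphic type below is, read as logic, a proof that A and B together imply B and A, since pairs model conjunction and arrows model implication.

```haskell
-- Via Curry-Howard, a value of this type is a proof of (A /\ B) -> (B /\ A):
-- the pair type models conjunction, the function arrow models implication.
-- Parametricity means the definition cannot cheat by inspecting a or b.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = print (swap (1 :: Int, "two"))  -- ("two",1)
```

Type-checking `swap` is the whole proof; no separate argument is needed.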
There is a really subtle dance between types that are rich enough to express
the theorems you would have your programs be proofs of and the termination
of type-checking for that type system. That's why people often do proofs by
hand -- because the theorems they need to prove require a greater degree of
expressiveness than is available in the type system supported by the
language in which the programs are expressed. Research has really been
pushing the envelope of what's expressible in types -- from region and
resource analysis to deadlock-freedom. Again, in that setting the program is
a witness of a property like
- won't leak URLs to unsavory agents
- won't hold on to handles past their garbage-collect-by date
   - won't get caught in a 'deadly embrace' (A is waiting for B, B is waiting for A)
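As a small sketch of how a richer type can carry more of the theorem (my illustration, using GHC extensions; the names Vec and vmap are not from the original message): with length-indexed vectors, the type of the mapping function states that mapping preserves length, and type-checking the definition proves it.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Natural numbers promoted to the type level.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- The type says: output length equals input length.
-- Getting this definition past the type-checker is a proof of that claim.
vmap :: (a -> b) -> Vec n a -> Vec n b
vmap _ VNil         = VNil
vmap f (VCons x xs) = VCons (f x) (vmap f xs)

-- Forget the length index so we can observe the result.
toList :: Vec n a -> [a]
toList VNil         = []
toList (VCons x xs) = x : toList xs

main :: IO ()
main = print (toList (vmap (+1) (VCons 1 (VCons 2 VNil))))  -- [2,3]
```

The ordinary type [a] -> [b] admits length-changing implementations; Vec n a -> Vec n b does not, so the theorem has moved into the type.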
Sometimes, however, and often in mission critical code, you need stronger
assurances, like
- this code really does implement a FIFO queue, or
   - this code really does implement the sliding window protocol, or
- this code really does implement two-phase-commit-presumed-abort
It's harder -- in fact i think it's impossible for bullet item 2 above -- to
squeeze these into terminating type-checks. In those cases, you have to
escape out to a richer relationship between 'theorem' (aka type) and 'proof'
(aka program). Proof assistants, like Coq or HOL or..., can be very helpful
for automating some of the tasks, leaving the inventive bits to the humans.
In my experience, a proof of a theorem about some commercial code is pursued
when the cost of developing the proof is less than some multiple of the cost
of errors in the program in the life-cycle of that program. Intel, and other
hardware vendors, for example, can lose a lot of business when the
implementation of floating point operations is buggy; and, there is a
breaking point where the complexity/difficulty/cost of proving the
implementation correct is sufficiently less than that of business lost that
it is to their advantage to obtain the kind of quality assurance that can be
had from a proof of correctness.
One other place where proofs of correctness will be pursued is in
mathematical proofs themselves. To the best of my knowledge, nothing
mission-critical is currently riding on the proof of the 4-color theorem.
The proof -- last i checked -- required programmatic checking of many cases.
Proving the program -- that implements the tedious parts of the proof --
correct is pursued because of the culture and standard of mathematical
proof.
Best wishes,
--greg
Date: Sat, 13 Sep 2008 22:24:50 -0700
From: "Daryoush Mehrtash"

Does it make sense to use proof to validate that a given monad
implementation obeys its laws?
daryoush
Date: Sat, 13 Sep 2008 22:24:50 -0700
From: "Daryoush Mehrtash"
Subject: Re: [Haskell-cafe] Haskell Weekly News: Issue 85 - September 13, 2008
To: "Don Stewart"
Cc: Haskell Cafe
What I am trying to figure out is that, say, on the code for the IRC bot that is shown here
http://www.haskell.org/haskellwiki/Roll_your_own_IRC_bot/Source
What would theorem proofs do for me?
Daryoush

--
L.G. Meredith
Managing Partner, Biosimilarity LLC
806 55th St NE, Seattle, WA 98105
+1 206.650.3740
http://biosimilarity.blogspot.com

Daryoush Mehrtash wrote:
Does it make sense to use proof to validate that a given monad implementation obeys its laws?
Absolutely, and this is usually done by hand. Take for instance the state monad.

    newtype State s a = State { runState :: s -> (a,s) }

    instance Monad (State s) where
        return x        = State $ \s -> (x,s)
        (State m) >>= f = State $ \s ->
            let (x,s') = m s in runState (f x) s'

The first monad law is

    return a >>= f = f a

and here is a proof:

      return a >>= f
    = { definition of return }
      State (\s -> (a,s)) >>= f
    = { definition of >>= }
      State $ \s -> let (x,s') = (\s -> (a,s)) s in runState (f x) s'
    = { apply lambda abstraction }
      State $ \s -> let (x,s') = (a,s) in runState (f x) s'
    = { pattern binding }
      State $ \s -> runState (f a) s
    = { eta reduction }
      State $ runState (f a)
    = { State . runState = id }
      f a

Exercise: Prove the other monad laws, i.e.

    m >>= return = m
    (m >>= g) >>= h = m >>= (\x -> g x >>= h)

Regards,
apfelmus
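The equational proof above can also be spot-checked by running both sides at sample arguments (my sketch, not part of the original message: the Kleisli arrow f below is an arbitrary example, and the Functor/Applicative instances are added only because modern GHC requires them for any Monad instance).

```haskell
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap g (State m) = State $ \s -> let (x, s') = m s in (g x, s')

instance Applicative (State s) where
  pure x = State $ \s -> (x, s)
  State mf <*> State mx = State $ \s ->
    let (g, s')  = mf s
        (x, s'') = mx s'
    in (g x, s'')

instance Monad (State s) where
  return = pure
  State m >>= f = State $ \s -> let (x, s') = m s in runState (f x) s'

-- An arbitrary Kleisli arrow, chosen just for the check.
f :: Int -> State Int Int
f x = State $ \s -> (x + s, s + 1)

main :: IO ()
main = do
  -- Left identity at a = 3, initial state 10: both sides must agree.
  print (runState (return 3 >>= f) 10)  -- (13,11)
  print (runState (f 3) 10)             -- (13,11)
```

Of course, agreement at sample points is evidence, not proof; the equational derivation above is what establishes the law for all inputs.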
participants (3): apfelmus, Daryoush Mehrtash, Greg Meredith