
On Fri, Dec 11, 2009 at 10:04 AM, pbrowne wrote:
Two questions: 1) Is this correspondence between operational and logical semantics a desirable property of *purity* in the Haskell world?
2) What, if anything, prevents the execution of a Haskell term from being a proof in equational logic?
It seems that Haskell's type system has this logical purity [4].
My motivation for such questions is that I am researching various *algebraic styles* such as Haskell, OCaml, Maude, and CafeOBJ. My current general research question is: how closely do executable terms in these languages match their *logical* meaning? For example, languages like Java need additional logical scaffolding (such as JML [3]) to give them machine-readable logical meaning.
I admit that I don't fully know what you are talking about. What do you mean by logical meaning, as opposed to what other sort of meaning?

The equational reasoning in Haskell comes from its denotational semantics (which, mind, is not completely formally specified -- but once you expand typeclasses to dictionary-passing style, you get a standard DCPO semantics). If terms are equal in denotation, then they are equal by observation (modulo efficiency). So it is not equations that directly define the semantics of Haskell, but rather a mapping of the syntax to mathematical objects; we derive Haskell's equational reasoning from the equational reasoning in the semantic domain. That's how I see it as a user of the language.

At the most abstract level, omitting some of the practical details of the language (such as the built-in numeric types), Haskell's reduction follows beta reduction with sharing, so at that level I think the answer to (2) is "nothing".

Luke
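(For concreteness, here is a hand-written sketch of the dictionary-passing expansion mentioned above -- the names are illustrative, not what GHC actually generates: a class becomes a record of its methods, an instance becomes a value of that record, and a constrained function takes the dictionary as an ordinary argument.)

class MyEq a where
  eq :: a -> a -> Bool

-- The dictionary is just a record holding the class's methods ...
data MyEqDict a = MyEqDict { eqMethod :: a -> a -> Bool }

-- ... an instance becomes a plain value of that record type ...
dictEqInt :: MyEqDict Int
dictEqInt = MyEqDict { eqMethod = (==) }

-- ... and a class-constrained function becomes one that receives the
-- dictionary explicitly.
elemBy :: MyEqDict a -> a -> [a] -> Bool
elemBy d x = any (eqMethod d x)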
I *guess* that the gap between operational and logical semantics is rather less in Haskell than in Java, and I further *guess* that the gap is less again in Maude or CafeOBJ. I think a good answer to question 2) would remove the *guess* from the Haskell case.
Pat

[1] http://rewriting.loria.fr/documents/plaisted.ps.gz
[2] http://www.forsoft.de/~rumpe/icse98-ws/tr/0202Diaconescu.ps
[3] http://www.eecs.ucf.edu/~leavens/JML/
[4] http://www.haskell.org/haskellwiki/Type_arithmetic
Dan Doel wrote:
The use of equational reasoning in a language like Haskell (as advocated by, say, Richard Bird) is that of writing down naive programs, and transforming them by equational laws into more efficient programs. Simple examples of this are fusion laws for maps and folds, so if we have:
foo = foldr f z . map g . map h
we can deforest it like so:
  foldr f z . map g . map h
=   (map f . map g = map (f . g))
  foldr f z . map (g . h)
=   (foldr f z . map g = foldr (f . g) z)
  foldr (f . g . h) z
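(If you want to check that the fused and unfused versions agree on concrete inputs, here is a small sketch using the QuickCheck library, with f, g and h instantiated to arithmetic functions chosen only for illustration:)

import Test.QuickCheck

-- Naive and deforested versions of the pipeline above,
-- with f = (+), g = (* 2), h = (+ 1).
foo, foo' :: [Int] -> Int
foo  = foldr (+) 0 . map (* 2) . map (+ 1)
foo' = foldr ((+) . (* 2) . (+ 1)) 0

-- The two should agree on every input list.
prop_fused :: [Int] -> Bool
prop_fused xs = foo xs == foo' xs

main :: IO ()
main = quickCheck prop_fused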
Now, this example is so simple that GHC can do it automatically, and that's a side benefit: a sufficiently smart compiler can use arbitrarily complex equational rules to optimize your program. But, for instance, Bird (I think) has a functional pearl paper on incrementally optimizing a sudoku solver* using equational reasoning such as this.
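(On the "GHC can do it automatically" point: such equations are handed to the compiler as RULES pragmas. GHC's base libraries already ship an equivalent "map/map" rule, so the module below is only meant to show the shape such a rule takes:)

module MapFusion where

-- Tell the simplifier that two adjacent maps may be rewritten into one.
{-# RULES
"map/map"  forall f g xs.  map f (map g xs) = map (f . g) xs
  #-}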
Now, you can still do this process in an impure language, but impurity ruins most of these equations. It is not true that:
map f . map g = map (f . g)
if f and g are not pure, because if f has side effects represented by S and g has side effects represented by T, then the left-hand side has side effects like:
T T T ... S S S ...
while the right-hand side has side effects like:
T S T S T S ...
so the equation does not hold.
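(A runnable way to see this from inside Haskell is to make the effects explicit with IO and mapM; the marker strings and the particular f and g below are only for illustration:)

import Control.Monad ((<=<))

-- Effectful stand-ins for f and g: each prints its marker (S or T)
-- before returning a value.
f, g :: Int -> IO Int
f x = putStr "S " >> return (x + 1)
g x = putStr "T " >> return (x * 2)

main :: IO ()
main = do
  -- analogue of map f . map g: all of g's effects, then all of f's
  _ <- mapM f =<< mapM g [1, 2, 3]      -- prints: T T T S S S
  putStrLn ""
  -- analogue of map (f . g): effects interleave per element
  _ <- mapM (f <=< g) [1, 2, 3]         -- prints: T S T S T S
  putStrLn ""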
This is all, of course, tweaking programs using equational rules you've checked by hand. Theorem provers are for getting machines to check your proofs.
Hope that helps.

-- Dan
* If you check the Haskell wiki page on sudoku solvers, there's an entry titled something like 'sudoku à la Bird', written by someone else (Graham Hutton comes to mind), that demonstrates Bird's method.