
On Sat, 28 Jul 2012, damodar kulkarni wrote:
So a language is referentially transparent if replacing a sub-term with
another with the same denotation doesn't change the overall meaning? But then isn't any language RT with a sufficiently cunning denotational semantics? Or even a dumb one that gives each term a distinct denotation.
That's neat ... I mean, by performing sufficiently complicated brain gymnastics, one can do equational reasoning on C subroutines (functions!) too.
So, there is no "big" difference between C and Haskell when it comes to equational reasoning...
Regards, Damodar
The general theory of equational reasoning is often applicable because often we have a collection of sentences S of the form x = y, where x and y are "terms", and often we have a set of models M. Given a system of background assumptions, we can ask of the above sentence, call it s, whether in some particular model m it is the case that eval(x) = eval(y), where now the sentence is not a sentence of S, but rather a fact, under assumptions, about the model m, namely that what the term x means in m is equal to what the term y means in m. Here we have written what the term x refers to in m as "eval(x)".

For example, x might be "(vector-ref part-of-background 17)" in some Lisp source code, given as text, and y might be "(vector-ref part-of-background 116)", where part-of-background is some vector which comes from, in general, both the model m and, this is a "scare phrase", the background assumptions. Oi, please forgive the tangle here!

Note that the = in the sentence s is not the same = as in the model m, and in a first text we should write them with different symbols. As terms, x and y are not equal-as-terms. It is only their "values" in m that are equal-in-m.

(Obtuse Precisian Crank Further Remark: And the general theory of equality often applies to whatever sort of object lies in the vector part-of-background at position 17, and also to whatever lies at position 116. THE ARGUMENT IS COMPLETE! But see http://iml.univ-mrs.fr/~girard/TS2.pdf for more and different.)

Personal remark: I remember the wonderful feeling of having a weight lifted from me when I read this in Roger C. Lyndon's "Notes on Logic": Oi, ah, I cannot find the quote tonight. The quote was something like "In this book we shall work at the level of precision which today is standard in works on the theory of groups, say.", by which Lyndon meant that certain extraordinarily finicky considerations were, by the date of composition, standard, and the reader might be trusted to handle these without error.

Of course, as a beginner coming to Haskell, and as an old Lisper (my first Lisp was LISP 1.5, for which see http://www.softwarepreservation.org/projects/LISP/book/LISP%201.5%20Programm...), I wish that Haskell made more of such finicky details^Wbasics patent to my un-practiced eye. Naturally what I'd like is Haskell source code presented in the usual parenthesized prefix notation of Lisp, with, of course, every term having an immediately available complete explicit typing, which could perhaps be presented when one clicks on the term (perhaps plists might be part of the mechanism). BRING OUT THE CORE!

oo--JS.
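To make the term/denotation distinction above concrete in Haskell, here is a minimal sketch of my own (the types and names are invented for illustration, not taken from the post): terms are plain syntax, and a "model" is just an environment that eval consults.

    -- Terms are syntax; a model here is an environment giving variables values.
    import qualified Data.Map as M

    data Term = Lit Int | Var String | Add Term Term
      deriving (Eq, Show)

    type Model = M.Map String Int

    eval :: Model -> Term -> Maybe Int
    eval _ (Lit n)   = Just n
    eval m (Var v)   = M.lookup v m
    eval m (Add a b) = (+) <$> eval m a <*> eval m b

    -- x and y are distinct as terms (not equal-as-terms) ...
    x, y :: Term
    x = Add (Lit 2) (Lit 2)
    y = Var "four-plus-four"

    -- ... yet in this particular model their denotations coincide:
    model :: Model
    model = M.fromList [("four-plus-four", 4)]

    -- eval model x == eval model y  ==>  True, even though  x /= y.
    -- The = of the sentence s (equality of denotations in m) is not the
    -- = on terms.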
On Sat, Jul 28, 2012 at 1:47 AM, Alexander Solla wrote:
On Fri, Jul 27, 2012 at 12:06 PM, Ross Paterson wrote:
On Fri, Jul 27, 2012 at 07:19:40PM +0100, Chris Dornan wrote:
So a language is referentially transparent if replacing a sub-term with another with the same denotation doesn't change the overall meaning?
Isn't this just summarizing the distinguishing characteristic of a denotational semantics?
Right, so where's the substance here?
My understanding is that RT is about how easy it is to carry out _syntactical_ transformations of a program that preserve its meaning. For example, if you can freely and naively inline a function definition without having to worry too much about context, then your PL is deemed to possess lots of RT-goodness (according to FP propaganda, anyway). Note that you typically can't freely inline function definitions in a procedural programming language, because the actual arguments to the function may involve dastardly side effects; even with a strict function-calling semantics, divergence will complicate matters.
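For concreteness, a hedged Haskell sketch of the inlining point (the example names are mine, not from the thread): a pure binding can be inlined at every use site, while duplicating an effectful action changes behaviour, and the types keep the two cases apart.

    -- Pure code: inlining the let-bound expression preserves meaning.
    pureSum :: Int -> Int
    pureSum y = let x = y * 3 in x + x     -- same meaning as (y * 3) + (y * 3)

    -- Effects: the first action reads one number, the second reads two.
    -- Naive duplication is visibly different, and the IO type makes the
    -- distinction explicit rather than silent.
    readOnceDouble :: IO Int
    readOnceDouble = do { x <- readLn; return (x + x) }

    readTwiceAdd :: IO Int
    readTwiceAdd = do { x <- readLn; y <- readLn; return (x + y) }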
Ah, but we only think that because of our blinkered world-view.
Another way of looking at it is that the denotational semanticists have created a beautiful language to express the meanings of all those ugly languages, and we're programming in it.
A third way to look at it is that mathematicians, philosophers, and logicians invented the semantics denotational semanticists have borrowed, specifically because of the properties derived from the philosophical commitments they made. Computer science has a habit of taking ideas from other fields and merely renaming them: "denotational semantics" is known as "model theory" to everyone else.
Let's consider a referentially /opaque/ context: quotation marks. We might say "It is necessary that four and four are eight." And we might also say "The number of planets is eight." But we cannot unify the two by substitution and still preserve truth-functional semantics. We would get "It is necessary that four and four are the number of planets" (via strict substitution joining on 'eight'), or a more idiomatic phrasing like "It is necessary that the number of planets is four and four".
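A small illustrative sketch of my own (not from the post) of what an opaque context looks like in Haskell terms: a predicate on the *text* of a term rather than on its value. Both quoted terms below denote eight, yet the context distinguishes them, so substituting co-denoting terms does not preserve its result.

    -- An "opaque context": it inspects the quotation, not the denotation.
    mentionsPlanets :: String -> Bool
    mentionsPlanets quoted = quoted == "the number of planets"

    termA, termB :: String
    termA = "four and four"          -- denotes 8
    termB = "the number of planets"  -- also denoted 8 in 2012 astronomy

    -- mentionsPlanets termA == False, mentionsPlanets termB == True:
    -- equal denotations, different results inside the quoting context.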
This is a big deal in logic, because there are a lot of languages which quantify over real things, like time, possibility and necessity, etc., and some of these are not referentially transparent. In particular, a model for such a language will have to use "frames" to represent context, and there typically is not a unique way to create the framing relation for a logic.
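As a rough sketch of the "frames" idea (all names and types here are mine, for illustration only): in a Kripke-style model, "necessarily p" at a world depends on which worlds are accessible from it, and different framing relations over the same worlds can disagree.

    type World = Int

    data Frame = Frame
      { worlds :: [World]
      , access :: World -> [World]   -- the framing (accessibility) relation
      }

    -- "Necessarily p" holds at w iff p holds at every world accessible from w.
    box :: Frame -> (World -> Bool) -> World -> Bool
    box f p w = all p (access f w)

    -- Two frames over the same worlds, differing only in the relation:
    reflexive, emptyRel :: Frame
    reflexive = Frame [0, 1] (\w -> [w])
    emptyRel  = Frame [0, 1] (const [])

    -- box reflexive even 1 == False, but box emptyRel even 1 == True
    -- (vacuously): the choice of framing relation matters, and nothing
    -- forces a unique choice.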