
2011/12/24 MigMit
Sent from iPad
On 24.12.2011, at 18:50, Alexander Solla
wrote: In the same way, denotational semantics adds features which do not apply to a theory of finite computation.
And why exactly should we limit ourselves to some theory you happen to like?
Because the question was about MY IDEAL. I have spoken at length about why my ideal is preferable to the current state of affairs. You still continue to misunderstand my point, and respond with red herrings.
The /defining/ feature of a bottom is that it doesn't have an interpretation.
What do you mean by "interpretation"?
You know, the basic notion of a function which maps syntax to concrete values.
http://en.wikipedia.org/wiki/Model_theory
But (_|_) IS a concrete value.
Um, perhaps in denotational semantics. But even in that case, it is not a HASKELL value. You seem to be mixing up syntax and semantics.
But they ARE very similar to other values. They can be members of
otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.
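The claim that bottoms can sit inside otherwise meaningful structures is easy to check. This is a minimal sketch of the "fst (1, _|_)" point, using `undefined` as a stand-in for bottom:

```haskell
-- Laziness means a bottom inside a pair is harmless until demanded.
-- 'fst' only forces the pair's constructor, never its second component.
main :: IO ()
main = do
  let pair = (1 :: Int, undefined)
  print (fst pair)  -- prints 1; the bottom in 'snd pair' is never forced
```

Replacing `fst pair` with `snd pair` would, of course, diverge (or raise the `undefined` error), which is the asymmetry the discussion turns on.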
Mere syntax.
So what?
So we give meaning to syntax through our semantics. That is what this whole conversation is about. I am proposing we give Haskell bottoms semantics that bring them in line with the bottoms from other theories, including lattice theory, set theory, and logic, as opposed to using denotational semantics' bottom, which is unrealistic for a variety of reasons. Haskell bottoms can't be compared, due to Rice's theorem. Haskell bottoms cannot be given an interpretation as a Haskell value.

What happens to referential transparency when distinct things are all defined by the same equation?

    ... = let x = x in x

undefined, seq, unsafeCoerce, and many other "primitives" are defined using that equation. (See GHC.Prim.) The Haskell definition for these distinct things /does nothing/. It loops. The semantics we get for them (an error message if we use undefined, a causal side-effect if we use seq, type coercion if we use unsafeCoerce) is supplied /magically/ by the compiler. As far as Haskell, as a language, is concerned, all of these are bottom, and they are all /equal/, because of referential transparency/substitutability. Oops.

So Haskell, as a logic, is telling us that all of these "distinct" bottoms are not so distinct, and that any interpretation function providing semantics should map them all to the same value in the model.
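The point about many "distinct" primitives sharing the one looping equation can be sketched as follows. These are illustrative local definitions, not GHC's actual primitives; they only show that the source-level equation is the same loop in every case:

```haskell
-- An illustrative bottom: the same shape of definition that the source
-- of 'undefined' and friends reduces to. Forcing it diverges.
myBottom :: a
myBottom = let x = x in x

-- Two "distinct" names defined by the identical equation. Referential
-- transparency says expressions with equal definitions are interchangeable,
-- so denotationally both are the same _|_.
loopA, loopB :: Int
loopA = let x = x in x
loopB = myBottom

main :: IO ()
main =
  -- We can only observe these bottoms indirectly: forcing either element
  -- would diverge, but 'length' forces only the list's spine.
  print (length [loopA, loopB])  -- prints 2 without forcing either bottom
```

The behavioural differences the thread mentions (the error message from `undefined`, the forcing effect of `seq`) are added by the compiler and runtime, not by these equations themselves, which is exactly the argument being made above.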
Every other Haskell value /does/ have an interpretation.
So, (_|_) is bad, but (1, _|_) is good?
I did not introduce "good" and "bad" into this discussion. I have merely said (in more words) that I want my hypothetical perfect language to prefer OPERATIONAL (model) SEMANTICS for a typed PARACONSISTENT LOGIC over the DENOTATIONAL SEMANTICS which the official documentation sometimes dips into.
Well, that's a different story.
No, it's the same story that I've been telling.
But it seems to me that the term "Haskell-like" won't apply to that kind of language. Also, it seems to me (though I don't have any kind of proof) that denotational semantics is much simpler.
Haskell is already a paraconsistent logic. How is giving Haskell operational and interpretive semantics not "Haskell-like"? Its denotational semantics is a Platonic completion of the logical semantics.
It is clear that denotational semantics is a Platonic model of constructive computation.
Could you please stop offending abstract notions?
What? Platonic does not mean "bad". But it does mean that the theory is "too big" to be appropriate in this case.