newbie question about denotational semantics

Hi all! I've just started learning denotational semantics and have a simple question.

Suppose we have a simple language L (e.g. some form of lambda calculus) and a semantic function

    E : Term_L -> Value

Now suppose we extend the language with some additional side effect (e.g. state or continuations). For this purpose we first add a new syntactic construct to the language (e.g. 'call/cc') and then give the semantics in the monadic style. We get

    L_ext = L + 'call/cc'
    E_ext : Term_{L_ext} -> M(Value)

where M is some monad (Cont Value in our case).

Now I want to prove that the original and extended semantics coincide on expressions that don't use the new construct (call/cc). It would be natural to formulate such a "theorem" as:

    "Theorem". If t is an expression that doesn't use the new construct,
    then E_orig(t) = E_ext(t).

Alas, this equation doesn't make any sense, because E_orig(t) and E_ext(t) are elements of different domains: Value and M(Value). The obvious "fix" is to use the "unit" operation of the monad ("return" in Haskell syntax) to put both values into the same space. So we get:

    "Theorem" (second try). For all t, bla-bla-bla...
    unit_M(E_orig(t)) = E_ext(t).

However, this still doesn't work, because the Values are in fact different:

    -- in the original semantics we have (more source code available below)
    data Value1 = Number Int | Func (Value1 -> Value1)

    -- in the extended semantics we have
    data Value2 = Number Int | Func (Value2 -> M Value2)
    --                                            ^ note this M

The monad M seems to be "wired in" recursively in the Value definition.
From a mathematical point of view, it is common to use a projection or an injection when one needs to compare functions into different sets. However, I don't see any obvious injection Value1 -> Value2 or projection Value2 -> Value1.
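To show where the obvious attempt gets stuck, here is a toy sketch of my own (a standalone file, with M specialised to Maybe just for concreteness; the real M above is Cont Value, where things are worse). At function types the argument position is contravariant, so the injection needs the projection and vice versa, and the projection has to strip off M, which is at best partial:

    -- standalone toy sketch, not meant to be pasted into either interpreter below
    type M = Maybe

    data Value1 = Number1 Int | Func1 (Value1 -> Value1)
    data Value2 = Number2 Int | Func2 (Value2 -> M Value2)

    -- injection: the argument of a Func is contravariant,
    -- so inj has to call proj on it
    inj :: Value1 -> Value2
    inj (Number1 n) = Number2 n
    inj (Func1 f)   = Func2 (return . inj . f . proj)

    -- projection: has to call inj on the argument and strip the M
    -- off the result, which is only partial (and for some monads
    -- there is nothing sensible to strip with at all)
    proj :: Value2 -> Value1
    proj (Number2 n) = Number1 n
    proj (Func2 g)   = Func1 (strip . fmap proj . g . inj)
      where strip = maybe (error "effectful result, cannot project") id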
Some monads can be "removed"; for example, StateT can be eliminated with runState and an initial state, etc. However, some monads, for example Error, Maybe, or the non-determinism monad (powerdomain), don't have such a "removal" function. What is needed is a general definition that can be used for _any_ monad M.

The question is: how can this "theorem" be formulated (and proved) in a mathematically precise way? It seems that I am missing something very trivial, so just a reference to an article would be nice.

The more general question is: suppose the original language already has side effects and was described using a monad M_1. Now we add additional effects and use a bigger monad M_2. The same question arises: how can one compare elements of the Value domains defined via M_1 and M_2?

With best regards,
   Alexander.

PS. Some code I played with:

-- original semantics

import Control.Monad.Reader
import qualified Data.Map as Map

type Id = String
type Env = Map.Map String Value

stdops :: Map.Map String (Int -> Int -> Int)
stdops = Map.fromList [("+", (+)), ("-", (-)), ("*", (*))]

data Expr = Const Int
          | Var String
          | BinOp String Expr Expr
          | Lambda Id Expr
          | Appl Expr Expr
          deriving Show

data Value = Number Int
           | Func (Value -> Value)

instance Show Value where
    show (Number v) = show v
    show (Func _)   = "*func*"

eval :: Expr -> Reader Env Value
eval (Const c) = return (Number c)
eval (Var i) = do
    env <- ask
    return $ env Map.! i
eval (BinOp op x y) = do
    vx <- eval x
    vy <- eval y
    let opfun = stdops Map.! op
    case (vx, vy) of
        (Number a, Number b) -> return $ Number (opfun a b)
        _ -> error "standard operation on not-numbers"
eval (Lambda i expr) = do
    env <- ask
    let f = \v -> let newenv = Map.insert i v env
                  in runReader (eval expr) newenv
    return (Func f)
eval (Appl fun arg) = do
    vfun <- eval fun
    varg <- eval arg
    case vfun of
        Func f -> return $ f varg
        _ -> error "application of not-function"

do_eval :: Expr -> Value
do_eval expr = runReader (eval expr) Map.empty

-- extended semantics

import Control.Monad.Reader
import Control.Monad.Cont
import qualified Data.Map as Map

type Id = String
type Env = Map.Map String Value
type M = Cont Value

stdops :: Map.Map String (Int -> Int -> Int)
stdops = Map.fromList [("+", (+)), ("-", (-)), ("*", (*))]

data Expr = Const Int
          | Var String
          | BinOp String Expr Expr
          | Lambda Id Expr
          | Appl Expr Expr
          | CallCC Expr
          deriving Show

data Value = Number Int
           | Func (Value -> M Value)

instance Show Value where
    show (Number v) = show v
    show (Func _)   = "*func*"

eval :: Expr -> ReaderT Env M Value
eval (Const c) = return (Number c)
eval (Var i) = do
    env <- ask
    return $ env Map.! i
eval (BinOp op x y) = do
    vx <- eval x
    vy <- eval y
    let opfun = stdops Map.! op
    case (vx, vy) of
        (Number a, Number b) -> return $ Number (opfun a b)
        _ -> error "standard operation on not-numbers"
eval (Lambda i expr) = do
    env <- ask
    let f = \v -> let newenv = Map.insert i v env
                  in runReaderT (eval expr) newenv
    return (Func f)
eval (Appl fun arg) = do
    vfun <- eval fun
    varg <- eval arg
    case vfun of
        Func f -> lift $ f varg
        _ -> error "application of not-function"
eval (CallCC expr) = do
    f <- eval expr
    case f of
        Func vf -> lift $ callCC (\k -> vf (Func k))
        _ -> error "call/cc with not-function"

do_eval :: Expr -> Value
do_eval expr = runCont (runReaderT (eval expr) Map.empty) id
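For what it's worth, one weaker statement that can already be made precise with exactly the code above is agreement at the observable type of numbers. The sketch below is only my own illustration: it assumes the two interpreters live in hypothetical modules named Original and Extended (imported qualified), embeds the call/cc-free syntax into the extended syntax, and compares results only when both sides are Numbers, which dodges the mismatch at function types at the price of a weaker claim.

    import qualified Original as O   -- hypothetical module with the first interpreter
    import qualified Extended as E   -- hypothetical module with the second one

    -- embed call/cc-free syntax into the extended syntax (purely structural)
    embed :: O.Expr -> E.Expr
    embed (O.Const n)      = E.Const n
    embed (O.Var x)        = E.Var x
    embed (O.BinOp op a b) = E.BinOp op (embed a) (embed b)
    embed (O.Lambda x e)   = E.Lambda x (embed e)
    embed (O.Appl f a)     = E.Appl (embed f) (embed a)

    -- agreement at the observable (ground) type only
    agreeAtGround :: O.Expr -> Bool
    agreeAtGround t =
        case (O.do_eval t, E.do_eval (embed t)) of
            (O.Number m, E.Number n) -> m == n
            (O.Func _,   E.Func _)   -> True   -- functional results not compared here
            _                        -> False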

I'm still getting my head around this myself, but I know a few references that might help (maybe not directly, but they're in the ballpark).

1. I believe the phrase "natural lifting" or "naturality of liftings" is what you're after when you attempt to compare a monad and a "bigger" monad that includes the effects of the first. I recall this concept from Liang and Hudak's modular monadic semantics work, but I'm having a heck of a time quickly finding it in their papers. Try at least section 8.1 of "Monad Transformers and Modular Interpreters".

2. Another example that helped me when getting a feel for reasoning about monadic code (which is the basis of what we're doing here) was William Harrison's "Proof Abstraction for Imperative Languages

[my mail program hiccuped and chopped my message, sorry]

2. Another example that helped me when getting a feel for reasoning about monadic code (which is the basis of what we're doing here) was William Harrison's "Proof Abstraction for Imperative Languages". It uses monads and some of the notions you're in search of.

3. Lastly, the denotational semantics we're reasoning about here are folds over the syntax. Graham Hutton's "Fold and Unfold for Program Semantics" was a great read for me. Using its techniques will likely shorten up any proof we've got.

This is some of my favorite stuff, let me know how you fare with it! Good luck,

Nick
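To make the fold point concrete: both eval functions in the original post are structural recursions over Expr, i.e. they can be written as instances of a generic fold like the sketch below (my own naming, restricted to the call/cc-free Expr type). It is this shape that lets Hutton-style proofs proceed via the universal property of the fold rather than by bare structural induction.

    -- a generic fold over the call/cc-free Expr from the original post
    -- (my own sketch; the names are not from any of the papers)
    data Expr = Const Int
              | Var String
              | BinOp String Expr Expr
              | Lambda String Expr
              | Appl Expr Expr

    foldExpr :: (Int -> r)                  -- Const
             -> (String -> r)               -- Var
             -> (String -> r -> r -> r)     -- BinOp
             -> (String -> r -> r)          -- Lambda
             -> (r -> r -> r)               -- Appl
             -> Expr -> r
    foldExpr con var bin lam app = go
      where
        go (Const n)      = con n
        go (Var x)        = var x
        go (BinOp op a b) = bin op (go a) (go b)
        go (Lambda x e)   = lam x (go e)
        go (Appl f a)     = app (go f) (go a)

The original interpreter arises by instantiating r to Reader Env Value, and the extended one by instantiating r to ReaderT Env M Value (with an extra case for CallCC in its Expr type).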

Alexander Vodomerov
The question is: how can this "theorem" be formulated (and proved) in a mathematically precise way?
It seems to me that you can define a type-directed translation from the old denotations to the new ones.

Also check out:

Robert Cartwright and Matthias Felleisen. 1994. Extensible denotational language specifications. In Theoretical Aspects of Computer Software: International Symposium, ed. Masami Hagiya and John C. Mitchell, 244-272. Lecture Notes in Computer Science 789. Berlin: Springer-Verlag.
http://www.ccs.neu.edu/scheme/pubs/tacs94-cf.dvi.gz
http://www.ccs.neu.edu/scheme/pubs/tacs94-cf.ps.gz
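Roughly, such a type-directed translation can be phrased as a type-indexed relation between the two kinds of denotations. The following is only my own sketch for a simply typed fragment, in the same informal notation as the original post; the clause lifting the relation to M-computations has to be chosen per monad, and for call/cc-free results the natural choice is to require the extended denotation to be a unit:

    R_int     = { (Number n, Number n) }
    R_{s->t}  = { (Func f, Func g) | for all (v, w) in R_s,
                                     (f v, g w) in R^M_t }

where R^M_t relates a plain value to a computation in M(Value2), e.g.

    (d, c) in R^M_t   iff   c = unit_M(w) for some w with (d, w) in R_t

    "Theorem" (third try). If e is a call/cc-free term of type t, and
    rho, rho' are environments related pointwise by R, then
    (E_orig(e)(rho), E_ext(e)(rho')) in R^M_t.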
participants (3)
- Alexander Vodomerov
- Chung-chieh Shan
- Nicolas Frisby