
Hi,
Below is an assumption (which could be wrong) and two questions.

ASSUMPTION 1: Only smaller models can be specified using the sub-class mechanism. For example, if we directly make a subclass A => B, then every instance of B must also be an instance of A (B implies A, or B is a subset of A). Hence, type classes cannot be combined using sub-classing to provide specifications of bigger models.

QUESTIONS
1) If assumption 1 is correct, is there a mechanism whereby the module system can be used to construct bigger models?
2) If there is such a mechanism, does it involve type classes, or is it a module-only solution?

Regards,
Pat

This message has been scanned for content and viruses by the DIT Information Services E-Mail Scanning Service, and is believed to be clean. http://www.dit.ie

On 09/26/2010 01:32 PM, Patrick Browne wrote:
> Hi, Below is an assumption (which could be wrong) and two questions.
>
> ASSUMPTION 1: Only smaller models can be specified using the sub-class mechanism. For example, if we directly make a subclass A => B, then every instance of B must also be an instance of A (B implies A, or B is a subset of A). Hence, type classes cannot be combined using sub-classing to provide specifications of bigger models.

I'm not sure what you mean by "models" and "subclass mechanism". For example, any set is a model of first-order logic. Presumably, we can "subclass" (i.e., add axioms to) from FOL to Peano Arithmetic. That cuts down on the models that satisfy the axioms from "any set" to... well, the models of arithmetic (that is, the natural numbers, the class of ordinals less than the first uncountable ordinal, etc.). A finite set is a model of FOL. But no finite set is a model of PA.

> QUESTIONS
> 1) If assumption 1 is correct, is there a mechanism whereby the module system can be used to construct bigger models?
Bigger how? Under logical implication and its computational analogue? That is to say, do you want the model to be more SPECIFIC, describing a smaller class of objects more richly (i.e., with "more" logical implications to be made), or do you want the model to be more GENERAL, and contain the less specific submodels? This is how "forcing" works.
> 2) If there is such a mechanism, does it involve type classes, or is it a module-only solution?
>
> Regards, Pat
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

Alexander Solla wrote:
> Bigger how? Under logical implication and its computational analogue? That is to say, do you want the model to be more SPECIFIC, describing a smaller class of objects more richly (i.e., with "more" logical implications to be made), or do you want the model to be more GENERAL, and contain the less specific submodels? This is how "forcing" works.
My idea of bigger is based on the import modes and parameterized modules of the Maude/CafeOBJ languages, where smaller theories are used to construct larger theories (and/or objects). In these languages, I guess theories (loose specifications, or interfaces) correspond to your *logical implication*, and objects (tight specifications) correspond to *computation* at the ordinary programming level. The axioms of the theories must hold at the programming level.

What does the term *forcing* mean?

See: http://books.google.ie/books?id=Q0H-n4Wz2ssC&pg=PA41&lpg=PA41&dq=model+expansion+cafeobj&source=bl&ots=_vCFynLmca&sig=07V6machOANGM0FTgPF5pcKRrrE&hl=en&ei=YkSgTPn0OoqOjAfb0tWVDQ&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBgQ6AEwAA#v=onepage&q=model%20expansion%20cafeobj&f=false

Thanks,
Pat

On 09/27/2010 12:25 AM, Patrick Browne wrote:
> My idea of bigger is based on the import modes and parameterized modules of the Maude/CafeOBJ languages, where smaller theories are used to construct larger theories (and/or objects). In these languages, I guess theories (loose specifications, or interfaces) correspond to your *logical implication*, and objects (tight specifications) correspond to *computation* at the ordinary programming level. The axioms of the theories must hold at the programming level.

Well, my question was more along the lines of: do you want bigger (more specific) theories (and so more "specific" models to interpret them), or do you want to generalize from a given theory? To do the latter with Haskell, you might create a module that allows exporting your "axiom/interface" functions, and maybe create a wrapper that drops axioms/interfaces/constraints. This is assuming you've organized your modules to embody specific theories (an assumption not usually true of my code, at least under this strict interpretation).
To become more specific, you would just import a module and add new axiom/interface functions.

Doing similar constructions with type classes is possible. I think you might have to use witness types (or even a nice functorial wrapper around your target value in the original algebra, or both) to do generalizations of type classes. For example:

class Uneditable obj where
  a :: a -> obj
  b :: b -> obj

class Refactored obj witness where
  a' :: Maybe (a -> obj)
  b' :: Maybe (b -> obj)

data EmptyFilter  -- I think the name of the extension needed for this is 'EmptyDataDecls'
data NoA
data NoB

instance Uneditable obj => Refactored obj EmptyFilter where
  a' = Just a
  b' = Just b

instance Uneditable obj => Refactored obj NoA where
  a' = Nothing
  b' = Just b

...and so on. You would be using the Maybe part to "switch" functions/axioms on and off. The witness type links the obj type to the intended generalization of Uneditable.

By the way, this is a decent example of forcing for types: given a type that satisfies a theory (that is, whose values are models for the theory), you generate a set of "names" which links the type to "new" theories that are related in a fairly obvious way (in this case, via Maybe). This represents an embedding of the new theory in the old theory. (And, dually, it represents an embedding of the old model in the new one.) There's more to it than that, insofar as there is stuff to be proved. But Haskell does it for you.
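For what it's worth, the sketch above needs a little adaptation to load on a recent GHC, which rejects class methods that don't mention every class variable. Below is a self-contained version; the Proxy argument carrying the witness, the method names, the Int-specialized signatures, and the toy Sum model are all my own illustrative choices, not from the original code:

```haskell
{-# LANGUAGE MultiParamTypeClasses, EmptyDataDecls, FlexibleInstances #-}
import Data.Proxy (Proxy (..))

-- The original axiomatization (argument type fixed to Int for concreteness).
class Uneditable obj where
  axA :: Int -> obj
  axB :: Int -> obj

-- Witness types naming the generalized theories.
data EmptyFilter  -- keep every axiom
data NoA          -- drop axiom axA

-- The generalized axiomatization: each axiom is kept (Just) or dropped (Nothing).
-- Each method takes a Proxy so the witness variable appears in its type.
class Refactored obj witness where
  axA' :: Proxy witness -> Maybe (Int -> obj)
  axB' :: Proxy witness -> Maybe (Int -> obj)

instance Uneditable obj => Refactored obj EmptyFilter where
  axA' _ = Just axA
  axB' _ = Just axB

instance Uneditable obj => Refactored obj NoA where
  axA' _ = Nothing
  axB' _ = Just axB

-- A toy model of the axioms.
newtype Sum = Sum Int deriving (Eq, Show)

instance Uneditable Sum where
  axA n = Sum n
  axB n = Sum (n + 1)

main :: IO ()
main = do
  print (fmap ($ 1) (axA' (Proxy :: Proxy NoA))         :: Maybe Sum)  -- Nothing
  print (fmap ($ 1) (axB' (Proxy :: Proxy EmptyFilter)) :: Maybe Sum)  -- Just (Sum 2)
```

Asking for axA' at witness NoA yields Nothing (the axiom was dropped from that generalization), while EmptyFilter keeps everything.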
> What does the term *forcing* mean?
Forcing is a technique to create models from axiomatizations. It is the countable (and beyond) extension of creating a model by adding elements piecewise, assuming they satisfy the theory. Indeed, you end up using "filters" (the dual to an ideal) to ensure you get rid of all the elements that don't satisfy the theory. The Wikipedia page goes over some of this in terms of constructing a model in terms of a language over a theory for the model, and reinterpreting the new language in terms of the old one. http://en.wikipedia.org/wiki/Forcing_(mathematics)
From what I looked at, their logical notions/terminology are pretty standard. In general, if you want the class of models to decrease, you must make the class of theories for them increase (under inclusion order), and vice versa.

Alexander Solla wrote:
> Doing similar constructions with type classes is possible. I think you might have to use witness types (or even a nice functorial wrapper around your target value in the original algebra, or both) to do generalizations of type classes. For example:
>
> class Uneditable obj where
>   a :: a -> obj
>   b :: b -> obj
>
> class Refactored obj witness where
>   a' :: Maybe (a -> obj)
>   b' :: Maybe (b -> obj)
>
> data EmptyFilter  -- I think the name of the extension needed for this is 'EmptyDataDecls'
> data NoA
> data NoB
>
> instance Uneditable obj => Refactored obj EmptyFilter where
>   a' = Just a
>   b' = Just b
>
> instance Uneditable obj => Refactored obj NoA where
>   a' = Nothing
>   b' = Just b

Alexander,
Thank you for your excellent explanation and example. Due to my lack of knowledge, I do not fully understand what is going on in the example. I had a look at the types of a and a':

*Main> :t a
a :: forall a obj. (Uneditable obj) => a -> obj
*Main> :t a'
a' :: forall witness a obj. (Refactored obj witness) => Maybe (a -> obj)

Could you explain the example a little more? Apologies for my slowness.

Pat

On 09/28/2010 03:03 AM, Patrick Browne wrote:
> I had a look at the types of a and a':
>
> *Main> :t a
> a :: forall a obj. (Uneditable obj) => a -> obj
> *Main> :t a'
> a' :: forall witness a obj. (Refactored obj witness) => Maybe (a -> obj)
> Could you explain the example a little more?

This is going to be a long email. Logic is a bit of a rambling subject. And then I went out and bought some beer. ;0)
First thing to note: the importance of the types here is that the stuff contained inside the Maybe HAS to be the same type as the corresponding "Uneditable" function. On a conceptual level, we're trying to quantify over the axioms embodied by "Uneditable". It isn't important to my point what the axioms' types are. What matters is that the new axiom scheme has the same "inner" type, wrapped in a Maybe. If you're looking to do object orientation or similar things, you would probably prefer something like:

class Uneditable obj where
  a :: obj -> a
  b :: obj -> a

and

class Refactored obj witness where
  a' :: Maybe (obj -> a)
  b' :: Maybe (obj -> a)

You can do slick things with existential quantification too. (To a mathematician, "slick" means "so elegant it looks trivial at first glance, but deep and unifying upon further inspection". Compare to the mathematician joke about obviousness.[1] Category theory is the slickest of mathematics, under an informal, subjective ordering of mine. Mathematical logic is pretty damn slick too, but it has more primitives and so is less elegant. In different words, it is more computational than category theory, which is compositional by design.)

I am not sure what you don't understand, so I will start at the beginning, at least in broad strokes. Going back to the "basic" theory of logic, you have axiomatizations, theories, and models. An axiomatization is a chosen finite set of sentences (it's up to you to pick axioms that encode truths about what you want to study/program). A theory is a set of sentences which is closed under logical implication, and a model is a set of objects[2] which "satisfies" an axiomatization (equivalently, the theory generated by an axiomatization, because of the compactness theorem). In order to make an axiomatization "more specific", you add axioms. In order to make it "more general", you drop axioms.
So, every group is a monoid, but not vice versa: for example, the natural numbers under addition form a monoid that is not a group. No positive number has an additive inverse, so this monoid must be excluded from the class of models for the group axioms. (The integers under subtraction fail even earlier: subtraction is not associative, so they are not even a monoid.)

Again, theories work "the same way", but you have to deal with logical closure. I mean, if I were to remove a sentence from a theory, it *might* still generate the same theory, because the sentence I removed was a logical consequence of the others. For example, we can have a theory of arithmetic where an axiom states that 1 + 1 = 2, in addition to the Peano axioms. If we drop the 1 + 1 = 2 axiom and generate the closure of the new theory, we will see that 1 + 1 = 2 anyway.

The notion of "satisfaction" is important. As I said, a model for an axiomatization is (conceptually) a set of objects which satisfies an axiomatization or theory. In short, to show that a model satisfies an axiomatization, you need to provide an interpretation function for the model. (This is what type class instances are for, under the Howard-Curry correspondence.) An interpretation is a function which takes a sentence from the axiomatization and an object from the model and maps them to a truth value, under certain consistency constraints. For example, I(a 'and' b) = I(a) "and" I(b), where 'and' is a symbolic notion of conjunction and "and" is a semantic notion of conjunction. (In particular, an interpretation I is a "proof-to-truth homomorphism", and can potentially be reified as a functor, among other things.)

Now it gets kind of tricky. There are a few correspondences and facts to put together. First is the Howard-Curry correspondence theorem, which tells us that in a typed programming language, functions can be re-interpreted as proofs of type-level theorems in a corresponding typed logic. That is, typed programming languages are typed constructive logics. So all the "basic" mathematical logic stuff I have talked about applies.
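The group/monoid point can be phrased directly as Haskell subclassing, which makes the "adding axioms shrinks the class of models" reading concrete. Group and invert below are hypothetical names invented for illustration, not classes from base:

```haskell
{-# LANGUAGE FlexibleInstances #-}
import Data.Monoid (Sum (..))

-- Subclassing adds an axiom: a Group is a Monoid in which every element
-- has an inverse. Fewer types can satisfy this than satisfy Monoid alone.
class Monoid g => Group g where
  invert :: g -> g

-- Integers under addition satisfy the extra axiom...
instance Group (Sum Int) where
  invert (Sum n) = Sum (negate n)

-- ...but lists under concatenation do not: no 'instance Group [a]' is
-- possible in good conscience, because "ab" has no concatenative inverse.
-- [a] remains a model of the monoid axioms only.

main :: IO ()
main = do
  print (getSum (Sum (3 :: Int) <> invert (Sum 3)))  -- combining with the inverse gives the identity
  print (mempty <> "ab")                             -- lists still satisfy the monoid axioms
```

Here the class of models for Group is strictly contained in the class of models for Monoid, exactly the "smaller models" situation the assumption describes.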
But I never said where axiomatizations and their models "live" on the programming side. And in fact, models are particularly tricky, because they "live" in "shifting" grounds. For example, any logically consistent theory is its own model! The objects of the model (which the interpretation function interprets) are sentences from the theory the model models. The axioms for the theory are declared "true" (with respect to the model), and it is immediately obvious that the "identity functor" offers an interpretation, since I(A and B) exactly equals I(A) and I(B). This is called the "free model" for a theory/axiomatization. It is the "most general" model of an axiomatization, in a specific sense.[3]

Another correspondence to consider: every function is a relation, and every relation can be encoded as a predicate by enumerating its tuples. So a function is pretty trivially equivalent to some predicate, since a predicate's extension is a set of things which it is "true" of. This is sort of the root of the Howard-Curry correspondence.

In short, models are strange abstract AND concrete things. Any single model is concrete, insofar as it is a collection of objects of some kind living "somewhere" (be it in RAM, in text on screen, in your mind, the Platonic realms, etc.), but the only thing that makes models model-like is that they satisfy the model theory definitions/axioms (there's some more meta-modelling for you...).

If we pull in the Howard-Curry correspondence, we see that type class definitions are meant to be Haskell's preferred method of constructing an axiomatization as a nameable thing of some kind. Then again, we can use modules, or even plain old data types and values (for example, type MethodName = String; data Uneditable obj = Uneditable [MethodName -> (a -> obj)]). "Every" collection of function definitions corresponds to an axiomatization of "something", under the correspondence.
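That value-level encoding can be made concrete. A sketch with names of my own choosing, and with the free type variable a specialized to Int so the record is a legal monomorphic type:

```haskell
-- An axiomatization reified as plain data: a named collection of "axiom"
-- functions, rather than a type class.
type MethodName = String

newtype UneditableV obj = UneditableV { axioms :: [(MethodName, Int -> obj)] }

-- One concrete interpretation of both "axioms" into String.
stringModel :: UneditableV String
stringModel = UneditableV
  [ ("a", \n -> "a" ++ show n)
  , ("b", \n -> "b" ++ show n) ]

-- At the value level, generalizing (dropping an axiom) is just filtering.
dropAxiom :: MethodName -> UneditableV obj -> UneditableV obj
dropAxiom nm (UneditableV axs) = UneditableV [p | p@(n, _) <- axs, n /= nm]

main :: IO ()
main = do
  print (map fst (axioms (dropAxiom "a" stringModel)))            -- ["b"]
  print (maybe "missing" ($ 3) (lookup "b" (axioms stringModel))) -- "b3"
```

Unlike the type-class version, nothing here is checked statically; the trade-off is that manipulating the collection of axioms becomes ordinary list programming.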
What matters is (1) that there is a finite collection of definitions, and (2) that they are logically consistent (which in this case means they can be unified and don't touch an undefined Haskell expression).

So, back to the code. The "Uneditable" type class is an axiomatization of "something". Conceptually, to make a more general axiomatization, you would have to drop an axiom. Haskell's type classes don't let us do that directly. We have to force it.

Forcing takes advantage of that "shiftiness" of models I discussed earlier. Since a type class is an axiomatization (of something...), it is its own model. How do we create less specific models from this model? Well, we have to change the interpretation function a little, so that the axiom we want to drop maps to... "nothing specific", except as a consequence of the interpretations for the other axioms. Consider again the "1 + 1 = 2" example: "1 + 1 = 2" would map to true, but only because it is a consequence of the regular axioms of arithmetic. Indeed, this "dropping" or "keeping" is what the "Maybe" does in "Refactored". If we want to keep the axiom as an axiom, we stick it inside a Just. If we don't, we use "Nothing". The new free model is (potentially) "more" free, because it is constrained by fewer axioms.

So if we think of type class definitions as "axiomatizations", we should think of type class instances as equivalent to interpretations for a model. That is, typically, models in Haskell are data types that satisfy the axioms embodied in a type class definition.

So what have we done? Well, we have constructed a larger free model, in virtue of constructing a more general axiomatization. It is still up to you to make specific data types instances of the axiomatization, in order to show that they are models of the axioms.

[1] "A mathematician is a person who spends 15 minutes to decide that something is obvious." Of course, the reason it might take 15 minutes is that you have to do a depth- or breadth-first search on arguments, and there are lots of "simple" arguments, each of which is "obvious". My email is an example of why this can be a lengthy search. In it, I have used equivalence results from possibly dozens of fields. The argument is only obvious in retrospect.

[2] I am not assuming OO semantics. Haskell values are a fair approximation of what an "object" is in this case. It seems that the CafeOBJ language's goal is to force these notions together. Fair enough; that's how I thought about OO objects when I did OO programming, to much annoyance from my imperative-thinking colleagues...

[3] If our theory were "Everybody on Haskell Cafe wears hats", our theory would still be true if these hats were all blue, or a mix of colors. Our language doesn't even go into those possibilities, so they can't be proven or disproven given our axioms. Free objects are interesting in their own right. For example, a free group generated by a set of symbols is the set of possible concatenations of those symbols. Concatenation is an associative operation. The first "big" result of elementary group theory is basically a proof that any group can be represented in terms of a set of permutations. In particular, this means that every free group can be represented in terms of a set of permutations, and vice versa (though not in a particularly obvious way). That is, the free group generated by a group is a group of permutations on symbols, and contains the original group as a subgroup. This is a common construction where "free objects" occur. Free objects are useful, because they can be thought of as generic.
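To tie the thread's vocabulary together in code: below is a toy axiomatization as a type class, with two instances acting as interpretations that make two different data types models of it. The theory and every name in it are invented for illustration:

```haskell
-- A tiny "axiomatization": a distinguished element plus a self-inverse operation.
class Involutive m where
  point :: m       -- axiom: there is a distinguished element
  invol :: m -> m  -- axiom: there is a self-inverse operation

-- Interpretation 1: booleans, reading invol as logical negation.
instance Involutive Bool where
  point = False
  invol = not

-- Interpretation 2: integers, reading invol as arithmetic negation.
instance Involutive Integer where
  point = 0
  invol = negate

main :: IO ()
main = do
  -- The intended "theorem" invol (invol x) == x holds in both models
  -- (checked on examples here, not proved).
  print (invol (invol True))
  print (invol (invol (5 :: Integer)))
```

Each instance declaration plays exactly the role of an interpretation function: it maps the class's "axioms" onto concrete operations of the carrier type, exhibiting that type as a model.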
participants (2):
- Alexander Solla
- Patrick Browne