3 level hierarchy of Haskell objects

Patrick Browne
Gast [1] describes a 3 level hierarchy of Haskell objects using elementOf from set theory:
value *elementOf* type *elementOf* class
This hierarchy is pretty arbitrary and quickly runs into problems with some type system extensions. You can find out whether the barber of Seville shaves himself. A better hierarchical model is related to universes and uses typing relations (assuming a right-associative ':'):

value : type : kind : sort : ...
value : type : universe 0 : universe 1 : universe 2 : ...

A value is of a type. A type is of the first universe (kind). An n-th universe is of the (n+1)-th universe. Type classes can be modelled as implicit arguments.
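The first two levels of this typing relation can be seen directly in GHCi; below is a minimal sketch (the names `true` and `justTrue` are mine, for illustration only):

```haskell
-- A minimal sketch of the first two levels of the typing relation:
--   value : type        e.g.  True  : Bool
--   type  : kind        e.g.  Bool  : *,   Maybe : * -> *
-- GHCi shows the levels directly:
--   > :type True        -- True :: Bool
--   > :kind Bool        -- Bool :: *
--   > :kind Maybe       -- Maybe :: * -> *
true :: Bool
true = True

-- A type constructor (kind * -> *) must be applied before it
-- classifies values:
justTrue :: Maybe Bool
justTrue = Just true

main :: IO ()
main = print justTrue
```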
If we include super-classes would the following be an appropriate mathematical representation?
What is a superclass? What are the semantics?

Greets,
Ertugrul

--
Not to be or to be and (not to be or to be and (not to be or to be and (not to be or to be and ... that is the list monad.

Patrick Browne
If we include super-classes would the following be an appropriate mathematical representation?
What is a superclass? What are the semantics?
I assume that like a normal class a super-class *defines* a set of operations for types, but it is not *a set* of types. A sub-class can use the signature and default methods of its super-class. I have no particular super-class in mind.
So you basically just mean

class (Functor f) => Applicative f

where Functor is a superclass of Applicative? There is really nothing special about that. Notice that type classes are a language feature that is translated to a core language, which is essentially an extended System F_omega. See below.
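The translation Ertugrul mentions can be sketched as dictionary passing; the records below are a hand-written illustration (names mine), not GHC's actual Core, but they show how a superclass constraint becomes a field of the subclass dictionary:

```haskell
{-# LANGUAGE RankNTypes #-}
-- A class is translated to a record of methods (a "dictionary").
data FunctorDict f = FunctorDict
  { dFmap :: forall a b. (a -> b) -> f a -> f b }

-- A superclass constraint becomes a field carrying the superclass
-- dictionary, so every Applicative dictionary contains a Functor one.
data ApplicativeDict f = ApplicativeDict
  { dSuper :: FunctorDict f
  , dPure  :: forall a. a -> f a
  , dAp    :: forall a b. f (a -> b) -> f a -> f b
  }

-- An "instance" is just a value of the dictionary type:
maybeFunctor :: FunctorDict Maybe
maybeFunctor = FunctorDict fmap

maybeApplicative :: ApplicativeDict Maybe
maybeApplicative = ApplicativeDict maybeFunctor Just (<*>)

main :: IO ()
main = print (dAp maybeApplicative (Just (+1)) (Just (41 :: Int)))
```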
Rather I am trying to make sense of how these Haskell objects are mathematically related.
They are mainly related by logic, in particular type theory. You may be
interested in System F_omega [1].
[1]: http://en.wikipedia.org/wiki/System_F#System_F.CF.89
Greets,
Ertugrul
--
Key-ID: E5DD8D11 Ertugrul Soeylemez

On 8/8/12 3:36 PM, Patrick Browne wrote:
On 08/08/12, Ertugrul Söylemez wrote:
So you basically just mean
class (Functor f) => Applicative f
Yes, but I want to know if there is a simple mathematical relation between the classes and/or their types
Let us introduce the notion of "sorts". A sort is, essentially, a syntactic class for defining a (logical) language. Sorts cannot, in general, be discussed within the language being described, though the language may have some proxy for referring to them (e.g., the "Set", "Type", and "Prop" keywords in Coq). We will discuss three sorts: Type, Kind, and Context.

The sort Kind is the set of all syntactic expressions denoting traditional kinds; i.e., Kind = {*, *->*->*, (*->*)->*, ...}. The sort Type is the set of all traditional types[1]; i.e., Type = { t : k | k \in Kind }.

A type class is a relation on Type. Notably, a type class instantiated with all its arguments is not itself a type! That is, "Functor f" does not belong to Type; it belongs to Context. Indeed, the only constructors of Context are (1) type classes, and (2) the "~" for type equality constraints.

The simplest relation is a unary relation, which is isomorphic to a set. Thus, if one were so inclined, one could think of "Functor" as being a set of types[2], namely a set of types of kind *->*. And each instance "Functor f" is a proof that "f" belongs to the set "Functor". However, once you move outside of H98, the idea of type classes as sets of types breaks down. In particular, once you have multi-parameter type classes you must admit that type classes are actually relations on types.

The "=>" arrow in type signatures is an implication in the logic that Haskell corresponds to. In particular, it takes multiple antecedents of sort Context, a single consequent of sort S \in {Type, Context}, and produces an expression of sort S. This is different from the "->" arrow, which is also implication, but which takes a single antecedent of sort Type (namely a type of kind *) and a single consequent of sort Type (namely a type of kind *), and produces an expression of sort Type (namely a type of kind *).
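The set-vs-relation point can be seen in a small example; the class `Convert` and its instances below are my own invented illustration, not from the thread:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}
-- A unary class like Functor can be read as a set of types, but a
-- multi-parameter class is a genuine binary relation on Type.
class Convert a b where
  convert :: a -> b

-- Each instance is a proof that the pair (a, b) is in the relation:
instance Convert Int String where
  convert = show

instance Convert Bool Int where
  convert b = if b then 1 else 0

main :: IO ()
main = do
  putStrLn (convert (42 :: Int))
  print (convert True :: Int)
```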
And naturally the "->" on Type should be distinguished from the "->" on Kind, which takes a single antecedent of sort Kind, a single consequent of sort Kind, and produces an expression of sort Kind.

Just as Kind exists to let us typecheck expressions in Type, we should also consider a sort ContextKind which exists to let us typecheck the expressions in Context. And, to handle the fact that type classes have arguments of different kinds, ContextKind must have its own arrow, just as Kind does.

[1] Note that for this discussion, Type includes both proper types (i.e., types of kind *) as well as type constructors.

[2] This must not be confused with the sense in which kinds can be considered to be sets of types. These two different notions of "sets of types" belong to different sorts. A kind is a set of types which lives in Kind; a unary type-class constraint is a set of types which lives in Context.

--
Live well,
~wren

On Wed, 8 Aug 2012, Ertugrul Söylemez wrote:
Patrick Browne wrote:
Gast [1] describes a 3 level hierarchy of Haskell objects using elementOf from set theory:
value *elementOf* type *elementOf* class
This hierarchy is pretty arbitrary and quickly runs into problems with some type system extensions. You can find out whether the barber of Seville shaves himself.
A better hierarchical model is related to universes and uses typing relations (assuming a right-associative ':'):
value : type : kind : sort : ...
value : type : universe 0 : universe 1 : universe 2 : ...
A value is of a type. A type is of the first universe (kind). An n-th universe is of the (n+1)-th universe.
Type classes can be modelled as implicit arguments.
If we include super-classes would the following be an appropriate mathematical representation?
What is a superclass? What are the semantics?
Greets, Ertugrul
I know no Haskell, so my first reactions are likely to fail to grip.

There is a type theory from one generation, or perhaps two or three, before our time's New Crazed Type Theory. This is the type theory of the Lower Predicate Calculus and of Universal Algebra, style of Birkhoff's Theorem on Varieties. An introduction to this type theory is presented here:

http://en.wikipedia.org/wiki/Signature_%28logic%29
[page was last modified on 27 March 2011 at 16:54]

Haskell's type classes look to me to be a provision for declaring a signature in the sense of the above article. An instance of type t which belongs to a type class tc is guaranteed to have certain attendant structures, just as the underlying set of a group is automatically equipped with attendant, indeed defining, operations of 1, *, and ^-1. Here 1 is a zeroary operation with codomain the set of group elements; * is a binary operation, that is, it has type g x g -> g; and ^-1 has type g -> g, where g is the type of group elements of our particular group. That this particular group is indeed an instance of the general concept of a group requires that t be of a type class which guarantees the attendant three group operations 1, *, and ^-1, with types as shown.

Note that the usual definition of a group has further requirements. These further requirements are called "associativity of *", "1 is an identity for *", and "^-1 is an inverse for *". I think that in Haskell today these requirements must, in many cases, be handled by the programmer by hand, and are not automatically handled by the type system of Haskell. Though, as pointed out in an earlier post, in some cases one can use certain constructions, constructions convenient in Haskell, to guarantee that operations so defined meet the requirements.
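The group signature described above can be written down directly as a type class; this is a sketch of my own, with the laws left as comments since, as the post notes, Haskell's type system does not enforce them:

```haskell
-- The signature (1, *, ^-1) of group theory, as a type class.
class Group g where
  gUnit :: g              -- the zeroary operation 1
  gMul  :: g -> g -> g    -- the binary operation *, of type g x g -> g
  gInv  :: g -> g         -- the unary operation ^-1
  -- Laws (unchecked by the type system; the programmer's burden):
  --   gMul x (gMul y z) == gMul (gMul x y) z   (associativity)
  --   gMul gUnit x == x                        (identity)
  --   gMul (gInv x) x == gUnit                 (inverses)

-- The integers under addition form an instance:
instance Group Integer where
  gUnit = 0
  gMul  = (+)
  gInv  = negate

main :: IO ()
main = print (gMul (gInv 5) 5 :: Integer)
```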
Here we are close to the distinction between a class of "objects which satisfy a condition" vs "objects with added structure", for which see:

http://math.ucr.edu/home/baez/qg-spring2004/discussion.html
http://ncatlab.org/nlab/show/stuff,+structure,+property

oo--JS.

On 8/8/12 9:41 PM, Jay Sulzberger wrote:
Haskell's type classes look to me to be a provision for declaring a signature in the sense of the above article.
Just to clarify this in the context of my previous post, type classes define signatures in two significantly different ways.

(1) The first way is as you suggest: the methods of a type class specify a signature, and for convenience we give that signature a name (i.e., the type class' name). However, this is a signature for the term level of Haskell (i.e., a signature in the Term sort not discussed previously; elements of Type classify elements of Term, just as elements of Kind classify elements of Type).

(2) The second way is that, at the type level, the collection of type class names together forms a signature. Namely they form the signature comprising the majority of the Context sort.

Both senses are important for understanding the role of type classes in Haskell, but I believe that some of Patrick Browne's confusion is due to trying to conflate these into a single notion. Just as terms and types should not be confused, neither should methods and type classes. In both cases, each is defined in terms of the other; however, they live in separate universes. This is true even in languages which allow terms to occur in type expressions and allow types to occur in term expressions. Terms denote values and computations (even if abstractly); whereas types denote collections of expressions (proper types denote collections of term expressions; kinds denote collections of type expressions; ...).

--
Live well,
~wren
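Both senses can be seen in a single familiar declaration; the `Colour` type and `allEqual` below are my own illustrations:

```haskell
data Colour = Red | Green | Blue

-- Sense (1): the methods of Eq, here (==), form a term-level
-- signature, and the instance supplies the term-level operations.
instance Eq Colour where
  Red   == Red   = True
  Green == Green = True
  Blue  == Blue  = True
  _     == _     = False

-- Sense (2): the class name Eq is a symbol in the type-level
-- signature of contexts; 'Eq a' to the left of '=>' lives in the
-- Context sort, not in Type.
allEqual :: Eq a => [a] -> Bool
allEqual xs = and (zipWith (==) xs (drop 1 xs))

main :: IO ()
main = print (allEqual [Red, Red, Red])
```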

On Thu, 9 Aug 2012, wren ng thornton wrote:
On 8/8/12 9:41 PM, Jay Sulzberger wrote:
Haskell's type classes look to me to be a provision for declaring a signature in the sense of the above article.
Just to clarify this in the context of my previous post, type classes define signatures in two significantly different ways.
(1) The first way is as you suggest: the methods of a type class specify a signature, and for convenience we give that signature a name (i.e., the type class' name). However, this is a signature for the term level of Haskell (i.e., a signature in the Term sort not discussed previously; elements of Type classify elements of Term, just as elements of Kind classify elements of Type).
(2) The second way is that, at the type level, the collection of type class names together form a signature. Namely they form the signature comprising the majority of the Context sort.
Both senses are important for understanding the role of type classes in Haskell, but I believe that some of Patrick Browne's confusion is due to trying to conflate these into a single notion. Just as terms and types should not be confused, neither should methods and type classes. In both cases, each is defined in terms of the other, however they live in separate universes. This is true even in languages which allow terms to occur in type expressions and allow types to occur in term expressions. Terms denote values and computations (even if abstractly); whereas, types denote collections of expressions (proper types denote collections of term expressions; kinds denote collections of type expressions;...).
-- Live well, ~wren
Thanks, wren!

I am attempting to read the Haskell 2010 standard at

http://www.haskell.org/onlinereport/haskell2010/

It is very helpful and much less obscure than I feared it would be. What you say about the levels seems reasonable to me now, and I begin dimly to see an outline of a translation to non-New Type Theory stuff, which may help me to enter the World of New Type Theory.

One difficulty which must impede many who study this stuff is that just getting off the ground seems to require a large number of definitions of objects of logically different kinds. (By "logic" I mean real logic, not any particular formalized system.) We must have "expressions", values, type expressions, rules of transformation at the various levels, the workings of the type/kind/context inference system, etc., to get started. Seemingly Basic and Scheme require less, though I certainly mention expressions and values and types and objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction to Scheme.

oo--JS.

On 8/13/12 5:42 PM, Jay Sulzberger wrote:
One difficulty which must impede many who study this stuff is that just getting off the ground seems to require a large number of definitions of objects of logically different kinds. (By "logic" I mean real logic, not any particular formalized system.) We must have "expressions", values, type expressions, rules of transformation at the various levels, the workings of the type/kind/context inference system, etc., to get started. Seemingly Basic and Scheme require less, though I certainly mention expressions and values and types and objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction to Scheme.
Indeed, starting with Haskell's type system is jumping in at the deep end. And there isn't a very good tutorial on how to get started learning type theory. Everyone I know seems to have done the "stumble around until it clicks" routine--- including the folks whose stumbling was guided by formal study in programming language theory.

However, a good place to start ---especially vis-à-vis Scheme/Lisp--- is to go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far fewer moving parts. You have type expressions, term expressions, term reduction, and that's it. Other lambda calculi add all manner of bells and whistles, but STLC is the core of what lambda calculus and type systems are all about. So you should be familiar with it as a touchstone.

After getting a handle on STLC, then it's good to see the Barendregt cube. Don't worry too much about understanding it yet; just think of it as a helpful map of a few of the major landmarks in type theory. It's an incomplete map to be sure. One major landmark that's suspiciously missing lies about halfway between STLC and System F: that's Hindley--Milner--Damas, or ML-style, lambda calculus.[2]

After seeing the Barendregt cube, then you can start exploring in those various directions. Notably, you don't need to think about the kind level until you start heading towards LF, MLTT, System Fw, or CC, since those are where you get functions/reduction at the type level and/or multiple sorts at the type level.

Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the starting point and then add some nice things like algebraic data types and type classes (neither of which are represented on the Barendregt cube). This theory is still relatively simple and easy to understand, albeit in a somewhat ad-hoc manner.

Modern "Haskell" lives somewhere beyond the top plane of the cube. We have all of polymorphism (aka System F, aka second-order quantification; via -XRankNTypes), most of type operators (i.e., extending System F to System Fw; via type families etc), some dependent types (aka first-order quantification; via GADTs), plus things not represented on the cube (e.g., (co)inductive data types, type classes, etc). Trying to grok all of that at once without prior understanding of the pieces is daunting to be sure.

[1] Via Curry--Howard, the pure STLC corresponds to natural deduction for the implicational fragment of intuitionistic propositional logic. Of course, you can add products (tuples), coproducts (Either), and absurdity to get natural deduction for the full intuitionistic propositional logic.

[2] That is: STLC extended with rank-1 second-order quantification, which is a sweet spot in the tapestry of expressivity, decidability, et al. I don't think anyone knows exactly _why_ it's so sweet, but it truly is.

--
Live well,
~wren
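Two of the extensions listed above, in miniature; an illustrative sketch of my own (not from any post in this thread):

```haskell
{-# LANGUAGE RankNTypes, GADTs #-}
-- RankNTypes gives full System-F polymorphism: here the *argument*
-- itself is required to be polymorphic, which Hindley--Milner--Damas
-- cannot express.
applyBoth :: (forall a. a -> a) -> (Int, Bool) -> (Int, Bool)
applyBoth f (n, b) = (f n, f b)

-- GADTs give a little dependency of result types on constructors:
data Expr a where
  IntLit  :: Int  -> Expr Int
  BoolLit :: Bool -> Expr Bool

eval :: Expr a -> a
eval (IntLit n)  = n
eval (BoolLit b) = b

main :: IO ()
main = print (applyBoth id (eval (IntLit 41) + 1, eval (BoolLit True)))
```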

On Wed, 15 Aug 2012, wren ng thornton wrote:
On 8/13/12 5:42 PM, Jay Sulzberger wrote:
One difficulty which must impede many who study this stuff is that just getting off the ground seems to require a large number of definitions of objects of logically different kinds. (By "logic" I mean real logic, not any particular formalized system.) We must have "expressions", values, type expressions, rules of transformation at the various levels, the workings of the type/kind/context inference system, etc., to get started. Seemingly Basic and Scheme require less, though I certainly mention expressions and values and types and objects-in-the-Lisp-world in my Standard Rant on^W^WIntroduction to Scheme.
Indeed, starting with Haskell's type system is jumping in at the deep end. And there isn't a very good tutorial on how to get started learning type theory. Everyone I know seems to have done the "stumble around until it clicks" routine--- including the folks whose stumbling was guided by formal study in programming language theory.
However, a good place to start ---especially viz a vis Scheme/Lisp--- is to go back to the beginning: the simply-typed lambda-calculus[1]. STLC has far fewer moving parts. You have type expressions, term expressions, term reduction, and that's it.
Yes. The simply-typed lambda-calculus presents as a different sort of thing from the "untyped" lambda calculus, and the many complexly typed calculi.

I'd add to the list of components of the base machine of STLC these things:

1. The model theory, which is close to the model theory of the Lower Predicate Calculus.

2. The explication of "execution of a program", which is more subtle than anything right at the beginning of the study of the Lower Predicate Calculus. It certainly requires a score of definitions to lay it out clearly.

But, to say again, yes, the STLC can, like linear algebra 101, be presented in this way: The machine stands alone in bright sunlight. There are no shadows. Every part can be seen plainly. The eye sees all and is satisfied.

ad 2: It would be worthwhile to have an introduction to STLC which compares STLC's explication of "execution of a program" with other standard explications, such as these:

1. the often not explicitly presented explication that appears in textbooks on assembly and introductory texts on computer hardware
2. the usually more explicitly presented explication that appears in texts on Basic or Fortran
3. the often explicit explication that appears in texts on Snobol
4. various explications of what a database management system does
5. explications of how various Lisp variants work
6. explications of how Prolog works
7. explications of how general constraint solvers work, including "proof finders"
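The claim that STLC has few moving parts can be made concrete; below is a toy type checker of my own (a sketch, not from any post in this thread), comprising type expressions, term expressions, and the typing relation, and nothing else:

```haskell
-- Type expressions: a single base type and the arrow.
data Ty = Base | Arr Ty Ty deriving (Eq, Show)

-- Term expressions, with de Bruijn indices for variables.
data Tm = Var Int            -- index into the context
        | Lam Ty Tm          -- abstraction, annotated with its argument type
        | App Tm Tm
        deriving Show

-- typeOf ctx t returns Just the type of t in context ctx, or Nothing
-- if t is ill-typed.
typeOf :: [Ty] -> Tm -> Maybe Ty
typeOf ctx (Var i)
  | i < length ctx = Just (ctx !! i)
  | otherwise      = Nothing
typeOf ctx (Lam a body) = Arr a <$> typeOf (a : ctx) body
typeOf ctx (App f x) = do
  Arr a b <- typeOf ctx f          -- the function must have an arrow type
  a'      <- typeOf ctx x
  if a == a' then Just b else Nothing

main :: IO ()
main = print (typeOf [] (Lam Base (Var 0)))
```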
Other lambda calculi add all manner of bells and whistles, but STLC is the core of what lambda calculus and type systems are all about. So you should be familiar with it as a touchstone. After getting a handle on STLC, then it's good to see the Barendregt cube. Don't worry too much about understanding it yet; just think of it as a helpful map of a few of the major landmarks in type theory. It's an incomplete map to be sure. One major landmark that's suspiciously missing lies about halfway between STLC and System F: that's Hindley--Milner--Damas, or ML-style, lambda calculus.[2]
8. explication of how Hindley--Milner--Damas works
After seeing the Barendregt cube, then you can start exploring in those various directions. Notably, you don't need to think about the kind level until you start heading towards LF, MLTT, System Fw, or CC, since those are where you get functions/reduction at the type level and/or multiple sorts at the type level.
Haskell98 (and the official Haskell2010) take Hindley--Milner--Damas as the starting point and then add some nice things like algebraic data types and type classes (neither of which are represented on the Barendregt cube). This theory is still relatively simple and easy to understand, albeit in a somewhat ad-hoc manner.
Unexpectedly (to me), a word is missing from explications of algebraic data types and "pattern matching": "magma".
Modern "Haskell" lives somewhere beyond the top plane of the cube. We have all of polymorphism (aka System F, aka second-order quantification; via -XRankNTypes), most of type operators (i.e., extending System F to System Fw; via type families etc), some dependent types (aka first-order quantification; via GADTs), plus things not represented on the cube (e.g., (co)inductive data types, type classes, etc). Trying to grok all of that at once without prior understanding of the pieces is daunting to be sure.
Yes.
[1] Via Curry--Howard, the pure STLC corresponds to natural deduction for the implicational fragment of intuitionistic propositional logic. Of course, you can add products (tuples), coproducts (Either), and absurdity to get natural deduction for the full intuitionistic propositional logic.
[2] That is: STLC extended with rank-1 second-order quantification, which is a sweet spot in the tapestry of expressivity, decidability, et al. I don't think anyone knows exactly _why_ it's so sweet, but it truly is.
-- Live well, ~wren
Thanks, wren! oo--JS.

When I was last on the best rooftop in the Mid Upper West Side of Manhattan I was told of the work on logical relations. I did not know of this three-decades-old line of work. I have grabbed

http://homepages.inf.ed.ac.uk/gdp/publications/Par_Poly.pdf

To me, the style is comfortable and the matter feels solid. That is, the authors and I know some of the same kennings.

Thanks Lispers! Thanks Haskellers!

oo--JS.
participants (5)

- Ertugrul Söylemez
- Jay Sulzberger
- Patrick Browne
- Richard O'Keefe
- wren ng thornton