
Yesterday I read a rather interesting paper: http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf It's fascinating stuff, and I *think* I understand the gist of what it's saying. However, the paper is utterly festooned with formulas that look so absurdly over-the-top that they might almost be a spoof of a mathematical formula rather than the real thing. A tiny fraction of the notation is explained in the paper, but the rest is simply "taken to be obvious". The paper also uses several ordinary English words in a way that suggests that they are supposed to have a more specific technical meaning - but I have no idea what. Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)

Andrew Coppin writes:
Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)
It's pretty garden-variety programming language/type theory. I can
recommend Benjamin Pierce's "Types and Programming Languages" textbook
for an introduction to the material:
http://www.cis.upenn.edu/~bcpierce/tapl/
G
--
Gregory Collins

On 15/10/2010 09:42 PM, Gregory Collins wrote:
Andrew Coppin writes:
Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)
It's pretty garden-variety programming language/type theory.
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
I can recommend Benjamin Pierce's "Types and Programming Languages" textbook for an introduction to the material: http://www.cis.upenn.edu/~bcpierce/tapl/
If I were to somehow obtain this book, would it actually make any sense whatsoever? I've read too many maths books which assume you already know truckloads of stuff, and utterly fail to make sense until you do. (Also, being a somewhat famous book, it's presumably extremely expensive...) Type theory doesn't actually interest me, I just wondered what the hell all the notation means.

On 25 October 2010 22:10, Andrew Coppin wrote:
If I were to somehow obtain this book, would it actually make any sense whatsoever? I've read too many maths books which assume you already know truckloads of stuff, and utterly fail to make sense until you do. (Also, being a somewhat famous book, it's presumably extremely expensive...)
Type theory doesn't actually interest me, I just wondered what the hell all the notation means.
It's a clearly written book, though you would have to read it fairly diligently and possibly work through some of the exercises to get it to make sense. If you've no interest in type systems then you don't have a need for it. Such notations are generally typeset with LaTeX's 'semantic' package - you could look at the guide to this package by Peter Møller Neergaard and Arne John Glenstrup; it might not help much (but it will be cheaper).

Andrew Coppin writes:
On 15/10/2010 09:42 PM, Gregory Collins wrote:
It's pretty garden-variety programming language/type theory.
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
That's ridiculous. You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things. Let me put it to you this way: a mechanic can strip apart and rebuild an engine without knowing any of the organic chemistry which explains how and why the catalytic converter works. If you work for Ford, on the other hand...

P.S. I did my computer science graduate work in type theory, so I may not be an "average Haskeller" in those terms. By "garden-variety" I meant to say that the concepts, notation, and vocabulary are pretty standard for the field, and I had no trouble reading it despite not having seriously read a type theory paper in close to a decade.
I can recommend Benjamin Pierce's "Types and Programming Languages" textbook for an introduction to the material: http://www.cis.upenn.edu/~bcpierce/tapl/
If I were to somehow obtain this book, would it actually make any sense whatsoever? I've read too many maths books which assume you already know truckloads of stuff, and utterly fail to make sense until you do. (Also, being a somewhat famous book, it's presumably extremely expensive...)
Introductory type theory is usually taught in computer science curricula
at a 3rd or 4th year undergraduate level. If you understand some
propositional logic and discrete mathematics, then "probably yes",
otherwise "probably not."
G
--
Gregory Collins

2010/10/25 Gregory Collins
Andrew Coppin writes:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
That's ridiculous.
That's not so ridiculous in the sense that some people might (wrongly) think they won't understand Haskell until they understand at least some of that cryptic material. Many long discussions about Haskell on reddit seem to have a beginner thinking he must "understand monads" before going on. Yes, the famous monads, which aren't that complicated at all; still, they are part of this dense cryptic material when you're a newbie who used to think he's smart because he knows C, Pascal, BASIC, PHP, and learned PostScript's basics in a few days. (Then you start looking at this curiosity called Haskell, and you stumble upon haskell-cafe, and then you are humbled.) (I might be talking about the me of 3 years ago, here :) )
You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things.
Agree 100%, but it's not automatic to see it that way for a newcomer. David.

On 25/10/2010 10:36 PM, Gregory Collins wrote:
Andrew Coppin writes:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
That's ridiculous. You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things. Let me put it to you this way: a mechanic can strip apart and rebuild an engine without knowing any of the organic chemistry which explains how and why the catalytic converter works. If you work for Ford, on the other hand...
I didn't say "people think Haskell is scary because type theory looks crazy". I said "people think Haskell is scary because the typical Haskeller thinks that type theory looks *completely normal*". As in, Haskellers seem to think that every random stranger will know all about this kind of thing, and be completely unfazed by it. Go look at how many Haskell blog posts mention type theory, or predicate calculus, or category theory, or abstract algebra. Now go look at how many Java or C++ blog posts mention these things. Heck, even SQL-related blogs tend to barely mention the relational algebra. (Which, arguably, is because actual product implementations tend to deviate rather radically from it...)
P.S. I did my computer science graduate work in type theory, so I may not be an "average Haskeller" in those terms. By "garden-variety" I meant to say that the concepts, notation, and vocabulary are pretty standard for the field, and I had no trouble reading it despite not having seriously read a type theory paper in close to a decade.
OK, fair enough.
If I were to somehow obtain this book, would it actually make any sense whatsoever? I've read too many maths books which assume you already know truckloads of stuff, and utterly fail to make sense until you do. (Also, being a somewhat famous book, it's presumably extremely expensive...) Introductory type theory is usually taught in computer science curricula at a 3rd or 4th year undergraduate level. If you understand some propositional logic and discrete mathematics, then "probably yes", otherwise "probably not."
I don't even know the difference between a proposition and a predicate. I also don't know exactly what "discrete mathematics" actually covers. Then again, I have no formal mathematical training of any kind. (You'd think holding an honours degree in "computer science" would cover this. But alas, my degree is *really* just Information Technology, despite what it says on the certificate...)

On 26 October 2010 19:29, Andrew Coppin wrote:
I don't even know the difference between a proposition and a predicate.
A proposition is an abstraction from sentences, the idea being that e.g. "Snow is white", "Schnee ist weiß" and "La neige est blanche" are all sentences expressing the same proposition. Propositional logic is quite a simple logic, where the building blocks are atomic formulae and the usual logical connectives. An example of a well-formed formula might be "P → Q". It tends to be the first system taught to undergraduates, while the second is usually the first-order predicate calculus, which introduces predicates and quantifiers. Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.
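To make that concrete in Haskell terms: a propositional formula is just a little syntax tree built from atoms and connectives, and its truth is relative to an assignment of truth values to the atoms. This is only an illustrative sketch (the type and names below are mine, not anything standard):

data Formula
  = Atom String
  | Not Formula
  | Formula :->: Formula    -- the conditional, "if ... then"
  | Formula :/\: Formula    -- conjunction, "and"
  | Formula :\/: Formula    -- disjunction, "or"

-- evaluate a formula relative to a valuation of its atoms
eval :: (String -> Bool) -> Formula -> Bool
eval v (Atom p)   = v p
eval v (Not f)    = not (eval v f)
eval v (f :->: g) = not (eval v f) || eval v g
eval v (f :/\: g) = eval v f && eval v g
eval v (f :\/: g) = eval v f || eval v g

Under this reading, "P → Q" is the tree Atom "P" :->: Atom "Q", and it comes out false only when the valuation makes P true and Q false.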
I also don't know exactly what "discrete mathematics" actually covers.
Discrete mathematics is concerned with mathematical structures which are discrete, rather than continuous. Real analysis, for example, is concerned with real numbers—"the continuum"—and thus would not be covered. Graph theory, on the other hand, concerns objects (nodes and edges) which have sharp cutoffs—if an edge directly connects two nodes, there are no intermediate nodes, whereas if you consider an interval between any two real numbers, no matter how close, there are more real numbers between them. Computers being the kind of things they are, discrete mathematics has a certain obvious utility. Benedict.
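The discrete/continuous contrast even shows up at the level of Haskell types. A small sketch:

-- Int is discrete: every element has an immediate successor, and
-- nothing lies strictly between n and n + 1
next :: Int -> Int
next n = n + 1

-- Rational is dense: between any two distinct rationals there is
-- always a third, so there are no "immediate neighbours"
between :: Rational -> Rational -> Rational
between x y = (x + y) / 2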

On 26/10/2010 07:54 PM, Benedict Eastaugh wrote:
On 26 October 2010 19:29, Andrew Coppin wrote:
I don't even know the difference between a proposition and a predicate.
A proposition is an abstraction from sentences, the idea being that e.g. "Snow is white", "Schnee ist weiß" and "La neige est blanche" are all sentences expressing the same proposition.
Uh, OK.
Propositional logic is quite a simple logic, where the building blocks are atomic formulae and the usual logical connectives. An example of a well-formed formula might be "P → Q". It tends to be the first system taught to undergraduates, while the second is usually the first-order predicate calculus, which introduces predicates and quantifiers.
Already I'm feeling slightly lost. (What does the arrow denote? What are "the usual logical connectives"?)
Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.
Right. So a proposition is a statement which may or may not be true, while a predicate is some property that an object may or may not possess?
I also don't know exactly what "discrete mathematics" actually covers. Discrete mathematics is concerned with mathematical structures which are discrete, rather than continuous.
Right... so its domain is simply *everything* that is discrete? From graph theory to cellular automata to finite fields to difference equations to number theory? That would seem to cover approximately 50% of all of mathematics. (The other 50% being the continuous mathematics, presumably...)

On Oct 26, 2010, at 12:43 PM, Andrew Coppin wrote:
Propositional logic is quite a simple logic, where the building blocks are atomic formulae and the usual logical connectives. An example of a well-formed formula might be "P → Q". It tends to be the first system taught to undergraduates, while the second is usually the first-order predicate calculus, which introduces predicates and quantifiers.
Already I'm feeling slightly lost. (What does the arrow denote? What are "the usual logical connectives"?)
The arrow is notation for "If P, then Q". The other "usual" logical connectives are "not" (denoted by ~, !, the funky little sideways L (¬), and probably others); "or" (denoted by \/, v, |, ||, and probably others, and pronounced "or", "vee", or even "join"); and "and" (denoted by /\, a smaller upside-down v (∧), &, &&, and probably others, and pronounced "wedge", "and", or even "meet").
Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.
Right. So a proposition is a statement which may or may not be true, while a predicate is some property that an object may or may not possess?
Yes. For any given object a (which is not a "variable" -- we usually reserve x, y, z to denote variables, and objects are denoted by a, b, c), P(a) is a proposition "about" a. Something like "forall x P(x)" means that P(x) is true for every object in the domain you are considering.
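Over a finite domain, the quantifiers have a direct Haskell reading; the names forallIn and existsIn below are made up for the example:

forallIn :: [a] -> (a -> Bool) -> Bool
forallIn xs p = all p xs    -- "forall x. P(x)", with x drawn from xs

existsIn :: [a] -> (a -> Bool) -> Bool
existsIn xs p = any p xs    -- "exists x. P(x)", with x drawn from xs

So forallIn [1, 2, 3] (> 0) is True, while forallIn [0, 1, 2] (> 0) is False.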
I also don't know exactly what "discrete mathematics" actually covers. Discrete mathematics is concerned with mathematical structures which are discrete, rather than continuous.
Right... so its domain is simply *everything* that is discrete? From graph theory to cellular automata to finite fields to difference equations to number theory? That would seem to cover approximately 50% of all of mathematics. (The other 50% being the continuous mathematics, presumably...)
Basically, yes. There are some nuances, in that continuous structures might be studied in terms of discrete structures, and vice-versa.

On 26 October 2010 20:43, Andrew Coppin wrote:
Propositional logic is quite a simple logic, where the building blocks are atomic formulae and the usual logical connectives. An example of a well-formed formula might be "P → Q". It tends to be the first system taught to undergraduates, while the second is usually the first-order predicate calculus, which introduces predicates and quantifiers.
Already I'm feeling slightly lost. (What does the arrow denote? What are "the usual logical connectives"?)
The arrow is just standard logical notation for the conditional: "if ... then" in English. If you were to read "P → Q" aloud, it would sound like "If P then Q". It's one of the usual logical connectives I mentioned. The others are "∧" (conjunction; "and", in English); "∨" (disjunction; "or". Note that it's inclusive: if both operands are true then the whole expression is true. This is different to how the word "or" functions in everyday English, where it's exclusive: you can have cheesecake or apple pie, but not both); "¬" (negation; "not"--the only unary operator in the usual group of connectives) and "↔" (biconditional; "if and only if").

They are the usual logical connectives purely in virtue of the fact that they're the ones we tend to use most of the time. Historically speaking this is because they seem to map well onto use cases in natural language. However, there are many other possible logical connectives; I have only mentioned a few unary and binary ones. There are 4 unary operators, 16 binary operators, 256 ternary operators, and in general, 2^(2^n) n-ary logical connectives for n < ω (i.e. the first infinite ordinal: we only consider operators with a finite number of operands).

We could write the four unary operators quite easily in Haskell, assuming that we take them as functions from Bool to Bool:
data Bool = True | False

not :: Bool -> Bool
not True  = False
not False = True

id :: Bool -> Bool
id True  = True
id False = False

true :: Bool -> Bool
true _ = True

false :: Bool -> Bool
false _ = False
The `true` and `false` functions are constant functions: they always produce the same output regardless of their inputs. For this reason they're not very interesting. The `id` function's output is always identical to its input, so again it's not very interesting. The `not` function is the only one which looks like it holds any interest, producing the negation of its input. Given this it shouldn't surprise us that it's the one which actually gets used in formal languages.
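The same counting argument can be checked mechanically for the binary case. The sketch below (allBinary is my name, not a library function) builds every function Bool -> Bool -> Bool by choosing an output for each of the four input pairs, and there are indeed 2^(2^2) = 16 of them:

import Control.Monad (replicateM)

allBinary :: [Bool -> Bool -> Bool]
allBinary =
  [ \p q -> head [ o | (pq, o) <- zip domain outputs, pq == (p, q) ]
  | outputs <- replicateM (length domain) [False, True] ]
  where
    domain = [ (p, q) | p <- [False, True], q <- [False, True] ]

-- ghci> length allBinary
-- 16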
Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.
Right. So a proposition is a statement which may or may not be true, while a predicate is some property that an object may or may not possess?
Exactly. The sentence "There is a black beetle" could be taken to express the proposition that there is a black beetle. It includes the predicates "is black" and "is a beetle". We might write this in first-order logic, to make the predicates (and the quantification) more explicit, as "∃x [Black(x) ∧ Beetle(x)]". I.e., "There exists something that is black and is a beetle". I hedged, saying that we usually interpret predicates as properties, because the meaning of an expression in a formal language (or, indeed, any language) depends on the interpretation of that language, which assigns meanings to the symbols and truth-values to its sentences. Benedict.
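The beetle example carries over directly to Haskell; the Thing type and its fields here are invented purely for illustration:

data Thing = Thing { isBlack :: Bool, isBeetle :: Bool }

-- "∃x [Black(x) ∧ Beetle(x)]" over a finite domain of things
thereIsABlackBeetle :: [Thing] -> Bool
thereIsABlackBeetle = any (\x -> isBlack x && isBeetle x)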

On 27/10/2010, at 8:43 AM, Andrew Coppin wrote:
Already I'm feeling slightly lost. (What does the arrow denote? What are "the usual logical connectives"?)
You mentioned Information Technology, so there's a good chance you know something about Visual Basic, where they are called AND, OR, NOT, XOR, IMP, and EQV. "Connective" in this sense means something like "operator".
Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.
Right. So a proposition is a statement which may or may not be true, while a predicate is some property that an object may or may not possess?
A predicate is simply any function returning truth values.
(>) is a (binary) predicate. (> 0) is a (unary) predicate.
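In Haskell these really are just Bool-valued functions, and partial application turns binary predicates into unary ones, exactly as (> 0) is a section of the binary predicate (>). The names below are illustrative:

divides :: Int -> Int -> Bool
divides d n = n `mod` d == 0    -- a binary predicate

isEven :: Int -> Bool
isEven = divides 2              -- a unary predicate, by partial application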
Right... so its domain is simply *everything* that is discrete? From graph theory to cellular automata to finite fields to difference equations to number theory?
Here's the table of contents of a typical 1st year discrete mathematics book, selected and edited:
- algorithms on integers
- sets
- functions
- relations
- sequences
- propositional logic
- predicate calculus
- proof
- induction and well-ordering
- recursion
- analysis of algorithms
- graphs
- trees
- spanning trees
- combinatorics
- binomial and multinomial theorem
- groups
- posets and lattices
- Boolean algebras
- finite fields
- natural deduction
- correctness of algorithms
Graph theory is in. Cellular automata could be but usually aren't. Difference equations are out. Number theory would probably be out except maybe in a 2nd or 3rd year course leading to cryptography.
That would seem to cover approximately 50% of all of mathematics. (The other 50% being the continuous mathematics, presumably...)

On Oct 26, 2010, at 4:21 PM, Richard O'Keefe wrote:
Number theory would probably be out except maybe in a 2nd or 3rd year course leading to cryptography.
Number theory is one of those weird cases. It deals with discrete structures, but advanced number theory uses a lot of complex analysis and "calculus" on other complete spaces, like the p-adics. Difference equations show up in Knuth's "Concrete Mathematics", his tome on discrete mathematics. The theory of difference equations is the discrete analogue of the theory of differential equations. Surprisingly, the continuous/differential case is more general, since integral (i.e. integer-indexed) solutions can be modeled by piecewise-constant functions.
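For a concrete feel of what a difference equation is, here is a sketch in Haskell (the function name a is just for illustration):

-- the difference equation a(n) - a(n-1) = 2(n-1) + 1, with a(0) = 0;
-- its closed-form solution is a(n) = n^2, the discrete analogue of
-- the differential equation da/dn = 2n
a :: Integer -> Integer
a 0 = 0
a n = a (n - 1) + 2 * (n - 1) + 1

-- ghci> map a [0..5]
-- [0,1,4,9,16,25]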

On 27/10/2010, at 12:55 PM, Alexander Solla wrote:
Difference equations show up in Knuth's "Concrete Mathematics", his tome on discrete mathematics. The theory of difference equations is the discrete analogue to the theory of differential equations. Surprisingly, the continuous/differential case is more general, since integral solutions can be modeled by constant functions.
Graham, Knuth, and Patashnik is one of those books that are clearly destined to become classics the moment they come out. My copy of the first edition wore out to the point where I was delighted to be able to justify getting a copy of the second. Much of it is, however, well outside the scope of the "discrete mathematics" that one would expect to get in a good undergraduate CS course. The stuff on probabilities and generating functions, for example, would be more commonly met with in Statistics, likely 2nd year.

On 27 October 2010 00:21, Richard O'Keefe wrote:
Here's the table of contents of a typical 1st year discrete mathematics book, selected and edited: - algorithms on integers - sets - functions - relations - sequences - propositional logic - predicate calculus - proof ...
Quite a bit of this is covered by "Discrete Mathematics Using a Computer" by John O'Donnell, Cordelia Hall and Rex Page (Springer). Haskell is the implementation language, so effectively it is "Discrete Mathematics Using Haskell". It would be nice if someone wrote a geometry book using Haskell...

* Andrew Coppin:
On 26/10/2010 07:54 PM, Benedict Eastaugh wrote:
On 26 October 2010 19:29, Andrew Coppin wrote:
I also don't know exactly what "discrete mathematics" actually covers.
Discrete mathematics is concerned with mathematical structures which are discrete, rather than continuous.
Right... so its domain is simply *everything* that is discrete?
It's a catch-all phrase applied to introductory material from various subjects, such as graph theory, number theory, and group theory. It's not really well-defined in scope.

On 27/10/2010, at 7:29 AM, Andrew Coppin wrote:
I didn't say "people think Haskell is scary because type theory looks crazy". I said "people think Haskell is scary because the typical Haskeller thinks that type theory looks *completely normal*". As in, Haskellers seem to think that every random stranger will know all about this kind of thing, and be completely unfazed by it.
I came to Haskell from ML. The ML community isn't into category theory (much; Rod Burstall at Edinburgh was very keen on it). But they are very definitely into type theory. The experience of ML was that getting the theory right was the key to getting implementations without major loopholes. The way type systems are presented owes a great deal to one approach to specifying logics; type *inference* is basically a kind of deduction, and you want type inference to be sound, so you have to define what the valid deduction steps are. I came to ML from logic, so it all made perfect sense.
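For instance, the usual rule for typing a function application is written in exactly this deduction style (in LaTeX, roughly):

\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2 \qquad \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1\,e_2 : \tau_2}

Read: if, in context \Gamma, e_1 has a function type and e_2 has the matching argument type, then the application e_1 e_2 has the result type. Most of the dense formulas in papers like the one that started this thread are stacks of rules of this shape.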

* Gregory Collins:
* Andrew Coppin:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
That's ridiculous. You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things.
I could imagine that the theory could be quite helpful for accepting nagging limitations. I'm not an experienced Haskell programmer, though, but that's what I noticed when using other languages. [Reading TAPL]
Introductory type theory is usually taught in computer science curricula at a 3rd or 4th year undergraduate level. If you understand some propositional logic and discrete mathematics, then "probably yes", otherwise "probably not."
I think it entirely depends on the amount of work you're willing to put into it. I couldn't bring myself to do it (my copy is mostly unread), but it still helped me to figure out enough of the notation in an emergency. My take on formalization is somewhat unusual anyway---I view it as a necessary evil to ensure intersubjective verifiability, but if you need it to get any results whatsoever, you either don't understand what you're doing, or it's not worth doing in the first place because you're restating trivialities in a pompous way.

On 11/28/10 08:47, Florian Weimer wrote:
* Gregory Collins:
* Andrew Coppin:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
That's ridiculous. You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things.
I could imagine that the theory could be quite helpful for accepting nagging limitations. I'm not an experienced Haskell programmer, though, but that's what I noticed when using other languages.
Yes and no; for example, it's enough to know that System F (the type system used by GHC) can't describe dependent types, without needing to know *why*. A brief overview is more useful in this case. This is true of most of the ML-ish languages: they're based on rigorous mathematical principles, but those principles are sufficiently high level that there isn't a whole lot of point in teaching them as part of teaching the languages. The concepts behind other languages are rarely based in anything quite as high level, and moreover often take structural rather than mathematical form, so understanding them *does* help. (An example of this is C++ templates; as I understand it, there *is* mathematics behind them, but many of their behaviors come from their structure rather than the math.)
--
brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

On Mon, Oct 25, 2010 at 10:10:56PM +0100, Andrew Coppin wrote:
Type theory doesn't actually interest me, I just wondered what the hell all the notation means.
That sounds like an oxymoron. How could you possibly learn what the notation "means" without learning about the subject that the notation is about? That's like saying "I'm not actually interested in calculus, I'd just like to know what the hell all these funny S-like symbols mean". For what it's worth, I was once in a similar situation (modulo the interest), and sent a similar query: http://groups.google.com/group/comp.lang.functional/browse_frm/thread/bb82dc... Following that, Pierce sent me a draft of his (then upcoming) book, and I found it extremely accessible at my level (at least compared to the other book I studied, Mitchell's "Foundations", which, though full of good information, was a bit hard to digest). So I will add voice to those recommending TAPL. Cheers, Lauri

On 25/10/2010 11:01 PM, Lauri Alanko wrote:
On Mon, Oct 25, 2010 at 10:10:56PM +0100, Andrew Coppin wrote:
Type theory doesn't actually interest me, I just wondered what the hell all the notation means. That sounds like an oxymoron. How could you possibly learn what the notation "means" without learning about the subject that the notation is about? That's like saying "I'm not actually interested in calculus, I'd just like to know what the hell all these funny S-like symbols mean".
You can explain the integral notation in a few short sentences without having to undergo an entire semester of integral calculus training. Hell, the other night I was lying in bed with my girlfriend (who hates mathematics) and I managed to make her understand what a partial derivative is. Now of course if you needed to *use* integral calculus for something, that's another matter entirely. But just to get the gist of what it's about and what it's for is much simpler.
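For instance, in LaTeX notation:

\int_a^b f(x)\,dx

reads "the integral of f from a to b", i.e. the signed area under the graph of f between x = a and x = b. The funny S-like symbol is literally a stretched "S", for "sum", and dx marks the variable of integration. That's the few-short-sentences version; actually computing integrals is the semester-long part.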
So I will add voice to those recommending TAPL.
OK, well maybe I'll see if somebody will buy it for me for Christmas or something...

On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
Maybe, but the notation is still clearer than most programming notations for expressing the same ideas. Most "normal" people don't realize that mathematicians discovered the constructs for expressing structures and computations 50 to hundreds of years before the invention of the semiconductor, or that the mathematician's goal of "elegant" expression is pragmatic. Elegant expressions of a problem and its solutions are the easiest to work with.

On 10-10-25 05:10 PM, Andrew Coppin wrote:
Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.
How many normal people actively stalk highly specialized academic papers like you do and mistake them for Haskell tutorials to begin with? (Pun: strawman is made of stalks.) However, average programmers using languages with interesting type systems should learn type rule notation sooner or later. It takes only Θ(1) to learn, and its saving in writing and reading is Θ(n) if you will use it n times in the rest of your life (which you do because you are using languages with interesting type systems).

On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:
Type theory doesn't actually interest me, I just wondered what the hell all the notation means.
Sorry for the double email. I recommend "Language, Proof, and Logic", by Barwise and Etchemendy. It doesn't go into type theory (directly). It is a book about logic for technical people (it was in use at the Reed College philosophy department for many years -- maybe it still is). It is very approachable.

The last part of the book is about "advanced" logic, including model theory, some aspects of forcing (there's a forcing proof of Löwenheim-Skolem), Gödel's theorems (they called Gödel the world's first hacker, with good reason), and some sections on higher order logics.

It doesn't include all of the notation you asked about. But understanding the basics will be very helpful, especially if you understand the underlying concepts. For example, set theory is a first-order logic that can be recast as a second-order logic. This is more-or-less the origin of type theory (types were originally witnesses to the "level" at which a term is defined -- this will make sense in context). The paper you asked about has a richer "ontology" than ZF -- it promotes more things to "named" "kinds" of terms than plain old ZF. But the promotion is straightforward, and the same logical rules apply.

You can get it cheap ($1.70 for the cheapest copy) through alibris.com. (I am not too happy with them at the moment, but their prices are low. Especially if you can wait a few weeks for the book.)

I think you would enjoy reading (and working) through TAPL[1] and/or
Software Foundations[2] if this interests you.
Cheers,
Thomas
[1] http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162...
[2] http://www.cis.upenn.edu/~bcpierce/sf/
On Fri, Oct 15, 2010 at 1:36 PM, Andrew Coppin wrote:
Yesterday I read a rather interesting paper:
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf
It's fascinating stuff, and I *think* I understand the gist of what it's saying. However, the paper is utterly festooned with formulas that look so absurdly over-the-top that they might almost be a spoof of a mathematical formula rather than the real thing. A tiny fraction of the notation is explained in the paper, but the rest is simply "taken to be obvious". The paper also uses several ordinary English words in a way that suggests that they are supposed to have a more specific technical meaning - but I have no idea what.
Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)

On Oct 15, 2010, at 1:36 PM, Andrew Coppin wrote:
Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)
It's some kind of typed logic with lambda abstraction and some notion of witnessing, using Gentzen (I think!) style derivation notation. Those A |- B things mean A "derives" B. The "|-" is also called a Tee. In the two-dimensional form, a formula written as "A, B" with a horizontal line underneath and "A" below it means: A, B |- A. That's where the Tee gets its name: it's a T under A, B. The two-dimensional notation is better for some uses, since it "meshes" with the method of "truth trees".
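In LaTeX terms, the two forms of the same judgement look like this:

\frac{A,\ B}{A}      % the two-dimensional (bar) form
A,\ B \vdash A       % the inline (turnstile) form

Both say that A is derivable from the assumptions A and B; the bar form stacks the premises over the conclusion, and the turnstile is what remains of the bar when the judgement is flattened onto one line.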

On 10/15/10 16:36, Andrew Coppin wrote:
Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)
Type theory. It makes my head spin, too, since essentially my only exposure to it so far is Haskell itself.
--
brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH
participants (12):
- Albert Y. C. Lai
- Alexander Solla
- Andrew Coppin
- Benedict Eastaugh
- Brandon S Allbery KF8NH
- David Virebayre
- Florian Weimer
- Gregory Collins
- Lauri Alanko
- Richard O'Keefe
- Stephen Tetley
- Thomas DuBuisson