
OK, so technically it's got nothing to do with Haskell itself, but... I was reading some utterly incomprehensible article in Wikipedia. It was saying something about categories of recursive sets or some nonsense like that, and then it said something utterly astonishing.

By playing with the lambda calculus, you can come up with functions having all sorts of types. For example,

identity :: x -> x
add :: x -> x -> x
apply :: (x -> y) -> (y -> z) -> (x -> z)

However - and I noticed this myself a while ago - it is quite impossible to write a (working) function such as

foo :: x -> y

Now, Wikipedia seems to be suggesting something really remarkable. The text is very poorly worded and hard to comprehend, but they seem to be asserting that a type can be interpreted as a logic theorem, and that you can only write a function with a specific type if the corresponding theorem is true. (Conversely, if you have a function with a given type, the corresponding theorem *must* be true.)

For example, the type for "identity" presumably reads as "given that x is true, we know that x is true". Well, duh!

Moving on, "add" tells us that "if x is true and x is true, then x is true". Again, duh.

Now "apply" seems to say that "if we know that x implies y, and we know that y implies z, then it follows that x implies z". Which is nontrivial, but certainly looks correct to me.

On the other hand, the type for "foo" says "given that some random statement x happens to be true, we know that some utterly unrelated statement y is also true". Which is obviously nucking futs.

Taking this further, we have "($) :: (x -> y) -> x -> y", which seems to read "given that x implies y, and that x is true, it follows that y is true". Which, again, seems to compute.

So is this all a huge coincidence? Or have I actually succeeded in comprehending Wikipedia?
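For concreteness, here is one way terms with those types can be written (a sketch using the names from the message; the definitions are illustrative, and foo genuinely has no total definition):

  identity :: x -> x
  identity a = a

  add :: x -> x -> x
  add a _ = a              -- any term of this type can only return one of its two arguments

  apply :: (x -> y) -> (y -> z) -> (x -> z)
  apply f g = g . f        -- despite the name, this is just function composition

  -- foo :: x -> y
  -- foo = ...             -- nothing total fits here (short of undefined or error)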

On Tuesday 10 July 2007, Andrew Coppin wrote:
OK, so technically it's got nothing to do with Haskell itself, but...
Actually, it does: the basic technologies underlying Haskell (combinatory logic and the Hindley-Milner type system) were originally invented in the course of this stream of research.
I was reading some utterly incomprehensible article in Wikipedia. It was saying something about categories of recursive sets or some nonsense like that, and then it said something utterly astonishing.
By playing with the lambda calculus, you can come up with functions having all sorts of types. For example,
identity :: x -> x
add :: x -> x -> x
apply :: (x -> y) -> (y -> z) -> (x -> z)
However - and I noticed this myself a while ago - it is quite impossible to write a (working) function such as
foo :: x -> y
Now, Wikipedia seems to be suggesting something really remarkable. The text is very poorly worded and hard to comprehend,
Nothing is ever absolutely so --- except the incomprehensibility of Wikipedia's math articles. They're still better than MathWorld, though.
but they seem to be asserting that a type can be interpreted as a logic theorem, and that you can only write a function with a specific type if the corresponding theorem is true. (Conversely, if you have a function with a given type, the corresponding theorem *must* be true.)
For example, the type for "identity" presumably reads as "given that x is true, we know that x is true". Well, duh!
Moving on, "add" tells as that "if x is true and x is true, then x is true". Again, duh.
Now "apply" seems to say that "if we know that x implies y, and we know that y implies z, then it follows that x implies z". Which is nontrivial, but certainly looks correct to me.
On the other hand, the type for "foo" says "given that some random statement x happens to be true, we know that some utterly unrelated statement y is also true". Which is obviously nucking futs.
Taking this further, we have "($) :: (x -> y) -> x -> y", which seems to read "given that x implies y, and that x is true, it follows that y is true". Which, again, seems to compute.
So is this all a huge coincidence? Or have I actually succeeded in comprehending Wikipedia?
Yes, you have. In the (pure, non-recursive) typed lambda calculus, there is an isomorphism between (intuitionistic) propositions and types, and between (constructive) proofs and terms, such that a term exists with a given type iff a (corresponding) (constructive) proof exists of the corresponding (intuitionistic) theorem. This is called the Curry-Howard isomorphism, after Haskell Curry (he whom our language is named for), and whatever computer scientist independently re-discovered it due to not having figured out to read the type theory literature before doing type theoretic research.

Once functional programming language designers realized that the generalization of this to the fragments of intuitionistic logic with the logical connectives `and' (which corresponds to products/record types) and `or' (which corresponds to sums/union types) holds as well, the prejudice set in that innovations in type systems should be driven by finding an isomorphism with some fragment of intuitionistic logic; this gave us existential types and rank-N types, btw. So this is really good research to be doing.

Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs
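As a small, concrete illustration of the product/sum half of the correspondence (a sketch; the names andIntro, andElim1, orIntro1 and orElim are made up for this example), pairs play the role of `and' and Either plays the role of `or':

  andIntro :: a -> b -> (a, b)       -- from proofs of a and of b, a proof of (a AND b)
  andIntro x y = (x, y)

  andElim1 :: (a, b) -> a            -- a proof of (a AND b) gives a proof of a
  andElim1 = fst

  orIntro1 :: a -> Either a b        -- a proof of a gives a proof of (a OR b)
  orIntro1 = Left

  orElim :: Either a b -> (a -> c) -> (b -> c) -> c   -- case analysis on (a OR b)
  orElim (Left  x) f _ = f x
  orElim (Right y) _ g = g y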

Jonathan Cast wrote:
On Tuesday 10 July 2007, Andrew Coppin wrote:
OK, so technically it's got nothing to do with Haskell itself, but...
Actually, it does
I rephrase: This isn't *specifically* unique to Haskell, as such.
Now, Wikipedia seems to be suggesting something really remarkable. The text is very poorly worded and hard to comprehend,
Nothing is ever absolutely so --- except the incomprehensibility of Wikipedia's math articles. They're still better than MathWorld, though.
Ah, MathWorld... If you want to look up a formula or identity, it's practically guaranteed to be there. If you want to *learn* stuff... forget it!
So is this all a huge coincidence? Or have I actually succeeded in comprehending Wikipedia?
Yes, you have. In the (pure, non-recursive) typed lambda calculus, there is an isomorphism between (intuitionistic) propositions and types, and between (constructive) proofs and terms, such that a term exists with a given type iff a (corresponding) (constructive) proof exists of the corresponding (intuitionistic) theorem. This is called the Curry-Howard isomorphism, after Haskell Curry (he whom our language is named for), and whatever computer scientist independently re-discovered it due to not having figured out to read the type theory literature before doing type theoretic research.
...let us not even go into what constitutes "intuitionistic propositions" (hell, I can't even *pronounce* that!) or what a "constructive" proof is...
Once functional programming language designers realized that the generalization of this to the fragments of intuitionistic logic with the logical connectives `and' (which corresponds to products/record types) and `or' (which corresponds to sums/union types) holds as well, the prejudice set in that innovations in type systems should be driven by finding an isomorphism with some fragment of intuitionistic logic; this gave us existential types and rank-N types, btw. So this is really good research to be doing.
On the one hand, it feels exciting to be around a programming language where there are deep theoretical discoveries and new design territories to be explored. (Compared to Haskell, the whole C / C++ / Java / JavaScript / Delphi / VisualBasic / Perl / Python thing seems so boring.) On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! >_<

It means that you can view programming as constructing "witnesses" to
"proofs" - programs becoming the (finite) steps that, when followed,
construct a proof.
Intuitionism only allows you to make statements that can be proved
directly - no "Reductio ad absurdum" only good, honest to goodness
constructive computational steps - sounds like programming (and
general engineering) to me.
Powerful when you grasp it - which is why I've spent the last 15 or 20
years considering myself an intuitionistic semantic philosopher -
reasoning about the meaning of things by constructing their proofs -
a great way of taking ideas, refining them into an axiomatic system,
then going and making them work.
Take it from me - it is a good approach: it generates exploitable ideas
that people fund and that make people money!
Neil

Andrew Coppin wrote:
On the one hand, it feels exciting to be around a programming language where there are deep theoretical discoveries and new design territories to be explored. (Compared to Haskell, the whole C / C++ / Java / JavaScript / Delphi / VisualBasic / Perl / Python thing seems so boring.)
On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! >_<
I agree, it's exciting to use Haskell because of its theoretical underpinning and the sense of it as a lab for PL ideas. The cost of taking part in that (even as an observer) is the background knowledge and common vocabulary you need in order to make sense of a lot of the papers that you may get referred to, presuming you start asking the kind of questions that elicit answers like that. I don't think the amount of background knowledge required is actually that big but if it's missing you will feel like you're going one step forwards and two steps back.

The "Getting Started" thread on Lambda the Ultimate is good - maybe we need a wikipage like that but of links to sources of the type theoretical background to Haskell (is there one already? I see "Research Papers", which obviously has a different purpose).

I don't know where the best place to start would be but, as I said in another thread Andrew, TAPL is great. Re. Curry-Howard, have a look at Simon Thompson's book (online for free) http://www.cs.kent.ac.uk/people/staff/sjt/TTFP/ . Not quick reads (by any means!), but depending on your learning style, better value than asking ad hoc questions and joining the dots via blog posts/wiki pages etc.

On 7/10/07, Jim Burton
Andrew Coppin wrote:
On the one hand, it feels exciting to be around a programming language where there are deep theoretical discoveries and new design territories to be explored. (Compared to Haskell, the whole C / C++ / Java / JavaScript / Delphi / VisualBasic / Perl / Python thing seems so
boring.)
On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! >_<
I agree, it's exciting to use Haskell because of its theoretical underpinning and the sense of it as a lab for PL ideas. The cost of taking part in that (even as an observer) is the background knowledge and common vocabulary you need in order to make sense of a lot of the papers that you may get referred to, presuming you start asking the kind of questions that elicit answers like that. I don't think the amount of background knowledge required is actually that big but if it's missing you will feel like you're going one step forwards and two steps back.
The "Getting Started" thread on Lambda the Ultimate is good - maybe we need a wikipage like that but of links to sources of the type theoretical background to Haskell (is there one already? I see "Research Papers", which obviously has a different purpose).
I don't know where the best place to start would be but, as I said in another thread Andrew, TAPL is great. Re. Curry-Howard, have a look at Simon Thompson's book (online for free) http://www.cs.kent.ac.uk/people/staff/sjt/TTFP/ . Not quick reads (by any means!), but depending on your learning style, better value than asking ad hoc questions and joining the dots via blog posts/wiki pages etc.
I'd like to throw in another vote for TAPL. I've been reading it lately and it honestly makes type theory feel fairly simple and natural. I think Pierce's writing is very clear, but occasionally the exercises make the problem sound harder than it is and it gets a little confusing. A friend of mine has the same problem with his category theory book.

I'd like to throw in another vote for TAPL. I've been reading it lately and it honestly makes type theory feel fairly simple and natural. I think Pierce's writing is very clear, but occasionally the exercises make the problem sound harder than it is and it gets a little confusing. A friend of mine has the same problem with his category theory book.
Same here! I found his Category Theory book quite difficult and I will have to revisit it. I have only just started TaPL, but I am enjoying it thoroughly.

Tony Morris
http://tmorris.net/

Tony Morris wrote:
I'd like to throw in another vote for TAPL. I've been reading it lately and it honestly makes type theory feel fairly simple and natural. I think Pierce's writing is very clear, but occasionally the exercises make the problem sound harder than it is and it gets a little confusing. A friend of mine has the same problem with his category theory book.
Same here! I found his Category Theory book quite difficult and I will have to revisit it. I have only just started TaPL, but I am enjoying it thoroughly.
I once sat down and tried to read about Category Theory. I got almost nowhere though; I cannot for the life of me figure out how the definition of "category" is actually different from the definition of "set". Or how a "functor" is any different than a "function". Or... actually, none of it made sense. It didn't sound terribly interesting anyway. I'll stick to group theory...

On Thursday 12 July 2007 04:40, Andrew Coppin wrote:
I once sat down and tried to read about Category Theory. I got almost nowhere though; I cannot for the life of me figure out how the definition of "category" is actually different from the definition of "set". Or how a "functor" is any different than a "function". Or... actually, none of it made sense.
Iiuc, "Set" is just one type of category; and the morphisms of the category "Set" are indeed functions. But morphisms in other categories need not be functions; in the category "Rel", for example, the morphisms are not functions but binary relations. A "functor" is something that maps functions in one category to functions in another category. In other words, functors point from one or more functions in one category to the equivalent functions in another category. Perhaps they could be regarded as 'meta-functions'. Hope that helps, Alexis.
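A rough Haskell-flavoured sketch of the arrow-mapping part, restricted to Haskell's own category of types and functions (MyFunctor is a made-up name mirroring the standard Functor class): fmap-style mapping sends each function a -> b to a function f a -> f b, and a lawful instance preserves identities and composition, which is what makes it a functor rather than an arbitrary mapping:

  class MyFunctor f where
    myfmap :: (a -> b) -> f a -> f b

  instance MyFunctor Maybe where
    myfmap _ Nothing  = Nothing
    myfmap g (Just x) = Just (g x)

  -- Functor laws (not checked by the compiler):
  --   myfmap id      == id
  --   myfmap (g . h) == myfmap g . myfmap h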

Almost, I think. A functor is a mapping from the arrows, or morphisms,
in a category to arrows in a category. Identity being a perfectly
cromulent functor.
Category theory is like an (over) generalization of set theory. It
doesn't concern itself with pesky details like what elements and
functions are; it starts with those as just properties of a category.
And, as others have indicated, it's the deep relationship between
category theory and type systems that tends to lead programming
language theorists to investigate it. That and the success of monads,
which, from a category theory point of view, are fairly trivial.
-SMD

Alexis Hazell wrote:
On Thursday 12 July 2007 04:40, Andrew Coppin wrote:
I once sat down and tried to read about Category Theory. I got almost nowhere though; I cannot for the life of me figure out how the definition of "category" is actually different from the definition of "set". Or how a "functor" is any different than a "function". Or... actually, none of it made sense.
Iiuc,
"Set" is just one type of category; and the morphisms of the category "Set" are indeed functions. But morphisms in other categories need not be functions; in the category "Rel", for example, the morphisms are not functions but binary relations.
A "functor" is something that maps functions in one category to functions in another category. In other words, functors point from one or more functions in one category to the equivalent functions in another category. Perhaps they could be regarded as 'meta-functions'.
Hope that helps,
It helps a little... I'm still puzzled as to what makes the other categories so magical that they cannot be considered sets. I'm also a little puzzled that a binary relation isn't considered to be a function... From the above, it seems that functors are in fact structure-preserving mappings somewhat like the various morphisms found in group theory. (I remember isomorphism and homomorphism, but there are really far too many morphisms to remember!)

On Thu, Jul 12, 2007 at 07:19:07PM +0100, Andrew Coppin wrote:
Alexis Hazell wrote:
On Thursday 12 July 2007 04:40, Andrew Coppin wrote:
I once sat down and tried to read about Category Theory. I got almost nowhere though; I cannot for the life of me figure out how the definition of "category" is actually different from the definition of "set". Or how a "functor" is any different than a "function". Or... actually, none of it made sense.
"Set" is just one type of category; and the morphisms of the category "Set" are indeed functions. But morphisms in other categories need not be functions; in the category "Rel", for example, the morphisms are not functions but binary relations.
A "functor" is something that maps functions in one category to functions in another category. In other words, functors point from one or more functions in one category to the equivalent functions in another category. Perhaps they could be regarded as 'meta-functions'.
I'm still puzzled as to what makes the other categories so magical that they cannot be considered sets.
I'm also a little puzzled that a binary relation isn't considered to be a function...
From the above, it seems that functors are in fact structure-preserving mappings somewhat like the various morphisms found in group theory. (I remember isomorphism and homomorphism, but there are really far too many morphisms to remember!)
Many categories have more structure than just sets and functions. For instance, in the category of groups, arrows must be homomorphisms. Some categories don't naturally correspond to sets - consider eg the category of naturals, where there is a unique arrow from a to b iff a ≤ b.

Binary relations are more general than functions, since they can be partial and multiple-valued.

Yes, functors are very much arrow-like - indeed, it is possible to form the "category of small categories" with functors for arrows. ("Small" means that there is a set of objects involved; eg Set is not small because there is no set of all sets (see Russell's paradox) but Nat is small.)

Stefan
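A toy rendering of that naturals-as-a-category example (the names LTE, arrow and composeArrow are invented for illustration): an arrow from m to n is simply a witness that m ≤ n, so there is at most one arrow between any two objects, and composition is just transitivity:

  data LTE = LTE Int Int                    -- an arrow from m to n; only built when m <= n

  arrow :: Int -> Int -> Maybe LTE
  arrow m n | m <= n    = Just (LTE m n)
            | otherwise = Nothing

  identityArrow :: Int -> LTE               -- every object n has an identity arrow (n <= n)
  identityArrow n = LTE n n

  composeArrow :: LTE -> LTE -> Maybe LTE   -- g `composeArrow` f, when the middle objects match
  composeArrow (LTE b' c) (LTE a b)
    | b == b'   = Just (LTE a c)            -- a <= b and b <= c give a <= c
    | otherwise = Nothing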

Stefan O'Rear wrote:
On Thu, Jul 12, 2007 at 07:19:07PM +0100, Andrew Coppin wrote:
I'm still puzzled as to what makes the other categories so magical that they cannot be considered sets.
I'm also a little puzzled that a binary relation isn't considered to be a function...
From the above, it seems that functors are in fact structure-preserving mappings somewhat like the various morphisms found in group theory. (I remember isomorphism and homomorphism, but there are really far too many morphisms to remember!)
Many categories have more structure than just sets and functions. For instance, in the category of groups, arrows must be homomorphisms.
What the heck is an arrow when it's at home?
Some categories don't naturally correspond to sets - consider eg the category of naturals, where there is a unique arrow from a to b iff a ≤ b.
...um...
Binary relations are more general than functions, since they can be partial and multiple-valued.
What's a partial relation?
indeed, it is possible to form the "category of small categories" with functors for arrows. ("Small" means that there is a set of objects involved; eg Set is not small because there is no set of all sets (see Russell's paradox) but Nat is small.)
OK, see, RIGHT THERE! That's the kind of sentence that I read and three of my cognitive processes dump core with an "unexpected out of neural capacity exception". o_O

On Thursday 12 July 2007, Andrew Coppin wrote:
Stefan O'Rear wrote:
On Thu, Jul 12, 2007 at 07:19:07PM +0100, Andrew Coppin wrote:
I'm still puzzled as to what makes the other categories so magical that they cannot be considered sets.
I'm also a little puzzled that a binary relation isn't considered to be a function...
From the above, it seems that functors are in fact structure-preserving mappings somewhat like the various morphisms found in group theory. (I remember isomorphism and homomorphism, but there are really far too many morphisms to remember!)
Many categories have more structure than just sets and functions. For instance, in the category of groups, arrows must be homomorphisms.
What the heck is an arrow when it's at home?
A homomorphism is just a function that preserves the group operation (addition or multiplication, depending on the group); i.e., one that does what you expect when presented with addition or multiplication (or whatever).
Some categories don't naturally correspond to sets - consider eg the category of naturals, where there is a unique arrow from a to b iff a ≤ b.
...um...
Meditate on this at greater length. Here lies the path to enlightenment. Incidentally, I think this lies behind the Curry-Howard isomorphism mentioned earlier.

In a category C, the product (a, b) of two objects is the triple (p, p1 :: p -> a, p2 :: p -> b) such that for all triples (c, c1 :: c -> a, c2 :: c -> b) there is precisely one arrow h :: c -> p such that p1 . h = c1 and p2 . h = c2.

So, taking Prop to be the category of propositions such that, for all propositions p, q, the pair (p, q) is an arrow from p to q iff p => q, the product (p, q) is the proposition c such that c => p, c => q, and for all propositions d such that d => p and d => q, d => c. Of course, c is just p /\ q.
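Specialised to Haskell's pair type, that universal property can be sketched directly (pairArrow is a made-up name; it is essentially the standard &&& combinator from Control.Arrow):

  proj1 :: (a, b) -> a
  proj1 (x, _) = x

  proj2 :: (a, b) -> b
  proj2 (_, y) = y

  -- Given c1 :: c -> a and c2 :: c -> b, there is exactly one h :: c -> (a, b)
  -- with proj1 . h == c1 and proj2 . h == c2, namely:
  pairArrow :: (c -> a) -> (c -> b) -> c -> (a, b)
  pairArrow c1 c2 z = (c1 z, c2 z)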
Binary relations are more general than functions, since they can be partial and multiple-valued.
What's a partial relation?
Let P, Q be sets, let r `subset` (P, Q) be a relation on P and Q, and for p `elem` P, r(p) = { q `elem` Q | (p, q) `elem` r }. Then r is:

 * total iff for all p `elem` P. r(p) contains at least one element,
 * partial iff r is not total,
 * a partial function iff for all p `elem` P. r(p) contains at most one element,
 * multi-valued iff r is not a partial function, and
 * a function iff r is a partial function and total.
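Those definitions can be made executable for finite sets (a sketch; representing a relation as a list of pairs and passing the domain explicitly are conveniences for the example only):

  type Rel p q = [(p, q)]                      -- a finite relation between P and Q

  image :: Eq p => Rel p q -> p -> [q]         -- r(p) in the definition above
  image r p = [ q | (p', q) <- r, p' == p ]

  isTotal, isPartialFn, isFunction :: Eq p => [p] -> Rel p q -> Bool
  isTotal     dom r = all (\p -> not (null (image r p))) dom
  isPartialFn dom r = all (\p -> length (image r p) <= 1) dom
  isFunction  dom r = isTotal dom r && isPartialFn dom r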
indeed, it is possible to form the "category of small categories" with functors for arrows. ("Small" means that there is a set of objects involved; eg Set is not small because there is no set of all sets (see Russell's paradox) but Nat is small.)
OK, see, RIGHT THERE! That's the kind of sentence that I read and three of my cognitive processes dump core with an "unexpected out of neural capacity exception". o_O
I'd think you'd expect it by now :) Lessa answered this one competently enough, I think.

Jonathan Cast
http://sourceforge.net/projects/fid-core
http://sourceforge.net/projects/fid-emacs

Jim Burton wrote:
Andrew Coppin wrote:
On the one hand, it feels exciting to be around a programming language where there are deep theoretical discoveries and new design territories to be explored. (Compared to Haskell, the whole C / C++ / Java / JavaScript / Delphi / VisualBasic / Perl / Python thing seems so boring.)
On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! >_<
I agree, it's exciting to use Haskell because of its theoretical underpinning and the sense of it as a lab for PL ideas.
The other downside is that you end up with a world where most of the "tools" are in fact one-man research projects or small toys. There are a few good, powerful, useful things out there. (GHC and Parsec immediately spring to mind.) But there's also a vast number of really tiny projects which don't seem to be terrifically well supported. Kind of makes me sad; Haskell seems almost doomed to be a language with fantastic potential, but little real-world traction.

Andrew Coppin wrote:
Jim Burton wrote:
Andrew Coppin wrote:
On the one hand, it feels exciting to be around a programming language where there are deep theoretical discoveries and new design territories to be explored. (Compared to Haskell, the whole C / C++ / Java / JavaScript / Delphi / VisualBasic / Perl / Python thing seems so boring.)
On the other hand... WHAT THE HECK DOES ALL THAT TEXT *MEAN*?! >_<
I agree, it's exciting to use Haskell because of its theoretical underpinning and the sense of it as a lab for PL ideas.
The other downside is that you end up with a world where most of the "tools" are in fact one-man research projects or small toys.
There are a few good, powerful, useful things out there. (GHC and Parsec immediately spring to mind.) But there's also a vast number of really tiny projects which don't seem to be terrifically well supported. Kind of makes me sad; Haskell seems almost doomed to be a language with fantastic potential, but little real-world traction.
AFAIK Haskell wasn't designed for real-world traction in the first place, but as a way of consolidating FP research efforts onto one platform, so in that sense it's a resounding success rather than "doomed". It also seems to have gained some traction, and we know that FP can be an eminently practical real-world secret weapon, so the tools you're waiting for someone else to write could well be on their way. At the same time, the only evidence for this at the moment is a lot of blogs, O'Reilly investing in a book and Eternal September on haskell-cafe. If you want a language with a bigger user base or that is less confusing, there are plenty to choose from.

Jim Burton wrote:
Andrew Coppin wrote:
The other downside is that you end up with a world where most of the "tools" are in fact one-man research projects or small toys.
There are a few good, powerful, useful things out there. (GHC and Parsec immediately spring to mind.) But there's also a vast number of really tiny projects which don't seem to be terrifically well supported. Kind of makes me sad; Haskell seems almost doomed to be a language with fantastic potential, but little real-world traction.
AFAIK Haskell wasn't designed for real-world traction in the first place, but as a way of consolidating FP research efforts onto one platform, so in that sense it's a resounding success rather than "doomed". It also seems to have gained some traction, and we know that FP can be an eminently practical real-world secret weapon, so the tools you're waiting for someone else to write could well be on their way. At the same time, the only evidence for this at the moment is a lot of blogs, O'Reilly investing in a book and Eternal September on haskell-cafe. If you want a language with a bigger user base or that is less confusing, there are plenty to choose from.
The *language* I love. Haskell is usually a joy to program with. The lack of real-world traction can be very frustrating though. It's like I just found the perfect programming language, and I can't really use it for very much... :-(

On Tue, Jul 10, 2007 at 08:19:53PM +0100, Andrew Coppin wrote:
OK, so technically it's got nothing to do with Haskell itself, but...
I was reading some utterly incomprehensible article in Wikipedia. It was saying something about categories of recursive sets or some nonsense like that, and then it said something utterly astonishing.
By playing with the lambda calculus, you can come up with functions having all sorts of types. For example,
identity :: x -> x
add :: x -> x -> x
apply :: (x -> y) -> (y -> z) -> (x -> z)
However - and I noticed this myself a while ago - it is quite impossible to write a (working) function such as
foo :: x -> y
Now, Wikipedia seems to be suggesting something really remarkable. The text is very poorly worded and hard to comprehend, but they seem to be asserting that a type can be interpreted as a logic theorem, and that you can only write a function with a specific type if the corresponding theorem is true. (Conversely, if you have a function with a given type, the corresponding theorem *must* be true.)
For example, the type for "identity" presumably reads as "given that x is true, we know that x is true". Well, duh!
Moving on, "add" tells as that "if x is true and x is true, then x is true". Again, duh.
Now "apply" seems to say that "if we know that x implies y, and we know that y implies z, then it follows that x implies z". Which is nontrivial, but certainly looks correct to me.
On the other hand, the type for "foo" says "given that some random statement x happens to be true, we know that some utterly unrelated statement y is also true". Which is obviously nucking futs.
Taking this further, we have "($) :: (x -> y) -> x -> y", which seems to read "given that x implies y, and that x is true, it follows that y is true". Which, again, seems to compute.
So is this all a huge coincidence? Or have I actually succeeded in comprehending Wikipedia?
Yup, you understood it perfectly. This is precisely the Curry-Howard isomorphism I alluded to earlier. Another good example:

foo :: ∀ pred : Nat → Prop . (∀ n:Nat . pred n → pred (n + 1)) → pred 0 → ∀ n : Nat . pred n

Which you can read as "For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.". IE, mathematical induction.

Haskell's type system isn't *quite* powerful enough to express the notion of a type depending on a number (you can hack around it with a type-level Peano construction, but let's not go there just yet), but if you ignore that part of the type:

foo :: (pred -> pred) -> pred -> Int -> pred   {- the int should be nat, ie positive -}
foo nx z 0     = z
foo nx z (n+1) = nx (foo nx z n)

Which is just an iteration function!

http://haskell.org/haskellwiki/Curry-Howard-Lambek_correspondence might be interesting - same idea, but written for a Haskell audience.

Stefan
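As a quick usage sketch of the foo just above (a hypothetical example, not part of the original message): iterating the successor function gives addition.

  plus :: Int -> Int -> Int
  plus m n = foo (+ 1) n m     -- apply (+1) to n exactly m times, i.e. n + m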

Stefan O'Rear wrote:
On Tue, Jul 10, 2007 at 08:19:53PM +0100, Andrew Coppin wrote:
So is this all a huge coincidence? Or have I actually suceeded in comprehending Wikipedia?
Yup, you understood it perfectly.
This is a rare event... I must note it on my calendar! o_O
This is precisely the Curry-Howard isomorphism I alluded to earlier.
Yeah, the article I was reading was called "Curry-Howard isomorphism". But it rambled on for, like, 3 pagefuls of completely opaque set-theoretic gibberish before I arrived at the (cryptically phrased) statements I presented above. Why it didn't just *say* that in the first place I have no idea...
Another good example:
foo :: ∀ pred : Nat → Prop . (∀ n:Nat . pred n → pred (n + 1)) → pred 0 → ∀ n : Nat . pred n
x_x
Which you can read as "For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.". IE, mathematical induction.
...and to think the idea of mathematical symbols is to make things *clearer*...
Haskell's type system isn't *quite* powerful enough to express the notion of a type depending on a number (you can hack around it with a type-level Peano construction, but let's not go there just yet), but if you ignore that part of the type:
Peano integers are like Church numerals, but less scary. ;-) (I have a sudden feeling that that would make a good quote for... somewhere...)
foo :: (pred -> pred) -> pred -> Int -> pred   {- the int should be nat, ie positive -}
foo nx z 0     = z
foo nx z (n+1) = nx (foo nx z n)
Which is just an iteration function!
Error: Insufficient cognitive power.
http://haskell.org/haskellwiki/Curry-Howard-Lambek_correspondence might be interesting - same idea, but written for a Haskell audience.
An interesting read - although again a little over my head. I find myself wondering... A polymorphic type signature such as (a -> b) -> a -> b says "given that a implies b and a is true, b is true". But what does, say, "Maybe x -> x" say?

On 7/10/07, Andrew Coppin
But what does, say, "Maybe x -> x" say?
Maybe X is the same as "True or X", where True is the statement that is always true. Remember that the definition is data Maybe X = Nothing | Just X You can read | as 'or', 'Just' as nothing but a wrapper around an X and Nothing as an axiom. So Maybe X -> X says that "True or X" implies X. That's a valid proposition.

On 2007-07-10, Dan Piponi
On 7/10/07, Andrew Coppin
wrote: But what does, say, "Maybe x -> x" say?
Maybe X is the same as "True or X", where True is the statement that is always true. Remember that the definition is
data Maybe X = Nothing | Just X
You can read | as 'or', 'Just' as nothing but a wrapper around an X and Nothing as an axiom.
So Maybe X -> X says that "True or X" implies X. That's a valid proposition.
It is? Doesn't look like it. Unless you just mean "grammatical" by valid, rather than "true". The standard function of this type is "fromJust"

fromJust (Just x) = x

It's incomplete, of course.

fromMaybe :: a -> Maybe a -> a

is complete, and (X => (True or X) => X) certainly is a true proposition.

-- 
Aaron Denney -><-
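For reference, a total definition along those lines reads as follows (this matches the standard fromMaybe from Data.Maybe):

  fromMaybe :: a -> Maybe a -> a
  fromMaybe d Nothing  = d
  fromMaybe _ (Just x) = x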

On Tue, 10 Jul 2007, Aaron Denney wrote:
On 2007-07-10, Dan Piponi
wrote: On 7/10/07, Andrew Coppin
wrote: But what does, say, "Maybe x -> x" say?
Maybe X is the same as "True or X", where True is the statement that is always true. Remember that the definition is
data Maybe X = Nothing | Just X
You can read | as 'or', 'Just' as nothing but a wrapper around an X and Nothing as an axiom.
So Maybe X -> X says that "True or X" implies X. That's a valid proposition.
It is? Doesn't look like it. Unless you just mean "grammatical" by valid, rather than "true".
It's true in Haskell - undefined is a valid proof of anything you like, which is of course rather unsound.

-- 
flippa@flippac.org
A problem that's all in your head is still a problem. Brain damage is but one form of mind damage.

On Tue, Jul 10, 2007 at 08:59:16PM +0100, Andrew Coppin wrote:
http://haskell.org/haskellwiki/Curry-Howard-Lambek_correspondence might be interesting - same idea, but written for a Haskell audience.
An interesting read - although again a little over my head.
I find myself wondering... A polymorphic type signature such as (a -> b) -> a -> b says "given that a implies b and a is true, b is true". But what does, say, "Maybe x -> x" say?
Given that x may or may not be true, x is definitely true. Which, of course, is absurd. Stefan

Stefan O'Rear wrote:
On Tue, Jul 10, 2007 at 08:59:16PM +0100, Andrew Coppin wrote:
I find myself wondering... A polymorphic type signature such as (a -> b) -> a -> b says "given that a implies b and a is true, b is true". But what does, say, "Maybe x -> x" say?
Given that x may or may not be true, x is definitely true.
Which, of course, is absurd.
...which, presumably, is because a (pure) function with that type might be able to fail?

On Jul 10, 2007, at 15:59 , Andrew Coppin wrote:
b) -> a -> b says "given that a implies b and a is true, b is
I find myself wondering... A polymorphic type signature such as (a - true". But what does, say, "Maybe x -> x" say?
Actually, because parentheses naturally group to the right in type expressions in Haskell, (a -> b) -> a -> b is in fact (a -> b) -> (a -
b), a tautology. (This should be reasonably obvious.)
Maybe x -> x is a risky proposition, in a number of senses. :) It asserts that given something that may or may not be true, it is in fact guaranteed to be true. In the Haskell library this is the "fromJust" function, which throws an exception if x is *not* true (since it clearly can't satisfy the proposition).

-- 
brandon s. allbery [solaris,freebsd,perl,pugs,haskell] allbery@kf8nh.com
system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu
electrical and computer engineering, carnegie mellon university KF8NH

On 11/07/2007, at 9:02 AM, Brandon S. Allbery KF8NH wrote:
On Jul 10, 2007, at 15:59 , Andrew Coppin wrote:
I find myself wondering... A polymorphic type signature such as (a -> b) -> a -> b says "given that a implies b and a is true, b is true". But what does, say, "Maybe x -> x" say?
Actually, because the function arrow naturally groups to the right in type expressions in Haskell, (a -> b) -> a -> b is in fact (a -> b) -> (a -> b), a tautology. (This should be reasonably obvious.)
Maybe x -> x is a risky proposition, in a number of senses. :) It asserts that given something that may or may not be true, it is in fact guaranteed to be true. In the Haskell library this is the "fromJust" function, which throws an exception if x is *not* true (since it clearly can't satisfy the proposition).
This reminds me of a little joke that Conor McBride had in a post a while ago: unJust :: Maybe wmd -> wmd Cheers, Bernie.

On Tue, 2007-10-07 at 20:59 +0100, Andrew Coppin wrote:
But it rambled on for, like, 3 pagefuls of completely opaque set-theoretic gibberish before I arrived at the (cryptically phrased) statements I presented above. Why it didn't just *say* that in the first place I have no idea...
Because the overwhelming majority of people who teach math know math
well, but do not know teaching well. Sadly it would be better for all
but the highest levels of education to have that reversed. My own
long-standing, deep distaste for the "chicken scratchings" of the pure
maths stems from incredibly smart teachers who had no idea how to
communicate what they knew to those not already there.
--
Michael T. Richter

Michael T. Richter wrote:
On Tue, 2007-10-07 at 20:59 +0100, Andrew Coppin wrote:
But it rambled on for, like, 3 pagefuls of completely opaque set-theoretic gibberish before I arrived at the (cryptically phrased) statements I presented above. Why it didn't just *say* that in the first place I have no idea...
Because the overwhelming majority of people who teach math know math well, but do not know teaching well. Sadly it would be better for all but the highest levels of education to have that reversed. My own long-standing, deep distaste for the "chicken scratchings" of the pure maths stems from incredibly smart teachers who had no idea how to communicate what they knew to those not already there.
At the risk of becoming tangential...

When you are really *deeply* knowledgeable about something, it can become seriously hard to even realise all the things you're constantly assuming your audience knows. It's so obvious *to you* that it never even crosses your mind that you might need to explain it. Heck, you don't even realise that what you're talking about relies on this concept, since it is so deeply embedded in your mind.

I see this *a lot* with computers. People who know lots about computers forget that some people don't know that a "megabyte" is (considerably) bigger than a "kilobyte". Or that having a faster CPU doesn't make Windows load faster. The number of technical documents I've seen that make perfect sense to a knowledgeable person, but would be utter gibberish to most normal folk...

I've also come across no end of product websites where the authors are so keen to tell you how brilliant their product is and all the cool features it has that they completely forget to explain WHAT THE PRODUCT DOES! For example, FreeNX. I spent *hours* trying to figure out what that actually does...! (In the end, I had to ask somebody. Turns out it does nothing of any interest to me, but still...)

I like to think that I'm quite good at explaining technical things using non-technical (but not patronising) language. But I'm probably just kidding myself...

Once during a talk I noticed I was getting strange looks and realized I was using the term "string" too freely with an audience of non- technical people. About half of them were in a beginning linguistics class and could at least handle "trees" later on (which terminology I had thought in advance to explain), but the other half were from humanities, social sciences, etc., with no special training in computing (or linguistics). I had to back-pedal quickly to find a good alternative to "string" (Sequence of characters? Piece of text? "Text" is pretty loaded for some humanities people and would have connotations I wouldn't want.) I'd never before realized how convenient (but obscure!) the term "string" was. Here these poor people were trying to figure out how some thin piece of rope was involved in programming languages ... . -- Fritz On Wed 11 Jul 07, at 11:54 am, Andrew Coppin wrote:
I see this *a lot* with computers. People who know lots about computers forget that some people don't know that a "megabyte" is (considerably) bigger than a "kilobyte". Or that having a faster CPU doesn't make Windows load faster. The number of technical documents I've seen that make perfect sense to a knowledgeable person, but would be utter gibberish to most normal folk...

On 10/07/07, Andrew Coppin
Stefan O'Rear wrote:
Another good example:
foo :: ∀ pred : Nat → Prop . (∀ n:Nat . pred n → pred (n + 1)) → pred 0 → ∀ n : Nat . pred n
x_x
Which you can read as "For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.". IE, mathematical induction.
...and to think the idea of mathematical symbols is to make things *clearer*...
As someone with a background in mathematics, I'd say that the idea of mathematical symbols is to make things more concise, and easier to manipulate mechanically. I'm not entirely certain that their intent is to make things clearer, though often they can make things more precise (which is a bit of a double edged sword when it comes to clarity). I quite often try to avoid symbols as much as possible, only switching to formulas when the argument I'm making is very mechanical or computational. After all, in most circumstances, the reader is going to have to translate the symbols back into concepts and images in their head, and usually natural language is a little farther along in that process, making things easier to read. - Cale

Cale Gibbard wrote:
On 10/07/07, Andrew Coppin
wrote: Stefan O'Rear wrote:
Another good example:
foo :: ∀ pred : Nat → Prop . (∀ n:Nat . pred n → pred (n + 1)) → pred 0 → ∀ n : Nat . pred n
x_x
Which you can read as "For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.". IE, mathematical induction.
...and to think the idea of mathematical symbols is to make things *clearer*...
As someone with a background in mathematics, I'd say that the idea of mathematical symbols is to make things more concise, and easier to manipulate mechanically. I'm not entirely certain that their intent is to make things clearer, though often they can make things more precise (which is a bit of a double edged sword when it comes to clarity). I quite often try to avoid symbols as much as possible, only switching to formulas when the argument I'm making is very mechanical or computational. After all, in most circumstances, the reader is going to have to translate the symbols back into concepts and images in their head, and usually natural language is a little farther along in that process, making things easier to read.
I always have some beef with the common perception expressed in some of the above. I suppose I can speak concretely by listing again the three different presentations appearing above and comparing them.

(A) ∀ pred : Nat → Prop .
      pred 0 →
      (∀ n:Nat . pred n → pred (n + 1)) →
      ∀ n : Nat . pred n

(B) For all statements about natural numbers,
      if the statement applies to 0,
      and if it applies to a number it applies to the next number,
      then it applies to all numbers.

(C) Mathematical induction

I have re-formatted (A) and (B) to be fair, i.e., they both receive nice formatting, and both formatting are more or less equivalent, dispelling such myths as "formal logic is unorganized" or "English is unorganized". I have formatted them to be equally well-organized.

(C) is the clearest to those people who have already learned it and haven't lost it. I grant you that if you talk to a carefully selected audience, you just say (C) and leave it at that. But if you need to explain (C), saying (C) again just won't do.

Most people claim (B) is clearer than (A) when it comes to explaining (C). One unfair factor at work is that most people have spent 10 years learning English; if they had also spent 10 years learning symbols, we would have a fairer comparison. Someone who knows C but not Haskell is bound to say that C is clearer than Haskell; we who know both languages know better.

(B) is less clear than (A) on several counts, and they are all caused by well-known unclarity problems in most natural languages such as English.

"if it applies to a number it applies to the next number" is unclear about whether "a number" is quantified by ∀ or ∃. It could be fixed but I think I know why the author didn't and most people wouldn't. If we actually try to insert "every" or "all" somewhere there, the whole of (B) sounds strange. Part of the strangeness is due to the lack of scope delimiters: Later in (B) there is another "all numbers" coming up, and two "all numbers" too close to each other is confusing. You need scope delimiters to separate them.

English provides just a handful of pronouns: I, we, you, he, she, it, they, this, that, these, those. The amount is further decimated when you make mathematical statements (pure or applied). Any statement that needs to refer to multiple subjects must run into juggle trouble. In (B), fortunately 95% of the time is spent around one single predicate, so you can just get by with "it". You can't pull the same trick if you are to explain something else that brings up two predicates and three numbers. To this end, an unlimited supply of variables is a great invention from formal logic. (I am already using this invention by designating certain objects as (A), (B), (C) and thereby referring to them as such. There is no more tractable way.) Of course with the use of variables comes again the issue of scope delimiters. Actually even pronouns can be helped by scope delimiters.

English badly lacks and needs scope delimiters. Quantifiers need them, variables need them, and here is one more: nested conditionals such as (B):

  if if X then Y then Z
  X implies Y implies Z

are unparsable. Perhaps you can insert a few commas (as in (B)) or alternate between the "if then" form and the "implies" form to help just this case:

  if X implies Y then Z

but it doesn't scale. It also proves to be fragile as we insert actual X, Y, Z:

  if it applies to a number implies it applies to the next number
  then it applies to all numbers

Now, (B) is actually parsable, but that's only because I formatted it thoughtfully.
Formatting is a form of scope delimiting, and that solves the problem. But formatting is tricky, as we have all learned from Haskell indentation rules. It is best to combine explicit delimiters such as parentheses with aesthetic formatting. Both the textually inclined and the visually inclined can parse it with ease, and (A) shows it. Actually, I formatted (A) first, since that's more obvious to format; then I copied its formatting to (B).

You may counter me with this part:

(A') ∀ n : Nat . pred n
(B') pred applies to all numbers

Surely the introduction of the variable n in (A') is an extra burden, and (B') streamlines it away? Yes, but (A') is not the best expression possible in formal logic anyway. Following the formal logic of HOL (hol.sf.net) and many others, ∀ is just another polymorphic operator:

∀ :: (a -> Prop) -> Prop

and furthermore (A') is just syntactic sugar for

∀ (λn : Nat . pred n)

which is

(A*) ∀ pred

Now it is streamlined: it only mentions the two essences of (B'): being about pred, and being universally true. No unnecessary variable. If you also want it to say something about Nat, just throw in some types.

Cale may counter that I have only been addressing precision, not "clarity" as he defined it, by ease of reading and inducing images in heads. Besides replying to him with "if it doesn't have to be correct, I can write one that's even easier to read" (with apologies to Gerald Weinberg), let me play along with that definition just a bit. What image should I form in my head, if "a number" may be universal or existential and I don't know which to image, if "it" juggles between three subjects and I don't know which to image at which instance, and if the whole thing lacks scoping so much that it is not even parsable?

As I write in another post: "I think of formal logic as clarifying thought and semantics, cleaning up the mess caused by idiosyncrasies in natural languages (both syntax and semantics) such as English. But not many people realize they are in a mess needing cleanup."

There are two ways to clean up anything. Take governments, for example. If a government is corrupted, you could help it and fix it up, or you could start a revolution and remove it. Natural languages are also corrupted when it comes to clarity of thought, reasoning, and semantics. I take the extremist approach of siding with the revolutionaries, and that is starting over with a brand new language divorced from natural words, i.e., formal logic. I do not invest hope in fixing up natural languages because of these sentiments: natural languages are too fragile to fix, natural words are too overloaded to be safe to reuse, and if you fix them up they will look just like formal logic with bloat. It is also a benefit, not a drawback, to learn a different language and broaden one's mind; it is a drawback, not a benefit, to avoid a different language and narrow one's mind. It is also a benefit, not a drawback, to keep natural languages around for their strengths: poems, puns, visions, sweeping generalizations; but do let's use a different language for a different task: a logical one for a logical task.

Textually-inclined people feel at home with formal logic. Visually-inclined people are disadvantaged, but they are just as disadvantaged with textual natural languages. What they need are visual languages. Formal logic enjoys a more well-defined grammar and is a better basis than natural languages for developing visual languages.
It only took humankind a millennium to see that clarity of arithmetic is achieved by forgetting natural language "intuition" and adopting formal language "manipulation", and then the science and engineering of physical hardware took off. It will only take you another millennium to see that clarity of semantics is achieved by forgetting natural language "intuition" and adopting formal language "manipulation", and then the science and engineering of logical software will take off. Already, it has started: http://www.abo.fi/~backrj/index.php?page=Teaching%20mathematics%20in%20high%...

On 12/07/07, Albert Y. C. Lai
Cale Gibbard wrote:
As someone with a background in mathematics, I'd say that the idea of mathematical symbols is to make things more concise, and easier to manipulate mechanically. I'm not entirely certain that their intent is to make things clearer, though often they can make things more precise (which is a bit of a double edged sword when it comes to clarity). I quite often try to avoid symbols as much as possible, only switching to formulas when the argument I'm making is very mechanical or computational. After all, in most circumstances, the reader is going to have to translate the symbols back into concepts and images in their head, and usually natural language is a little farther along in that process, making things easier to read.
I always have some beef with the common perception expressed in some of the above.
This is quite a long reply!
I suppose I can speak concretely by listing again the three different presentations appearing above and comparing them.
(A) ∀ pred : Nat → Prop . pred 0 → (∀ n:Nat . pred n → pred (n + 1)) → ∀ n : Nat . pred n
(B) For all statements about natural numbers, if the statement applies to 0, and if it applies to a number it applies to the next number, then it applies to all numbers.
(C) Mathematical induction
This case does call for some names of things. Here's one way I might write it:

Theorem (Mathematical Induction for Naturals): Let P be a predicate on natural numbers. If P(0) is true, and whenever P(k) is true, we have that P(k+1) is true as well, then P(n) holds for every natural number n.

I'm fairly sure that I prefer that to both A and B, at least for the purposes of teaching.
I have re-formatted (A) and (B) to be fair, i.e., they both receive nice formatting, and both formatting are more or less equivalent, dispelling such myths as "formal logic is unorganized" or "English is unorganized". I have formatted them to be equally well-organized.
(C) is the clearest to those people who have already learned it and haven't lost it. I grant you that if you talk to a carefully selected audience, you just say (C) and leave it at that. But if you need to explain (C), saying (C) again just won't do.
Most people claim (B) is clearer than (A) when it comes to explaining (C). One unfair factor at work is that most people have spent 10 years learning English; if they had also spent 10 years learning symbols, we would have a fairer comparison. Someone who knows C but not Haskell is bound to say that C is clearer than Haskell; we who know both languages know better.
However, if they've spent 10 years on English, and, say, 6 more on learning a formal system, it seems reasonable to take advantage of both, wherever one outstrips the other.
(B) is less clear than (A) on several counts, and they are all caused by well-known unclarity problems in most natural languages such as English.
"if it applies to a number it applies to the next number" is unclear about whether "a number" is quantified by ∀ or ∃. It could be fixed but I think I know why the author didn't and most people wouldn't. If we actually try to insert "every" or "all" somewhere there, the whole of (B) sounds strange. Part of the strangeness is due to the lack of scope delimiters: Later in (B) there is another "all numbers" coming up, and two "all numbers" too close to each other is confusing. You need scope delimiters to separate them.
Just name your variables differently.
English provides just a handful of pronouns: I, we, you, he, she, it, they, this, that, these, those. The amount is further decimated when you make mathematical statements (pure or applied). Any statement that needs to refer to multiple subjects runs into juggling trouble. In (B), fortunately, 95% of the time is spent around one single predicate, so you can just get by with "it". You can't pull the same trick if you are to explain something else that brings up two predicates and three numbers. To this end, an unlimited supply of variables is a great invention from formal logic. (I am already using this invention by designating certain objects as (A), (B), (C) and thereby referring to them as such. There is no more tractable way.) Of course, with the use of variables comes again the issue of scope delimiters. Actually, even pronouns can be helped by scope delimiters.
You can use variables to name things, and you can also use quantifiers without actually using the formal symbols for them.
English badly lacks and needs scope delimiters. Quantifiers need them, variables need them, and here is one more case: nested conditionals such as (B). Written flat,

if if X then Y then Z
X implies Y implies Z

are unparsable. Perhaps you can insert a few commas (as in (B)) or alternate between the "if then" form and the "implies" form to help just this case:

if X implies Y then Z

but it doesn't scale. It also proves to be fragile as we insert actual X, Y, Z:
if it applies to a number implies it applies to the next number then it applies to all numbers
Now, (B) is actually parsable, but that's only because I formatted it thoughtfully. Formatting is a form of scope delimiting, and that solves the problem. But formatting is tricky, as we have all learned from Haskell indentation rules. It is best to combine explicit delimiters such as parentheses with aesthetic formatting. Both the textually inclined and the visually inclined can parse it with ease, and (A) shows it. Actually, I formatted (A) first, since that's more obvious to format; then I copied its formatting to (B).
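An aside in Haskell rather than English (Chain, Grouped and uncurryChain are placeholder names invented for this illustration): the grouping problem above is the one the function arrow already solves with parentheses, and the two parses come out as different types.

-- "X implies Y implies Z", written as a bare chain of arrows, gets one
-- fixed reading by convention: (->) associates to the right.
type Chain x y z = x -> y -> z          -- parsed as  x -> (y -> z)

-- Only parentheses, the scope delimiters, select the other reading,
-- which under Curry-Howard is a different proposition altogether.
type Grouped x y z = (x -> y) -> z

-- The chain reading really says "x and y together imply z":
uncurryChain :: Chain x y z -> ((x, y) -> z)
uncurryChain f (x, y) = f x y

There is no function of type Chain x y z -> Grouped x y z that works for every x, y and z (it would have to conjure an x from nowhere), so the two parses are genuinely different theorems; the parentheses do real work.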
Of course, there are nesting limitations to English, but when they get heavy, you will also find that people will automatically start finding new names for things, in order to avoid having to awkwardly insert pauses into their speech to indicate where the brackets should go. To some extent, this is where new abstractions come from, the limitations of our brains in holding onto arbitrarily nested constructions. It's also possible to make (numbered) lists of conditions or results, which makes it more apparent where each component begins and ends. There are phrases like "The following are equivalent". And if all that fails, you can of course still fall back on formal symbols, but you're probably better off making a new definition or two.
You may counter me with this part:
(A') ∀ n : Nat . pred n
(B') pred applies to all numbers
Not really, I actually like that variable to be there sometimes too.
Surely the introduction of the variable n in (A') is an extra burden, and (B') streamlines it away? Yes, but (A') is not the best expression possible in formal logic anyway. Following the formal logic of HOL (hol.sf.net) and many others, ∀ is just another polymorphic operator:

∀ :: (a -> Prop) -> Prop

and furthermore (A') is just syntactic sugar for ∀ (λn : Nat . pred n), which is
(A*) ∀ pred
Now it is streamlined: it only mentions the two essences of (B'): being about pred, and being universally true. No unnecessary variable. If you also want it to say something about Nat, just throw in some types.
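For concreteness, here is roughly how that desugaring looks when transcribed into Haskell type signatures. Prop, forAll, sugared and streamlined are placeholder names for this sketch; HOL's actual quantifier lives in its own logic, so forAll is given a type but no definition.

-- Abstract stand-ins; no constructors are needed for the sketch.
data Prop
data Nat

-- In a HOL-style logic, the universal quantifier is just another
-- higher-order constant of this type; its meaning comes from the logic,
-- not from a Haskell definition.
forAll :: (a -> Prop) -> Prop
forAll = undefined

-- "forall n : Nat . p n" desugars to an application of forAll to a lambda,
-- and eta-reduction then gives (A*): forAll applied to the bare predicate.
sugared, streamlined :: (Nat -> Prop) -> Prop
sugared     p = forAll (\n -> p n)
streamlined p = forAll p

The two definitions are eta-equivalent, which is all the step from (A') to (A*) amounts to.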
I must admit that I'm far more used to first order logic with some set theoretic axioms stacked on top of it.
Cale may counter that I have only been addressing precision, not "clarity" as he defined it, by ease of reading and inducing images in heads. Besides replying to him with "if it doesn't have to be correct, I can write one that's even easier to read" (with apologies to Gerald Weinberg), let me play along with that definition just a bit. What image should I form in my head, if "a number" may be universal or existential and I don't know which to picture, if "it" juggles between three subjects and I don't know which to picture at which instance, and if the whole thing lacks scoping so much that it is not even parsable?
The point is that if that's really imprecise in the sense that the intended listener might mistake it to mean something else, then the text should certainly be improved, but it's easy to use quantifiers in mathematical English too. We can say "for all numbers n" or "there exists a number n", or some variation meaning one of these. We can also say things like "Let n be a number", or "Suppose there were an n such that P(n)."
As I write in another post:
"I think of formal logic as clarifying thought and semantics, cleaning up the mess caused by idiosyncracies in natural languages (both syntax and semantics) such as English. But not many people realize they are in a mess needing cleanup."
There are two ways to clean up anything. Take governments, for example. If a government is corrupt, you could help it and fix it up, or you could start a revolution and remove it.
Natural languages are also corrupt when it comes to clarity of thought, reasoning, and semantics. I take the extremist approach of siding with the revolutionaries: start over with a brand new language divorced from natural words, i.e., formal logic. I do not invest hope in fixing up natural languages, because of these sentiments: natural languages are too fragile to fix, natural words are too overloaded to be safe to reuse, and if you did fix one up it would look just like formal logic with bloat. It is also a benefit, not a drawback, to learn a different language and broaden one's mind; it is a drawback, not a benefit, to avoid a different language and narrow one's mind. It is also a benefit, not a drawback, to keep natural languages around for their strengths: poems, puns, visions, sweeping generalizations; but do let's use a different language for a different task: a logical one for a logical task.
Yeah, so let's suppose you don't have a blackboard handy, let's say you're talking to your friend on the phone. How do you convey mathematical statements? I certainly hope that you don't describe each character in turn! You'll probably render the logic into some spoken form, which is quite likely to be just the sort of thing which I'd tend to write down in common use.

Don't get me wrong, when it's appropriate, I'll actually use explicit quantifier symbols (∀, ∃), and symbolic forms of things like "implies", "and", "or", and so on, but usually this is when either I'm simply unravelling some definitions, or I feel that a particularly symbolic/computational approach to logic seems to be the best level at which to consider the given statements. Usually though, I like to keep the reader's mind focused on the subject matter at hand, and not the formal logic framework that we happen to be using. When you're specifically doing mathematical logic, of course it's more appropriate to use the symbolic form, since the whole point is in the manipulation and discussion of symbolic sentences. When you're doing geometry or combinatorics, it's generally inappropriate to get involved in minutiae like that too often.
Textually inclined people feel at home with formal logic. Visually inclined people are disadvantaged, but they are just as disadvantaged with textual natural languages. What they need are visual languages. Formal logic enjoys a better-defined grammar than natural languages and is therefore a better basis for developing visual languages.
I'm not entirely sure this last claim is true. For me at least, the visual language which I use to reason about mathematical objects is almost entirely divorced from the original English or formal statements, except that each of the components readily has an interpretation in terms of them. Let's take a simple example from the context of topology. Consider the perhaps still somewhat vague statement:

Let S be a surface obtained by removing an open disc from each of the torus and the projective plane, and gluing a cylinder between the two circles at the boundaries of the removed discs.

This is a translation into English of some shapes in my head. Note that this involves no explicit maps of any kind. I don't even really mention that there's a quotient, except for using the word 'gluing'. Let's assume we don't yet have some operation which does this all formally. You can probably still see quite clearly what I want, geometrically. You can also probably formalise it for yourself, perhaps in even more generality than this specific case. I would say that, among the right audience, it does a fairly adequate job of defining a surface up to homeomorphism, putting it in the heads of the people in the room. I can go on to talk about paths in that space fairly adequately, and even perhaps draw a picture of it without people shouting that it's not at all the picture that they had (at least insofar as I can draw a recognisable picture of the projective plane!).

If I *do* happen to have the operation which does this formally, then by saying what I've said above, I convey that I want that operation to those people who are aware of it, and I convey something which is still reasonably meaningful to those who don't -- they might even be able to determine what the formal definition must be just from that. On the other hand, if I'm reasonably sure that everyone at hand is aware of the formalism, I might say: "Let S be the surface obtained by taking the connected sum of the projective plane and the torus." Or even the symbolic: "Let S = P # T", requiring a little more context, of course. These are both far more concise and far more opaque to those unfamiliar with the context.

To define the connected sum more formally would take just a little more time and care, which may or may not be worthwhile depending on how detailed a discussion one wants to get into. Obviously, there are many circumstances which warrant that, and any extended period of work would. As an introduction to the whole idea of the connected sum, studying an example similar to this might be useful before moving to the general idea. Even when doing so, writing the definition of the connected sum in terms of formal logic will generally not make it clearer. In any proper treatment though, the quotient and the isomorphism between the two circles would be made explicit. I might remove the cylinder from the formalisation, since it is mostly there as a visual aid rather than something which is mathematically necessary. I might even manage to bore people by spelling the details out though. 'Oh, come on, we know what you mean already! It was enough to say "glue the circles together!"'

My point is not that the formalism shouldn't be there, somewhere. It's just that writing it down is often an exercise in tedium, for any but the most primitive of tasks. If you don't believe it, have a look at metamath.org.
While that level of rigour is a worthy goal as a separate project, it's not something you'll ever convince any mathematician to do on an everyday basis in their own work, unless the person in question is a hardcore logician. Even then, it's likely going to be an uphill battle. The reason is that this sort of translation into the mechanical is usually extremely boring! It can be complicated certainly, but usually long before you've finished, perhaps even before you've begun, you've convinced yourself that it's doable. The rest is busy work, and typically doesn't serve to elucidate anything apart from possibly some details about the formalism in question rather than the theorem you're embedding into it.

My algebraic topology professor had a word for the sort of language which is typically used, and what most mathematicians are aiming for: rigourisable. Specifically, the goal is to pin enough details down so that anyone in the audience could sit down and, with knowledge of a particular formalism, turn it into something that a computer would accept. Humans are at present much better than machines at bridging all the small gaps in a consistent fashion (like applying the appropriate isomorphisms and so on), and there's little reason that we should suffer to communicate with each other at their level. On the other hand, for many modern papers, putting some extra details somewhere seems like something worthwhile, and you might even be able to get graduate students to do a bit of it. Writing everything into a formal, machine-checkable system still seems pretty unlikely though.

There's also a good philosophical reason for working in a way which is slightly detached from any particular formalism when possible -- any particular formalism for mathematics might happen to be inconsistent. The normal level of detachment from formalism would cushion many results from the shock-waves an inconsistency would send through every branch of mathematics, as it is quite likely that many things will still have justifiable translations in terms of whatever new system people would then come up with. Things written directly at the level of the formalism would be comparatively brittle, and possibly need translation up into softer language and then back down into the new system.

Anyway, that's just my own perspective on the matter. Sorry for going on for so long in this somewhat off-topic discussion. :)
participants (18)
- Aaron Denney
- Albert Y. C. Lai
- Alexis Hazell
- Andrew Coppin
- Bernie Pope
- Brandon S. Allbery KF8NH
- Cale Gibbard
- Creighton Hogg
- Dan Piponi
- Fritz Ruehr
- Jim Burton
- Jonathan Cast
- Michael T. Richter
- Neil Davies
- Philippa Cowderoy
- Stefan O'Rear
- Steve Downey
- Tony Morris