On Thu, Apr 4, 2013 at 3:29 PM, Albert Y. C. Lai <trebla@vex.net> wrote:
On 13-04-04 01:07 AM, wren ng thornton wrote:
When the quantifiers are implicit, we can rely on the unique human ability
to DWIM. This is a tremendous advantage when first teaching people about
mathematical concerns from a logical perspective. However, once people
move beyond the basics of quantification (i.e., schemata, rank-1
polymorphism, etc.), things get really hairy, and this has led to no end of
quibbles in philosophy and semantics, where people seem perversely
attached to ill-specified and outdated logics.

On the other hand, with explicit quantification you can't rely on DWIM and
must teach people all the gritty details up front--- since the application
of those details is being made explicit. This is an extremely difficult
task for people who are new to symbolic reasoning/manipulation in the
first place. In my experience, it's better to get people fluently
comfortable with symbolic manipulations before even mentioning
quantifiers.

I agree with most of it. I disagree with the first two sentences. With only one variable, and therefore only one implicit quantifier, and even being told that this quantifier is placed outermost, it is already hairy enough. The unique human ability to DWIM (do what *I* mean), as opposed to DWYM (do what *you* mean), can be relied upon for confusion, frustration, students and teachers being at each other's throats, and the need to quibble in semantics, which is WDYM (what do you mean), after all.

Students need to learn to do what the professor means.  That is what they are paying for.

Let me tell you a short story about the first college math class I ever took.  It was a course in introductory analysis, where we started by constructing the natural numbers, slowly introduced the field axioms, and finally constructed R and C.  Well, sometime during the first week, we encountered "0", a symbol I had obviously seen before.  And I used some properties I knew about zero in a proof.  My professor called me into his office and told me to treat the circular thing he was talking about in class as, essentially, a symbol whose semantics we were /determining/ in terms of the axioms introduced.

This is the very paradigm of "semantic quibbling" -- what does "0" MEAN?  It depends!  And not just on the axioms in play, but also on the community of language speakers/math doers, and the contexts in which they are discussing a problem.  (Indeed, a week later, I had proved enough about the symbol 0 that it /BECAME/ the zero I knew and loved in high school.)

In IRC channel #haskell, beginners write code like the following and ask why it is a type error:

f :: Bool -> t
f True = 'x'
f False = 'y'

You may think you know what's wrong, but you don't actually know until you know how to explain it to the beginners. Note: harping on the word "any" does not clarify anything, because the beginners say exactly this:

"Yeah, t can be *any* type, therefore *I* am making it Char. Isn't that what *any* means?"

Indeed.  And this confusion would continue to exist if the quantifier were made explicit.  Notice that the confusion is about what "any" means, and not what "upside-down A" means, or what a "missing" upside-down A means.
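For concreteness, here is a small GHC sketch of what "the caller chooses t" looks like with the quantifier spelled out.  (The `pick` helper and its name are mine, not from the thread -- just an illustration of a function that genuinely works for any t.)

```haskell
{-# LANGUAGE ExplicitForAll #-}

-- The beginners' signature, with its implicit quantifier spelled out, is
--   f :: forall t. Bool -> t
-- "forall t" means the *caller* chooses t, so the body would have to
-- produce a value of every possible type -- which 'x' :: Char is not.
-- The fix is to state what f actually returns:
f :: Bool -> Char
f True  = 'x'
f False = 'y'

-- A function that genuinely works "for any t" cannot manufacture a t
-- out of a Bool; all it can do is pass one of its arguments through:
pick :: forall t. t -> t -> Bool -> t
pick x _ True  = x
pick _ y False = y
```

The point survives translation: the type error comes from who gets to choose t, not from whether the upside-down A is written down.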