Haskell is a declarative language? Let's see how easy it is to declare types of things.

I absolutely love to use Haskell when teaching (and I have several years of experience doing it). And I absolutely dislike it when I have to jump through hoops to declare types in the most correct way, and in the most natural places. This is hard to sell to the students.

Examples:

1. For explicit declaration of type variables, as in

       reverse :: forall (a :: *) . [a] -> [a]

   I have to switch on RankNTypes and/or KindSignatures (ghc suggests). C'mon, this has nothing to do with ranks per se. It's the type of a very simple function! IMHO even Java/C# do this better (with slightly strange syntax, but at least you get to declare the type variable).

2. For declaring types of local variables, as in

       \ (xs :: [Bool]) -> ...

   I have to enable PatternSignatures (actually ghc suggests ScopedTypeVariables, but again there is no type variable in sight). I need to do this often, to disambiguate properties for Smallcheck.

All of this just because it seemed, at some time, a clever idea to allow the programmer to omit quantifiers? (I know, mathematicians do this all over the place, but it is never helpful, and especially not when teaching.)

I think that (1) implies (2): there could be no ambiguity about scopes (of type variables in patterns) if each type variable had to be introduced by explicit quantification. Then (in a pattern signature) a use could be distinguished from a declaration.

Yes, the above is a rant. I can achieve what I want with some ghc options, and perhaps my point is that these should be on by default - or have better names, because presently they are unintelligible to the (beginner) student.
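For reference, here is a small module that accepts both examples. This is only a sketch, assuming a GHC with ScopedTypeVariables (which subsumes the old PatternSignatures) and KindSignatures switched on; the names and function bodies are illustrative, not from the original mail.

    {-# LANGUAGE ScopedTypeVariables, KindSignatures #-}
    module Quantified where

    -- an explicit quantifier and kind annotation on an ordinary function type
    rev :: forall (a :: *) . [a] -> [a]
    rev = foldl (flip (:)) []

    -- a pattern signature on a lambda-bound variable, of the kind used to
    -- disambiguate Smallcheck properties
    prop :: [Bool] -> Bool
    prop = \ (xs :: [Bool]) -> rev (rev xs) == xs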

Hi Johannes. I know this isn't really an answer, but ...
1. for explicit declaration of type variables, as in
reverse :: forall (a :: *) . [a] -> [a]
I have to switch on RankNTypes and/or KindSignatures (ghc suggests).
... you can do this with "ExplicitForAll" rather than "RankNTypes", which indeed does not enable rank-n types, but only allows you to use the "forall" syntax.

Cheers,
  Andres

--
Andres Löh, Haskell Consultant
Well-Typed LLP, http://www.well-typed.com
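That is, something like the following already compiles with that extension alone (a minimal sketch; reverse' is an arbitrary name, and the kind annotation from the original example would additionally need KindSignatures):

    {-# LANGUAGE ExplicitForAll #-}

    -- the quantifier is now explicit, but the type is still rank-1
    reverse' :: forall a . [a] -> [a]
    reverse' = reverse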

Hi Johannes, Johannes Waldmann wrote:
I absolutely dislike it when I have to jump through hoops to declare types in the most correct way, and in the most natural places.
reverse :: forall (a :: *) . [a] -> [a]

\ (xs :: [Bool]) -> ...
All of this just because it seemed, at some time, a clever idea to allow the programmer to omit quantifiers?
As I understand it, in ML, it seemed to be a clever idea to not have type signatures at all.

From the type-theoretic point of view, I guess this is related to your view of what a polymorphic function is.

One view says that a polymorphic function is qualitatively different from each of its instances and that forall is a type constructor, so there should be introduction and elimination forms for terms of that type. Instantiation and generalization are explicit operations with computational content, that is, they have some effect at runtime. This view leads to System F with its explicit type abstraction and type application forms.

The other view says that a polymorphic function is the union of all of its instances and that instantiation and generalization are implicit, structural operations that have no computational content, that is, they do not affect what happens at runtime. This view leads to ML with its very implicit handling of polymorphism.

It seems that in Haskell, we started with the ML-ish view that polymorphism is an implicit, structural thing, but we moved further and further towards the System-F-ish view that polymorphism is an explicit, computational thing. A good indicator for this is the monomorphism restriction, which supposedly helps beginners to cope with the computational effects of polymorphism. So apparently, there *are* such effects. Another indicator is type classes, in that they attach further computational content to type variables.

Tillmann
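A concrete illustration of that last point, as a sketch (the binding and the numbers are arbitrary):

    -- Under the monomorphism restriction (the default), 'n' is not
    -- generalized: it defaults to Integer and is a shared thunk,
    -- computed at most once.
    n = product [1 .. 20]

    -- With {-# LANGUAGE NoMonomorphismRestriction #-}, 'n' generalizes to
    --   n :: Num a => a
    -- which GHC compiles to a function of a Num dictionary, so each use
    -- below may recompute the product: generalization is not free.
    main :: IO ()
    main = print (n + n)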

Hi Tillmann,
On Wed, Apr 3, 2013 at 11:59 PM, Tillmann Rendel
From the type-theoretic point of view, I guess this is related to your view of what a polymorphic function is.
Do you have a reference to the previous conversation?
but we moved further and further towards the System-F-ish view that polymorphism is an explicit, computational thing.
While that may be true, I read the original email as merely referring to the explicit annotation of types (where they are customarily neither seen nor written) as a particular pedagogical approach for new students, something which has much to recommend it. Which seems miles away from what you're alluding to. Full-blown type-level programming? Operational semantics at the type level? I'm not sure. -- Kim-Ee

Hi Kim-Ee, Kim-Ee Yeoh wrote:
[...] I guess this is related to your view of [...]
Do you have a reference to the previous conversation?
Sorry, I mean "related to one's view of", not "related to Johannes Waldmann's view of".
Which seems miles away from what you're alluding to. Full-blown type-level programming? Operational semantics at the type-level? I'm not sure.
That's not what I tried to allude to. I mean the operational semantics of instantiation and generalization *at the term level*. In System F, there are two contraction rules. The usual β rule:

    (λ x : τ . e1) e2   ~>   subst e2 for x in e1

and an additional β-like rule for type application and abstraction:

    (Λ α . e) [τ]   ~>   subst τ for α in e

So in System F, type application and type abstraction have computational content. I think this can become visible in GHC Haskell as well, but I cannot find an example without type classes. Maybe I'm wrong.

Tillmann
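For what it's worth, here is a small sketch of the type-class case, where instantiating a type variable really does pass something at runtime, namely the class dictionary (the names are illustrative, and the Core notation in the comments is approximate):

    {-# LANGUAGE ExplicitForAll #-}

    -- At the source level, instantiating 'a' looks like a no-op.
    describe :: forall a . Show a => a -> String
    describe x = "value: " ++ show x

    -- In GHC's Core, however, 'describe' takes a type argument *and* a
    -- dictionary, roughly:  describe = /\a . \(d :: Show a) -> ...
    -- so the instantiation at Int below passes the Show Int dictionary
    -- at runtime.
    main :: IO ()
    main = putStrLn (describe (42 :: Int))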

On 4/04/2013, at 5:59 AM, Tillmann Rendel wrote:
As I understand it, in ML, it seemed to be a clever idea to not have type signatures at all.
Wrong. In ML, it seemed to be a clever idea not to *NEED* type signatures, and for local definitions they are very commonly omitted. But you can certainly HAVE type signatures, and for modules ('structures') it is normal to have them in the interfaces ('signatures').

    signature SOME_SIG =
    sig
        val f : int -> int list -> int
        val a : int
    end

    structure Some_Struct : SOME_SIG =
    struct
        fun f i []      = i
          | f i (x::xs) = f (i+x) xs
        val a = 42
    end

What ML does not have is type *quantifiers*. For that matter, you could say that ML has one type class: Eq.

Hi, Richard A. O'Keefe wrote:
As I understand it, in ML, it seemed to be a clever idea to not have type signatures at all.
Wrong. In ML, it seemed to be a clever idea not to *NEED* type signatures, and for local definitions they are very commonly omitted.
In the ML I used, I remember that it was syntactically impossible to use type signatures directly adjacent to a local definition. Instead, I was taught to put something like a type signature in a comment, approximately like this:

    (* getOpt : 'a option * 'a -> 'a *)
    fun getOpt (SOME x, y) = x
      | getOpt (NONE,   y) = y

An example of this style can be found in Danvy and Nielsen, Defunctionalization at Work, Proc. of PPDP 2001 (extended version available at ftp://ftp.daimi.au.dk/pub/BRICS/Reports/RS/01/23/BRICS-RS-01-23.pdf).
But you can certainly HAVE type signatures, and for modules ('structures') it is normal to have them in the interfaces ('signatures').
In my opinion, a signature for a structure would not count as "directly adjacent". Also, while I don't know much about the history of ML, I suspect that structures and signatures were a later addition to the core language.

I just checked Milner, Tofte and Harper, The Definition of Standard ML, MIT Press 1990 (available at http://www.itu.dk/people/tofte/publ/1990sml/1990sml.html), and learned that we can have explicit type annotations for both patterns and expressions, so the following variants of the above are possible in Standard ML:

1. We can embed parts of the signature into the definition:

       fun getOpt (SOME (x : 'a), y : 'a) : 'a = x
         | getOpt (NONE,          y : 'a) : 'a = y

   Decomposing the type signature into its parts and then duplicating those parts is not very attractive.

2. We can do better by avoiding the derived form for function bindings:

       val getOpt : 'a option * 'a -> 'a =
           fn (SOME x, y) => x
            | (NONE,   y) => y

   This looks ok to me, and I wonder why I was not taught this style, at least as an alternative. The benefit over type signatures in comments is clear: the type checker will check that the signatures are accurate, and there is a chance that error messages contain type variables chosen by the programmer instead of automatically generated names.

Tillmann

On 5/04/2013, at 1:22 AM, Tillmann Rendel wrote:
Hi,
Richard A. O'Keefe wrote:
As I understand it, in ML, it seemed to be a clever idea to not have type signatures at all.
Wrong. In ML, it seemed to be a clever idea not to *NEED* type signatures, and for local definitions they are very commonly omitted.
In the ML I used, I remember that it was syntactically impossible to use type signatures directly adjacent to a local definition.
Notice that the goal-posts just got moved far, far away. The original complaint was that ML did not allow type signatures at all. Now the complaint is that it does not allow them to be "directly adjacent".
Instead, I was thaught to put something like a type signature in a comment, approximately like this:
(* getOpt : 'a option * 'a -> 'a *)
fun getOpt (SOME x, y) = x
  | getOpt (NONE,   y) = y
Well, that's a bit silly, isn't it? Put getOpt in a structure where it belongs, and the type in the signature, where the compiler can enforce it. A type signature that is not enforced is a type signature you cannot trust.
But you can certainly HAVE type signatures, and for modules ('structures') it is normal to have them in the interfaces ('signatures').
In my opinion, a signature for a structure would not count as "directly adjacent". Also, while I don't know much about the history of ML, I suspect that structures and signatures were a later addition to the core language.
The history of ML is not so hard to find out. Structures and signatures have been part of "Standard" ML since the 1980s. The ML in Edinburgh LCF and Luca Cardelli's VAX ML were significantly different languages from SML, but VAX ML at least did have modules. The distinction between "Core" and "Modules" in old SML documentation has to do with convenience of specification more than anything else.
I just checked Milner, Tofte and Harper, The Definition of Standard ML, MIT Press 1990 (available at http://www.itu.dk/people/tofte/publ/1990sml/1990sml.html),
That's the old, out-of-date edition. The current version of the language is SML'97. However, this much is the same:

- there are declarations that provide *interfaces* (grammatical category "spec" -- see page 13), which may occur in signatures;
- and declarations that provide *values* (grammatical category "dec" -- see page 8), which may occur in structures, at top level, and elsewhere.

So it is definitely true that, by design, SML "type signatures" (strictly speaking, "valdescs") are segregated from SML function definitions ("valbinds" or "fvalbinds"). This is pretty much a consequence of SML having a very expressive module system in which modules _have_ signatures.
and learned that we can have explicit type annotations for both patterns and expressions, so the following variants of the above are possible in Standard ML:
1. We can embed parts of the signature into the definition:
fun getOpt (SOME (x : 'a), y : 'a) : 'a = x
  | getOpt (NONE,          y : 'a) : 'a = y
Perhaps better:

    fun get_opt (SOME x, _) = x
      | get_opt (NONE,   y) = y

    val getOpt : 'a option * 'a -> 'a = get_opt

which can be written

    local
        fun getOpt (SOME x, _) = x
          | getOpt (NONE,   y) = y
    in
        val getOpt : 'a option * 'a -> 'a = getOpt
    end
Decomposing the type signature into its parts and then duplicating those parts is not very attractive.
This is a sure sign that you are NOT using the language the way it was intended to be used, and maybe it isn't the language that's wrong.
2. We can do better by avoiding the derived form for function bindings:
val getOpt : 'a option * 'a -> 'a =
    fn (SOME x, y) => x
     | (NONE,   y) => y
This looks ok to me, and I wonder why I was not taught this style,
Because it is atrociously ugly.

(Aside: "val" here should be "val rec" in general.)

ML is designed to have type specifications for functions *inside signatures*. Since functions are supposed to be declared inside structures, and structures are supposed to *have* signatures, it would be rather silly to write function type specifications twice. So ML is *not* designed to encourage you to do that.

On Wed, Apr 3, 2013 at 8:28 AM, Johannes Waldmann <waldmann@imn.htwk-leipzig.de> wrote:
All of this just because it seemed, at some time, a clever idea to allow the programmer to omit quantifiers? (I know, mathematicians do this all over the place, but it is never helpful, and especially not when teaching.)
There's your problem. Mathematicians do this specifically because it is helpful. If anything, explicit quantifiers and their interpretations are more complicated. People seem to naturally get how scoping works in mathematics until they have to figure out free and bound variables.

On 13-04-03 07:39 PM, Alexander Solla wrote:
There's your problem. Mathematicians do this specifically because it is helpful. If anything, explicit quantifiers and their interpretations are more complicated. People seem to naturally get how scoping works in mathematics until they have to figure out free and bound variables.
Quantifiers are complicated, but I don't see how explicit is more so than implicit. If anything, it should be the other way round, since correctly interpreting the implicit case incurs the extra step of first correctly guessing how to explicate. (If it doesn't have to be correct, I know how to do it 100 times simpler, to paraphrase Gerald Weinberg.)

I have just seen recently:

    prove or disprove: for all e>0, there exists d>0,
    if 0 < |x-a| < d then |f(x)-b| < e

[...]
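The same guessing game exists in Haskell itself. A standard example (a sketch; the function name is illustrative): without an explicit forall, the 'a' in the inner signature is silently re-quantified, and the seemingly natural program is rejected.

    f :: [a] -> [a]          -- implicitly: forall a . [a] -> [a]
    f xs = ys ++ ys
      where
        ys :: [a]            -- implicitly: forall a1 . [a1] -- a *fresh* variable!
        ys = reverse xs      -- rejected: reverse xs has the outer type [a]

    -- Accepted with {-# LANGUAGE ScopedTypeVariables #-} and the signature
    --   f :: forall a . [a] -> [a]
    -- whose explicit quantifier brings 'a' into scope over the where-clause.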

Albert Y. C. Lai
Quantifiers are complicated, but I don't see how explicit is more so than implicit. [...] I have just seen recently [...]
Great example. I completely agree. My feeling is that mathematicians use this principle of leaving out some of the quantifiers and putting some others in the wrong place as a cultural entry barrier to protect their field from newbies. Well, not "use", but "willingly tolerate", perhaps. (I do have a diploma in mathematics, from a German university.)

On Thu, Apr 04, 2013 at 11:02:34AM +0000, Johannes Waldmann wrote:
My feeling is that mathematicians use this principle of leaving out some of the quantifiers and putting some others in the wrong place as a cultural entry barrier to protect their field from newbies.
Albert showed an example of leaving out quantifiers, but I didn't see an example of quantifiers in the wrong place. Did I miss one? Tom

On Thu, Apr 04, 2013 at 01:15:27PM +0000, Johannes Waldmann wrote:
Tom Ellis writes:
I didn't see an example of quantifiers in the wrong place.
The example was:
every x satisfies P(x,y) for some y
Oh I see. I interpreted that as lack of disambiguating parentheses, but it's not unreasonable to assert that quantifiers should always come before formulas.

Syntax certainly is hard. I remember being completely puzzled the first time I saw

     /
     |  dx (long expression in x)
    /

when I expected

     /
     |  (long expression in x) dx
    /

(Those are integral signs, by the way!)

Tom

(Folks, let's rescue this increasingly tendentious thread.) Some points to ponder:

(1) "Any" can often be clarified to mean "all", depending on how polymorphic functions are exegeted. In a homotopy-flavored explanation of natural transformation, its components (read "parametric instances") exist /all at once/ like the collinear rays of a rainbow. So given this:
f :: Bool -> t
f True  = 'x'
f False = 'y'
nope, no rainbow. :(
... the aftermath flame war of "why didn't you say it earlier?" "because it's obvious!" "no it's not!" "yes it is!" "is not!" "but all mathematicians find it obvious!" "well then I am not a mathematician and will never be!"
And more to the point, an excellent learning environment requires empathy from all participants, juniors and seniors alike. Where diversity is celebrated as a source of new ideas and new ways to explain old ones. And where the coupling between the two is as gut-obvious as day and night.

(2)
prove or disprove: for all e>0, there exists d>0, if 0 < |x-a| < d then |f(x)-b| < e
I grant you that clearly the implicit quantifiers for a and b are "for all". The unclear part is where to put them. There are essentially 4 choices
That's a stretch. It's all in the context, and here it's clearly a continuity verification exercise from freshman calculus. Unless being deliberately obtuse, one has no excuse for not inferring where the quantifiers go if one knows about a theorem prover, let alone wields one to nuke this gnat of a proof.

Moreover, if we grant the imaginary student the rudiments of logic, 3 of the 4 "choices" render the statement vacuously true. Almost. Set d to deny the antecedent, QED.

I partly agree with Albert's main point, notwithstanding his choice of examples, that the absence of explicit quantifiers can lead to confusion. It all depends. On the other hand, Alexander Solla is also on the money with his remark. A mathematician writes [1], "In particular, any given assertion in hard analysis usually comes with a number of unsightly quantifiers (For every ε there exists an N…) which can require some thought for a reader to parse."

(3) Newspaper headline: Someone gets hit by a car every 6 seconds.

A few months ago, a good chunk of Devlin's Intro to Math Thinking massive online course devoted itself to the explicit and precise placement of quantifiers. So not only is the above headline judged improperly worded and hence badly ambiguous, but so are commonplaces like "In case of fire do not use elevator."

I'm a fervent believer against ambiguity: let zealotry take its place.

[1] http://terrytao.wordpress.com/2007/06/25/ultrafilters-nonstandard-analysis-a...

-- Kim-Ee
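Made explicit, in the same notation as the exercise above, the headline in (3) has two readings (P is a hypothetical predicate "p is hit by a car at time t"):

    for all t, there exists p: P(p,t)    -- every 6 seconds, somebody or other is hit
    there exists p, for all t: P(p,t)    -- one spectacularly unlucky person

The intended reading is the first; the literal wording leans toward the second.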
participants (8)

- Albert Y. C. Lai
- Alexander Solla
- Andres Löh
- Johannes Waldmann
- Kim-Ee Yeoh
- Richard A. O'Keefe
- Tillmann Rendel
- Tom Ellis