Hi Kim-Ee,

So although the thing in question (whatever's denoted by 'a') is called a type 'variable', it doesn't 'vary' in (I'm guessing) the OO way.

When checking the types in your function, 'a' is fixed, i.e. made rigid, by what's known as a 'universal quantification'. So when you code up the type-checking algorithm, you'd see a crystal-clear similarity: 'a' is treated as if it were a monotype like Bool or String.
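
A tiny sketch of what that rigidity looks like in practice (my own example, not from the original message): inside a polymorphic function the rigid type variable behaves like an opaque monotype, so the body can't assume anything concrete about it.

    -- 'b' is rigid inside mySnd: the body may only pass the value along,
    -- never assume a particular concrete type for it.
    mySnd :: (a, b) -> b
    mySnd (_, y) = y          -- fine: 'y' simply has the rigid type 'b'

    -- By contrast, this would be rejected with a "rigid type variable"
    -- error, because the body tries to pin 'a' down to Bool:
    --
    -- badNot :: (a, b) -> a
    -- badNot (x, _) = not x  -- couldn't match type 'a' with 'Bool'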

Exactly the point where I made the wrong turn in my thinking. Unlike in OO, I think I can imagine the compiler creating a new function for every type that 'a' is used at, and checking against that (mono)type. That's the difference in static type checking.
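
To make that picture concrete, here is a minimal illustration (mine, not from the thread): the one polymorphic definition is type-checked a single time with 'a' held rigid, and each use site then instantiates 'a' at a concrete monotype.

    -- 'identity' is checked once, with 'a' kept abstract.
    identity :: a -> a
    identity x = x

    main :: IO ()
    main = do
      print (identity True)       -- 'a' instantiated to Bool here
      print (identity "haskell")  -- 'a' instantiated to String here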

Overall, I think you're doing really well for someone groping with Haskell in general and its type system in particular. The lingo can be misleading.

Just when I got the feeling I was getting somewhere with my Haskell skills, you're telling me I'm groping. :-)
I think you're probably right. I was so amazed by the language that I started coding as soon as possible, and along the way missed some of the theoretical background.

Personally, I think a lot can be done to fill the gap between trial-and-discovery and reading notationally heavy formal texts like conference papers and textbooks (Pierce's TAPL).

Indeed, and I have read a lot. But one first has to understand the problem in order to understand the solution. Maybe it's time for a re-read.


vlatko