
Michael T. Richter wrote:
On Mon, 2007-28-05 at 10:35 +0100, Andrew Coppin wrote:
Most of the documents that describe these things begin with "suppose we have this extremely complicated and difficult to understand situation... now, we want to do X, but the type system won't let us." Which makes it seem like these extensions are only useful in extremely complicated and rare situations. The fact that my own programs hardly ever result in situations where I want to do X but the type system won't let me only reinforces this idea. Maybe it's just the kind of code I write...
I think this comes from the mathematical background of most people who write papers about Haskell, personally. Math is probably the least competently-taught subject on the planet. (History may share this distinction. May.) There are a lot of purely hypothetical situations ("suppose you'd want to map an enumeration of the set of all blah blah blah blah to the set of blah blah blah") with no sense of motivation or application provided. As a result, to many people (obviously not all!) it looks more like random pen-scratchings with no practical use.
Which is a pity.
Because most of it *is* useful in practical ways. Even fields whose researchers took *pride* in being "absolutely useless" are beginning to show themselves as useful. But the way math is taught makes this connection non-obvious (to put it mildly), and so mathematics as a field gets none of the respect and interest it deserves.
Haskell's leading practitioners and cheerleaders tend to be mathematicians first. There are some who write papers accessible to the work-a-day programmer, but most do not and, as a result, it's often hard to decode the motivation for the language's features and extensions.
Now that's the bad news. The good news is that this is changing. Five years ago, when I first looked at Haskell and gave up, there was almost *nothing* available to teach the language that wasn't purest ivory-tower hypothetical situations. The materials available detailing the language really made it look like you had to have multiple Ph.D.s in maths and copyright law (the latter just because it's the only thing I can think of more complicated than maths ;)) to be smart enough and well-educated enough to use Haskell. This is no longer the case. Simon Peyton-Jones, Don Stewart and a handful of other luminaries in the community (whose names I've temporarily forgotten because I'm lousy with names) are beginning to produce good, high-quality papers on obscure and difficult topics that make these topics -- almost ordinary. Further, just in the last *year* the nature of conversations in haskell-cafe has changed dramatically. I'm seeing a lot more real-world, work-a-day programming questions and answers these days than I did as little as a year ago.
So it's not all hopeless, Andrew (thankfully for obvious dullards like me).
The laughable thing is that mathematics is actually one of my main hobbies! (Hence my question a while back about implementing Mathematica in Haskell.) I've spent lots of time investigating subjects like group theory, differential and integral calculus, complex numbers and their properties, polynomials, cryptology, data compression, ray tracing, sound synthesis, digital signal processing, artificial intelligence, and all manner of other things. (Most normal humans think I'm a total nerd and want nothing to do with me.)

I had little trouble learning Haskell. I had (and still have) lots of trouble figuring out the best way to *use* Haskell, but the language itself is delightfully simple, elegant and natural. I guess elegance appeals to the mathematician in me or something, I don't know. And yet, lots of stuff written about Haskell makes some pretty big assumptions about what you know. (E.g., assuming you know what "second-order logic" is.) Some of it doesn't - and that's the stuff I love reading.

Let's take another look at The Fun of Programming.

- Chapter 1 is a delightful exercise in binary heap trees. It demonstrates everything that's exciting about Haskell. A few dozen lines of code and we have a beautiful, elegant, simple, useful data structure.
- Chapter 2 is... puzzling. Personally I've never seen the point of trying to check a program against a specification. If you find a mismatch, which one is wrong - the program, or the spec?
- Chapter 3 is very interesting... but a little hard going. ("Origami Programming". Did you know there are over 20 kinds of morphisms?)
- Chapter 4 is very similar to something I read somewhere else before, so I skipped it.
- Chapter 5 struck me as being just bizarre. "Hey, rather than write this efficient but complicated function, let's use an inefficient but elegant version of it and append several pages of magical compiler hints so the compiler can transform it into the efficient version." Um... and this is saving you work how, exactly?
- Chapter 6 is fascinating. A little confusing in places, but fascinating nonetheless. ("How to write a financial contract.")
- Chapter 7 is disappointing. Doesn't really say a whole lot. ("Functional Images")
- Chapter 8 is once again captivating. ("Lava")
- Chapter 9 I have already written about. ("Combinators for Logic Programming")
- Chapter 10 is all about arrows. So it's mostly stuff I already know, but it's interesting nonetheless.
- Chapter 11 is interesting, but not something I'm terribly concerned about. ("A prettier printer")
- Chapter 12 is incomprehensible (to me at least). ("Fun with Phantom Types") I've read it several times, and I still couldn't tell you what a phantom type is... (my best guess is sketched at the bottom of this message)

I find this with Haskell books. There are some brilliant bits, some bits that are sort-of interesting but not really to do with anything I'm passionate about, and then there are bits that I can't comprehend...
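For what it's worth, here is as far as I got with phantom types: a little sketch I cooked up myself (the names and the units example are entirely mine, not anything from the chapter), so somebody can tell me whether I'm even in the right ballpark. The idea, as I understand it, is a type parameter that never appears in the actual data - it exists only so the type checker can keep things apart.

{-# LANGUAGE EmptyDataDecls #-}

-- The parameter 'unit' never appears on the right-hand side,
-- so it is a "phantom": it lives only at the type level.
newtype Distance unit = Distance Double
  deriving (Show, Eq)

-- Empty types used purely as tags.
data Metres
data Feet

metres :: Double -> Distance Metres
metres = Distance

feet :: Double -> Distance Feet
feet = Distance

-- Adding distances only type-checks when the units agree.
add :: Distance u -> Distance u -> Distance u
add (Distance x) (Distance y) = Distance (x + y)

-- add (metres 3) (metres 4)   ==> Distance 7.0
-- add (metres 3) (feet 4)     -- rejected by the type checker

At run time both kinds of Distance are just a Double, but the compiler refuses to mix them up. Whether that's actually what the chapter is driving at, I honestly couldn't say.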