Re: [Haskell-cafe] Channel9 Interview: Software Composability and the Future of Languages

On Fri, 26 Jan 2007 17:13:43 -0000 (GMT), you wrote:
[...] It also highlights some of the misconceptions that still exist and need to be challenged, e.g. the idea that Haskell is too hard or is impractical for real work.
Haskell _is_ hard, although I don't think it's _too_ hard, or I wouldn't be here, obviously. Haskell is hard in the sense that in order to take advantage of its ability to better solve your problems, you have to THINK about your problems a lot more. Most people don't want to do that; they want the quick fix served up on a platter. And even the "intermediate" camp, the ones who are willing to invest some effort to learn a better way, are only willing to go so far.

My analogy for this is the Sams PHOTOFACT series (if you're not old enough to already know what these are, visit http://www.samswebsite.com/photofacts.html). With an appropriate Sams PHOTOFACT in hand, and some very basic skills with a voltmeter and maybe an oscilloscope, you can diagnose and repair your TV with virtually no understanding of electronics at all.

The audience for programming languages like Haskell is always going to be small, because it appeals to those who want to understand how the TV works, perhaps to the extent of being able to modify an existing TV or even design one from scratch. And those kinds of people are much fewer and farther between than those who simply want to localize the problem enough to be able to unplug the malfunctioning part and plug in a new one.

It makes sense to publicize Haskell; you can't take advantage of something you've never heard of. But I think evangelical effort is largely wasted. The people who are going to gravitate towards Haskell are the ones who are already searching for something better (they just aren't sure what it is). The rest aren't really interested, and if at some future point they become interested, they'll find the way on their own.

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Steve Schafer wrote:
Neil Bartlett wrote:
It also highlights some of the misconceptions that still exist and need to be challenged, e.g. the idea that Haskell is too hard or is impractical for real work.
Haskell _is_ hard, although I don't think it's _too_ hard, or I wouldn't be here, obviously. Haskell is hard in the sense that in order to take advantage of its ability to better solve your problems, you have to THINK about your problems a lot more. Most people don't want to do that; they want the quick fix served up on a platter. And even the "intermediate" camp, the ones who are willing to invest some effort to learn a better way, are only willing to go so far.
I agree, but I think it should be pointed out that primarily it is not Haskell which is hard, it is programming which is. Haskell only reflects this better than the mainstream imperative languages.

The latter encourage and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense'; it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them.

Haskell (more specifically: its purely functional core) makes such operational reasoning comparatively hard(*). Instead it supports and greatly simplifies denotational resp. equational reasoning(**), i.e. understanding a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. This way of thinking is less familiar to most people and thus appears 'difficult' and even 'impractical' -- it requires a mental shift to a more abstract understanding.(***)

What the hard-core 'common sense' type can't see and won't accept is that denotational reasoning is strictly superior to operational reasoning (at least with regard to correctness), if only because the latter invariably fails to scale with the exploding number of possible system states and execution paths that have to be taken into account for any non-trivial program.

As Dijkstra once said, the main intellectual challenge for computer science resp. programming is: how not to make a mess of it. Those who don't want to think will invariably shoot themselves in the foot, sooner or later. Their programs become a complex, unintelligible mess; the preferred choice of apparently 'easier' or 'more practical' programming languages often (but not always) reflects an inability or unwillingness to appreciate the intellectual effort required to avoid building such a mess.
Cheers
Ben

(*) The downside of which is that it is also quite hard to reason about a program's efficiency in terms of memory and CPU usage.
(**) At least the 'fast and loose' version of it, i.e. disregarding bottom and seq.
(***) Many Haskell newcomers have described the feeling of being 'mentally rewired' while learning programming in Haskell. IMO this is a very good sign!

I thought it was very telling that, at the end of the interview, when
the interview asked, "In general, where is programming going?", the
responses were all things that haskell is good at. Shame it's such an
"impractical" language.
On 1/26/07, Benjamin Franksen wrote:
I agree, but I think it should be pointed out that primarily it is not Haskell which is hard, it is programming which is. Haskell only reflects this better than the mainstream imperative languages. [...]

On Fri, 26 Jan 2007 22:00:11 +0100, you wrote:
I agree, but I think it should be pointed out that primarily it is not Haskell which is hard, it is Programming which is. Haskell only reflects this better than the mainstream imperative languages.
That's very true. Writing good software is hard, regardless of the choice of language. Perhaps what sets Haskell apart is that, unlike most languages, it also makes writing _bad_ software hard. ;)

Steve Schafer
Fenestra Technologies Corp.
http://www.fenestra.com/

Hello Benjamin, Saturday, January 27, 2007, 12:00:11 AM, you wrote:
and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense'; it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them.
greatly simplifies denotational resp. equational reasoning(**), i.e. understanding a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. This way of thinking is less familiar to most people
I think you are completely wrong! The FP way is to represent everything as a function; the imperative way is to represent everything as an algorithm. There is no "natural thinking way"; the only thing that matters is *when* the student learned the appropriate concept.

Let's look: in my college, the notion of algorithm was introduced in the 1st course, *before* any other programming courses. We even studied the Turing machine as an abstract algorithm executor. Just imagine that we had learned Church's lambda calculus instead! And even that isn't required -- the notion of a function is learned in school! I'm pretty sure that Haskell could be learned in the 1st school class, together with mathematics itself, and used as a natural way to write expressions to evaluate.

All the problem of learning the FP paradigm for college-finished programmers is just that their brains are filled with the imperative paradigm and they can't imagine non-imperative programming. It was hard for me, too :) We should change college programs to make FP programming available for the masses.

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Wed, Jan 31, 2007 at 02:40:23 +0300, Bulat Ziganshin wrote:
Hello Benjamin,
Saturday, January 27, 2007, 12:00:11 AM, you wrote:
and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense'; it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them.
greatly simplifies denotational resp. equational reasoning(**), i.e. understanding a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. This way of thinking is less familiar to most people
I think you are completely wrong! The FP way is to represent everything as a function; the imperative way is to represent everything as an algorithm. There is no "natural thinking way"; the only thing that matters is *when* the student learned the appropriate concept. Let's look: in my college, the notion of algorithm was introduced in the 1st course, *before* any other programming courses. We even studied the Turing machine as an abstract algorithm executor.
Neither way may be "natural", but imperative thinking is extremely common in society, I'd say much more so than "functional" thinking. Just think of cooking recipes, IKEA instructions, all the algorithms taught in math classes in grade school. I doubt that we'll ever see functional thinking taught alongside imperative thinking in lower grades in school.

It could even be that the development of the human brain as we grow up reaches a state where it can handle imperative thinking before it can handle functional thinking (I bet there's a reason why astronauts have step-wise instructions rather than a set of "functions").

All I'm trying to say is that imperative thinking is so common outside of CS/math, and we learn it so early on in life, that we probably can consider it the "natural thinking way".

/M

-- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

magnus:
All I'm trying to say is that imperative thinking is so common outside of CS/math and we learn it so early on in life that we probably can consider it the "natural thinking way".
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes] :)

On Wed, 2007-01-31 at 19:51 +1100, Donald Bruce Stewart wrote: . . .
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Nice example. First, note that you can't get close with map -- you need accumulation across the dishes. Second, the correctness of this plan depends on the rather strong frame axiom that no entity in the environment is changed during a step of the fold, so no broken dishes. Finally, that doesn't work so well when there are constraints on the order that the dishes are washed, for example when washing the cruddiest dishes first while there are more suds. -- Bill Wood

Quoth Bill Wood, nevermore,
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Finally, that doesn't work so well when there are constraints on the order that the dishes are washed, for example when washing the cruddiest dishes first while there are more suds.
Ah, but that's why one runs the list 'dishes' through the function 'quickndirtysort' before beginning. It's like quicksort but orders by cruddiness :) Alas, that still doesn't capture the notion of soapywater degenerating into murkywater. cheers, Dougal.

On Jan 31, 2007, at 6:10 , Dougal Stanton wrote:
Quoth Bill Wood, nevermore,
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Finally, that doesn't work so well when there are constraints on the order that the dishes are washed, for example when washing the cruddiest dishes first while there are more suds.
Alas, that still doesn't capture the notion of soapywater degenerating into murkywater.
Only because the definition of wash hasn't been provided; obviously, it returns a new value of water which is less soapy than the initializer soapywater. :) -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

On 1/31/07, Bill Wood wrote:
On Wed, 2007-01-31 at 19:51 +1100, Donald Bruce Stewart wrote: . . .
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Nice example. First, note that you can't get close with map -- you need accumulation across the dishes. Second, the correctness of this plan depends on the rather strong frame axiom that no entity in the environment is changed during a step of the fold, so no broken dishes. Finally, that doesn't work so well when there are constraints on the order that the dishes are washed, for example when washing the cruddiest dishes first while there are more suds.
It also assumes that there's necessarily a natural decomposition on the dishes, and if you think there is, you haven't seen my kitchen! -- Kirsten Chevalier* chevalier@alum.wellesley.edu *Often in error, never in doubt "Sleeping is a gateway drug / to being awake again" -- They Might Be Giants

Kirsten Chevalier wrote:
On 1/31/07, Bill Wood wrote:
On Wed, 2007-01-31 at 19:51 +1100, Donald Bruce Stewart wrote: . . .
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Nice example. First, note that you can't get close with map -- you need accumulation across the dishes. [...]
It also assumes that there's necessarily a natural decomposition on the dishes, and if you think there is, you haven't seen my kitchen!
Using a lazy fold may not be the best choice. Although it may sound enticing to delay washing until a clean dish is actually required (and having the dirty dishes removed by the garbage collector, hopefully before they start naturally decomposing), you will quickly run out of stack space. -k
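Ketil's quip points at a real pitfall: a lazy left fold defers all the work, piling up unevaluated thunks that may exhaust the stack when finally forced. A minimal sketch, with plain sums standing in for dishes (the names lazySum/strictSum are illustrative only):

```haskell
import Data.List (foldl')

-- A lazy foldl builds a chain of a million unevaluated (+) thunks
-- before anything is forced; forcing that chain may then blow the
-- stack. The strict foldl' forces the accumulator at every step and
-- runs in constant space.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- thunk pile-up; risky on long lists
strictSum = foldl' (+) 0   -- accumulator forced at each step

main :: IO ()
main = print (strictSum [1 .. 1000000])
```

Whether the lazy version actually crashes depends on compiler, optimisation level, and stack settings (GHC's strictness analysis can rescue it); the foldl' spelling is safe either way.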

Ketil Malde wrote:
Using a lazy fold may not be the best choice. Although it may sound enticing to delay washing until a clean dish is actually required (and having the dirty dishes removed by the garbage collector, hopefully before they start naturally decomposing), you will quickly run out of stack space.
Sounds like an experience I had with one of my roommates....

On 1/31/07, Kirsten Chevalier wrote:
On 1/31/07, Bill Wood wrote:
On Wed, 2007-01-31 at 19:51 +1100, Donald Bruce Stewart wrote: . . .
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes]
Nice example. First, note that you can't get close with map -- you need accumulation across the dishes. Second, the correctness of this plan depends on the rather strong frame axiom that no entity in the environment is changed during a step of the fold, so no broken dishes. Finally, that doesn't work so well when there are constraints on the order that the dishes are washed, for example when washing the cruddiest dishes first while there are more suds.
It also assumes that there's necessarily a natural decomposition on the dishes, and if you think there is, you haven't seen my kitchen!
In my kitchen, there is a natural decomposition on the dishes. Especially on the ones that have been at the bottom of the pile for the longest time. -Yitz

Donald Bruce Stewart wrote:
magnus:
All I'm trying to say is that imperative thinking is so common outside of CS/math and we learn it so early on in life that we probably can consider it the "natural thinking way".
foldl (\water dish -> wash water dish) soapywater dishes :: [Dishes] ^^ type error
I hope they weren't expensive dishes because as each dish is washed it dissolves into the water never to be seen again! :-) Brian. -- http://www.metamilk.com
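The "type error" remark is right: foldl's result has the type of the accumulator (the water), so the dishes really do dissolve into it. A sketch of a version that keeps them, threading the water with mapAccumL; note that the Water and Dish types and the wash function here are made up for the joke, not defined anywhere in the thread:

```haskell
import Data.List (mapAccumL)

-- Hypothetical types; the thread never defines them.
newtype Water = Water { soapiness :: Int } deriving Show
data Dish = Dirty String | Clean String deriving (Eq, Show)

-- Washing one dish costs a unit of soap and yields a clean dish,
-- so the water does degenerate towards murkywater along the way.
wash :: Water -> Dish -> (Water, Dish)
wash (Water s) (Dirty name) = (Water (s - 1), Clean name)
wash w d@(Clean _)          = (w, d)

-- Unlike the foldl in the joke, mapAccumL returns both the final
-- (murkier) water *and* the washed dishes, so nothing dissolves.
washUp :: Water -> [Dish] -> (Water, [Dish])
washUp = mapAccumL wash
```

For example, washUp (Water 10) [Dirty "plate", Dirty "cup"] yields water of soapiness 8 together with the two clean dishes.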

On Wed, 2007-01-31 at 08:45 +0000, Magnus Therning wrote: . . .
Nneither way may be "natural", but imperative thinking is extremely common in society, I'd say much more than "functional" thinking. Just think of cooking recipes, IKEA instructions, all the algorithms tought in math classes in grade school. I doubt that we'll ever see functional thinking tought alongside imperative thinking in lower grades in school.
It could even be that the development of the human brain as we grow up reaches a state where it can handle imperative thinking before it can handle functional thinking (I bet there's a reason why astronauts have step-wise instructions rather than a set of "functions").
All I'm trying to say is that imperative thinking is so common outside of CS/math and we learn it so early on in life that we probably can consider it the "natural thinking way".
I think you're quite right here. There's a reason that so much of Artificial Intelligence research dealt with Planning, Procedural Knowledge, and Temporal Reasoning. The so-called "Frame Problem" would not have been such a research topic if it weren't for the importance that many researchers placed on reasoning about state change. On the other hand work on declarative knowledge focused on taxonomic hierarchies and semantic nets, which dealt mostly with structural knowledge. A common view was that the two sorts of knowledge complemented each other, and corresponded in a natural way to intuitions (and some epistemological theory) about human reasoning. -- Bill Wood

Hello Magnus, Wednesday, January 31, 2007, 11:45:53 AM, you wrote:
in math classes in grade school. I doubt that we'll ever see functional thinking tought alongside imperative thinking in lower grades in school.
c = a*b
e = c+d
f = e*2

Were you taught to write things this way? In *my* school, we studied the

f = (a*b+d)*2

way :)

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
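In Haskell the two spellings being contrasted are literally interchangeable: the named steps are definitions, not commands, so either form can be substituted for the other. A small sketch (the names fStepwise and fDirect are illustrative, not from the thread):

```haskell
-- The blackboard version with named intermediate results...
fStepwise :: Int -> Int -> Int -> Int
fStepwise a b d = f
  where
    c = a * b
    e = c + d
    f = e * 2

-- ...and the inlined version. Equational reasoning says these are
-- the same function: no state is updated either way, the names are
-- just labels for subexpressions.
fDirect :: Int -> Int -> Int -> Int
fDirect a b d = (a * b + d) * 2
```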

Quoth Bulat Ziganshin, nevermore,
Were you taught to write things this way? In *my* school, we studied the
f=(a*b+d)*2
Also there's the issue of variable substitution. It gets taught very early on when algebra is introduced (at least in the UK):

f = x + 3y and y = sqrt(4 + n)
=> f = x + 3(sqrt(4 + n))

(I can't think of a real example at the moment, sorry!) Unless the calculation for y is extremely arduous, you're always taught to just throw them all together. This really gets across the "equality by definition" thing that pure FP uses. There's no need to do things in any particular order.

This is pretty close to how 'straight' function application works; I don't think we ever did anything like higher-order functions in high-school algebra. Maybe that's the next step --- lambda calculus at age ten and up! :)

Cheers, Dougal.

On Wed, Jan 31, 2007 at 14:05:10 +0300, Bulat Ziganshin wrote:
Hello Magnus,
Wednesday, January 31, 2007, 11:45:53 AM, you wrote:
in math classes in grade school. I doubt that we'll ever see functional thinking tought alongside imperative thinking in lower grades in school.
c = a*b
e = c+d
f = e*2
Were you taught to write things this way? In *my* school, we studied the
f = (a*b+d)*2
way :)
Hmm, are you saying that you learnt to do algebra _before_ using plain numbers?

All I'm trying to get across is that there's probably a good reason why we teach kids that to calculate (4*3+2)*2 you first calculate 4*3, then add 2, then multiply the lot by 2. Each step would be written to paper until the kid learns to do calculations like this "in RAM". I'm suspecting this manner of teaching has a foundation in pedagogy.

/M

-- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

Bulat Ziganshin wrote:
FP way is to represent everything as function, imperative way is to represent everything as algorithm.
Magnus Therning wrote:
Neither way may be "natural", but imperative thinking is extremely common in society, I'd say much more than "functional" thinking. Just think of cooking recipes, IKEA instructions, all the algorithms taught in math classes in grade school.
Those are not imperative thinking - they are sequential thinking. There is nothing non-functional about describing an algorithm in sequential form. Imperative means requiring that the steps always be followed exactly. People do not follow the steps of a recipe exactly - they just use the recipe to understand what needs to be prepared. Same with your other examples.

That's why I think that the opposite is true - declarative style is more natural. Just describe what you want to be computed. If that is best described as a sequence of steps, fine; if not, not. But in either case, you are not forcing a CPU to follow the steps blindly.

-Yitz

On Wed, Jan 31, 2007 at 13:36:02 +0200, Yitzchak Gale wrote:
Bulat Ziganshin wrote:
FP way is to represent everything as function, imperative way is to represent everything as algorithm.
Magnus Therning wrote:
Neither way may be "natural", but imperative thinking is extremely common in society, I'd say much more than "functional" thinking. Just think of cooking recipes, IKEA instructions, all the algorithms taught in math classes in grade school.
Those are not imperative thinking - they are sequential thinking. There is nothing non-functional about describing an algorithm in sequential form.
Sequential thinking would be related to procedural programming, that is, the ordering of statements is important but there's no state. Functional programming is declarative: no order and no state. So, to be strict, I'd say that sequential form _is_ non-functional. At least if FOLDOC is correct and I read and understood things properly. I'm assuming that imperative implies state _and_ order.
Imperative means requiring that the steps always be followed exactly. People do not follow the steps of a recipe exactly - they just use the recipe to understand what needs to be prepared. Same with your other examples.
When cooking we have a state -- the kitchen/workbench/dish. We have a sequence of steps -- the recipe. Yes, it's possible to re-arrange that sequence; cooks do that all the time, based on experience. But that's just optimisation. The optimisation done by a C compiler doesn't change the nature of the language from imperative to procedural or declarative, does it?
That's why I think that the opposite is true - declarative style is more natural. Just describe what you want to be computed. If that is best described as a sequence of steps, fine; if not, not. But in either case, you are not forcing a CPU to follow the steps blindly.
The world has state! Just see what a "stink" that has created in the pure functional language camp! Your arguments have actually strengthened my conviction that we as humans find imperative to be more "natural" than both sequential and declarative. :-) Just too bad that in this case "natural" doesn't correspond with "best" :-(

/M

-- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

Magnus Therning writes:
Sequential thinking would be related to procedural programming, that is, the ordering of statements is important but there's no state. Functional programming is declarative: no order and no state. So, to be strict, I'd say that sequential form _is_ non-functional. At least if FOLDOC is correct and I read and understood things properly. (snip)
You've completely lost me here. Order is /very/ important in functional programming. Consider function composition:

Prelude> ((+1) . (*2)) 5
11
Prelude> ((*2) . (+1)) 5
12

There we have sequencing, and the computation has intermediate state. There's nothing non-functional about the above. (snip)
The world has state! Just see what a "stink" that has created in the pure functional language camp! (snip)
Mmmm. Monads deal with that very nicely, I think, but there's a way to go yet. -- Mark

Hello Mark, Wednesday, January 31, 2007, 5:43:36 PM, you wrote:
You've completely lost me here. Order is /very/ important in functional programming. Consider function composition:
Prelude> ((+1) . (*2)) 5
11
Prelude> ((*2) . (+1)) 5
12
*Evaluation* order isn't defined here. The compiler can first compute "(*2) . (+1)", or it can start with the calculation of "(+1) 5".

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
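The point can be made visible with Debug.Trace: the composition pins down the *result*, while the trace messages (whose printing order is an implementation detail, not something to rely on) show which reductions demand happens to trigger. A sketch; inc and dbl are illustrative names, not from the thread:

```haskell
import Debug.Trace (trace)

-- trace prints its message (to stderr) at the moment the wrapped
-- expression is actually evaluated, exposing the demand-driven order.
inc, dbl :: Int -> Int
inc x = trace "evaluating inc" (x + 1)
dbl x = trace "evaluating dbl" (x * 2)

main :: IO ()
main = print ((dbl . inc) 5)  -- the value is 12 under any evaluation order
```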

If you ask an "ordinary" programmer what the effect is of an assignment like:

x := <expr>

you will get an answer along the lines of: the <expr> is evaluated and the result is stored in a box named "x". If you ask him/her whether something else has happened, the answer will usually be "no"; and this is the wrong answer, since one tends to overlook the fact that one has, by mentioning the variable "x", also indicated that the old value of "x" is no longer needed. As such, imperative programming is "functional programming with explicit garbage collection".

This may have been a necessity in the old days, when memory was scarce and machines slow (in my first course on numeric algorithms we were taught to declare the variables x, y, z, n, m, i, and j, and the array a, and to solve our problems with those only; points would be deducted for making use of more variables, or not reusing whenever possible!). Nowadays I think everyone agrees that automatic garbage collection is a good thing; think about the amount of wasted programmer-years of people figuring out where they have a space leak, etc., and users of programs getting a disappointing feeling about computer science as a whole since apparently we keep delivering programs that stop working at unpredictable moments.

So, one might wonder whether assignments are always bad, bringing us back to the OO discussion. Of course, if we have the following expression:

digest.chew.eat.serve.cook.chop.pluck.kill $ chicken

we all have a definite feeling that after applying the functions, the original object is no longer available, and the FP view does not feel entirely natural.

Doaitse Swierstra
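This reading of assignment can be spelled out directly: each ':=' becomes a fresh binding that shadows the previous one, and deciding when the old value is garbage shifts from the programmer to the collector. A sketch (the function step and the primed names are illustrative only):

```haskell
-- The imperative sequence   x := x + 1; x := x * 2   read functionally:
-- every assignment introduces a new binding. The old x is not
-- destroyed; it merely becomes unreachable, and reclaiming it is the
-- garbage collector's problem, not the programmer's.
step :: Int -> Int
step x =
  let x'  = x + 1    -- x := x + 1
      x'' = x' * 2   -- x := x * 2
  in x''             -- x and x' are now unreachable
```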

Quoth Doaitse Swierstra, nevermore,
digest.chew.eat.serve.cook.chop.pluck.kill $ chicken
we all have a definite feeling that after applying the functions, the original object is no longer available, and the FP view does not feel entirely natural.
Yes, that is true. I can sort of see why people might object to the IO monad in this case, since it explicitly prevents the programmer from reusing old data. Of course, one might argue that it's not really the *monad* that prevents it, but the construction of the physical world. The monad merely prevents us from trying to eat our cake and have it. Is there a computing-realm equivalent of your chicken example? For example, an IO-monadic "removeFile" will obliterate the chicken, but if we load the chicken into memory first we can recreate the same chicken later... I think this is why analogies never stretch very far in computing. ;-) Cheers, Dougal.

Reminds me of an HWN quote from a few weeks back.
Jan 9, 2007 HWN
"*Paul Johnson*: Mutable state is actually another form of manual memory
management: every time you over-write a value you are making a decision that
the old value is now garbage, regardless of what other part of the program
might have been using it."
On 1/31/07, Doaitse Swierstra wrote:
As such, imperative programming is "functional programming with explicit garbage collection". [...]

On Wed, Jan 31, 2007 at 09:43:36 -0500, Mark T.B. Carroll wrote:
Magnus Therning writes:
(snip)
Sequential thinking would be related to procedural programming, that is, the ordering of statements is important but there's no state. Functional programming is declarative: no order and no state. So, to be strict, I'd say that sequential form _is_ non-functional. At least if FOLDOC is correct and I read and understood things properly.
(snip)
You've completely lost me here. Order is /very/ important in functional programming. Consider function composition:
Prelude> ((+1) . (*2)) 5
11
Prelude> ((*2) . (+1)) 5
12
There we have sequencing, and the computation has intermediate state. There's nothing non-functional about the above.
What you have written above are two mathematical expressions:

    2 * 5 + 1
    (1 + 5) * 2

Now, correct me if I'm wrong, but you see a "natural" ordering in your Haskell expressions, right? However, I don't think there's any pre-defined ordering of the _calculations_ the computer performs in your _equations_ above. In the first one there isn't much freedom at all, due to mathematical rules; however, in the second one we have a few choices:

- distribute *2 over the parenthesis
- calculate the left product, calculate the right, sum it up
- calculate the right product, calculate the left, sum it up
- compute the sum in the parenthesis, multiply the two factors

No matter which is chosen, we end up with the same result. So, in the end, we don't care about the sequence. However, the compiler/interpreter has to choose a sequence in order to arrive at a result, since that's how today's computers work. (Choosing well can be seen as optimisation :-) I know the example here is a bit contrived, but I didn't want to move away from your example.

In short, AFAIU:

declarative programming: I don't care how you do it, just get it done!
sequential programming: You'd better do things in this order!
imperative programming: You should do it exactly like this!

Oh, if I've fundamentally misunderstood things then I'd love to hear it :-)

/M -- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

Magnus Therning
Now, correct me if I'm wrong, but you see a "natural" ordering in your haskell expressions, right?
Right.
However the compiler/interpreter has to choose a sequence in order to arrive at a result, since that's how today's computers work. (Choosing well can be seen as optimisation :-)
Right. Ah, I may see my confusion then - I do see a natural ordering in my Haskell expressions, and indeed evaluation order could be different so long as the end result is the same, but I don't find that different to imperative languages - I would expect modern compilers to reorder statements if they can get better results out of it without affecting the answer. (E.g., if I specify a set of assignments where some reordering leaves the answers unchanged, but clusters uses of values better so that the dataflow is such that they can stay in registers, I'd expect the compiler to go ahead - so even where I do specify evaluation order in C or whatever, I don't particularly expect the compiler to respect that in cases where I won't notice that it didn't.) -- Mark

Hello Mark, Wednesday, January 31, 2007, 7:30:33 PM, you wrote:
Ah, I may see my confusion then - I do see a natural ordering in my Haskell expressions, and indeed evaluation order could be different so long as the end result is the same, but I don't find that different to imperative languages - I would expect modern compilers to reorder statements
you said it :) they *reorder* statements, while we don't have any order at all :) (at least in the kitchen ;) practice shows that there is a big difference in the size of the chunks that get "reordered". pure code means that any computations may be reordered, so we have much bigger chunks and don't need to play with the compiler's "safe optimization" options. higher-order functions, lazy evaluation, lazy functional datastructures and special GC add more to this. just try to rewrite the simplest quicksort function in C* and look how much it will be reordered :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
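The quicksort Bulat alludes to is presumably the classic list-comprehension version (shown here for reference; it is not in the original mail). Nothing in it fixes the order in which the two recursive sub-sorts are evaluated:

```haskell
-- The textbook Haskell quicksort: a specification of the result, with no
-- commitment to an evaluation order for the recursive calls.
qsort :: Ord a => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [a | a <- xs, a < x]
               ++ [x]
               ++ qsort [a | a <- xs, a >= x]
```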

Imperative thinking is harder for humans than functional thinking. That is why astronauts need lists of instructions. When they use their natural intuition to solve problems, they are thinking functionally (and don't need a list to do it). Babies learn functional intuition immediately (pattern matching, lazy evaluation) to solve problems. Navigating around objects is hard for a robot but very basic for a baby. Following a prescribed path is basic for a robot but difficult for a child. Our brain is goal-oriented, not process-oriented.

And BTW, a recipe book is a functional, not imperative, program. It is filled with recipes to be evaluated lazily. When it says, "make a white sauce, then chop onions and add to sauce", it means "you need a white sauce but I won't tell you how to make it. Look in the index if you need help (otherwise do it the way you already know how). And if you happen to have onions prechopped (or maybe onion flakes in the spice rack), don't ignore them and run to the store just because I told you to, just use what you have."

The wording is imperative because schooling has distorted our natural functional/relational mode of thinking and devalued it. I for one think that turning the massively parallel, greedily optimizing, lazily evaluating, functional computer/relational database that is our brain into a von Neumann drone is a rather feeble accomplishment by any standard.

Dan Weston

Magnus Therning wrote:
Neither way may be "natural", but imperative thinking is extremely common in society, I'd say much more so than "functional" thinking. Just think of cooking recipes, IKEA instructions, or all the algorithms taught in math classes in grade school. I doubt that we'll ever see functional thinking taught alongside imperative thinking in the lower grades in school.
It could even be that the development of the human brain as we grow up reaches a state where it can handle imperative thinking before it can handle functional thinking (I bet there's a reason why astronauts have step-wise instructions rather than a set of "functions").
All I'm trying to say is that imperative thinking is so common outside of CS/math and we learn it so early on in life that we probably can consider it the "natural thinking way".

On Wed, Jan 31, 2007 at 14:28:00 -0800, Dan Weston wrote:
Imperative thinking is harder for humans than functional thinking. That is why astronauts need lists of instructions. When they use their natural intuition to solve problems, they are thinking functionally (and don't need a list to do it).
Really? May I ask what you base your reasoning on? I don't base mine on any "hard facts" at all, just on what I consider to be reasonable. E.g. I believe that thinking clearly in stressful situations is difficult. I draw a line, perhaps wrongly, from space travel being at the extreme when it comes to stressful situations to NASA using lists of instructions. I think that suggests that when a human finds herself with diminished brain activity it's simply easier to follow a list of instructions. This led me to think that our ability to make up lists of instructions develops first as we grow up.
Babies learn functional intuition immediately (pattern matching, lazy evaluation) to solve problems. Navigating around objects is hard for a robot but very basic for a baby. Following a prescribed path is basic for a robot but difficult for a child. Our brain is goal-oriented, not process-oriented.
AFAIU the human brain is amazing at processing information, especially at throwing away unimportant information. Couldn't the robot's difficulty in moving around objects simply be a result of our inability to mimic the brain's information processing?
And BTW, a recipe book is a functional, not imperative, program. It is filled with recipes to be evaluated lazily. When it says, "make a white sauce, then chop onions and add to sauce", it means "you need a white sauce but I won't tell you how to make it. Look in the index if you need help (otherwise do it the way you already know how). And if you happen to have onions prechopped (or maybe onion flakes in the spice rack), don't ignore them and run to the store just because I told you to, just use what you have."
Still not functional; sequential maybe, but not functional :-) All you've pointed out is that a recipe can have calls to sub-routines (make white sauce), and that we can use lookup tables to find sub-routines. You've also pointed out that we can do _some_ optimisations in the sequencing (e.g. pre-chopping onions), but the sequence is clearly there in the recipe; note your use of the word "then". I'm not sure how a "functional" recipe would look, maybe something like this:

White_sauce is a combination of ... .
Chopped_onions is onions cut into small pieces.
White_sauce_with_chopped_onions is the combination of white_sauce and chopped_onions.
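Magnus's three definitions translate almost word for word into Haskell declarations. In this sketch of mine (the type and all the names are invented for illustration), the last line only says what the dish *is*; there is no "then" anywhere:

```haskell
-- A hedged rendering of the "functional recipe" above.
data Dish = WhiteSauce          -- "a combination of ..."
          | ChoppedOnions       -- "onions cut into small pieces"
          | Combined Dish Dish  -- "the combination of ... and ..."
  deriving (Eq, Show)

whiteSauceWithChoppedOnions :: Dish
whiteSauceWithChoppedOnions = Combined WhiteSauce ChoppedOnions
```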
The wording is imperative because schooling has distorted our natural functional/relational mode of thinking and devalued it. I for one think that turning the massively parallel, greedily optimizing, lazily evaluating, functional computer/relational database that is our brain into a von Neumann drone is a rather feeble accomplishment by any standard.
Again, I'm not convinced. I continue to think that _both_ ways are learnt, but that our brain reaches a level where it can handle imperative thinking before the level where it can handle functional thinking. Again, no "hard facts", just my imperfect observations and imperfect reasoning. /M -- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

Quoth Magnus Therning, nevermore,
I'm not sure how a "functional" recipe would look, maybe something like this:
White_sauce is a combination of ... .
Chopped_onions is onions cut into small pieces.
White_sauce_with_chopped_onions is the combination of white_sauce and chopped_onions.
The functional approach
whitesauce = foldl stir base milks
  where base = flour + (heat butter)
compared with the imperative
whitesauce() {
    base = flour + heat(butter);
    while (milks > 0) {
        stir(base, milk);
        milks--;
    }
}
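Dougal's functional fragment can be made runnable with a few invented definitions (a sketch under assumptions: the Sauce and Milk types are mine; only `whitesauce`, `stir`, `base`, and the milks come from the post):

```haskell
-- A hedged, runnable rendering of the whitesauce fold.
data Milk = Milk

data Sauce = Sauce { ingredients :: [String], milksStirredIn :: Int }
  deriving (Eq, Show)

base :: Sauce
base = Sauce ["flour", "heated butter"] 0  -- "flour + (heat butter)"

stir :: Sauce -> Milk -> Sauce
stir s Milk = s { milksStirredIn = milksStirredIn s + 1 }

milks :: [Milk]
milks = replicate 3 Milk  -- say, three measures of milk

whitesauce :: Sauce
whitesauce = foldl stir base milks
```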
I'm going to go out on a limb here and suggest that, like Feynman's example of how people count in their heads, both of these explanations are accurate. If I were to explain the process to someone it would be in the imperative style: literally "giving commands", which is what a recipe is. But in my mind I imagine it as the gradual process of stirring milk into a base, which is far more adequately described in the functional style. The question is --- how would an expert describe such a process? Would a professional chef give instructions in the functional or imperative style? I think that is relevant, since the approach to the problem may change depending on proficiency. We may *learn* in the imperative style but *think* in the functional. Cheers, Dougal.

The question is --- how would an expert describe such a process? Would a professional chef give instructions in the functional or imperative style?
I think a sufficiently expert chef would not even need the functional style. Everything would be declarative. Dave Thomas (of "Pragmatic Programmers" fame) tells of finding his late grandmother's recipe cards, which she accumulated over her entire life. He was able to track their evolution from an extremely pedantic, imperative style, through to the almost Zen-like cards that read: "Spice cake: like chocolate cake. No chocolate, add spice".

neil:
The question is --- how would an expert describe such a process? Would a professional chef give instructions in the functional or imperative style?
I think a sufficiently expert chef would not even need the functional style. Everything would be declarative.
Dave Thomas (of "Pragmatic Programmers" fame) tells of finding his late grandmother's recipe cards, which she accumulated over her entire life. He was able to track their evolution from an extremely pedantic, imperative style, through to the almost Zen-like cards that read:
"Spice cake: like chocolate cake. No chocolate, add spice".
Surely this is the arrow or monad transformer of recipe abstractions! Entirely new functionality, and such information density, on a single line. -- Don
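For fun, the grandmother's card really does read like a Haskell record update (a playful sketch; every name here is invented):

```haskell
-- "Spice cake: like chocolate cake. No chocolate, add spice."
data Cake = Cake { chocolate :: Bool, spice :: Bool }
  deriving (Eq, Show)

chocolateCake :: Cake
chocolateCake = Cake { chocolate = True, spice = False }

spiceCake :: Cake
spiceCake = chocolateCake { chocolate = False, spice = True }
```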

On Thu, Feb 01, 2007 at 09:24:03 +0000, Dougal Stanton wrote: [..]
I'm going to go out on a limb here and suggest that, like Feynman's example of how people count in their heads, both of these explanations are accurate. If I were to explain the process to someone it would be in the imperative style: literally "giving commands", which is what a recipe is. But in my mind I imagine it as the gradual process of stirring milk into a base, which is far more adequately described in the functional style.
The question is --- how would an expert describe such a process? Would a professional chef give instructions in the functional or imperative style? I think that is relevant, since the approach to the problem may change depending on proficiency. We may *learn* in the imperative style but *think* in the functional.
There may be something in what you say. I kind of like it :-) Of course it makes me wonder if we have to _learn_ how to _learn_ in the functional style. I am convinced _some_ people are able to learn in that way, I doubt all are. /M -- Magnus Therning (OpenPGP: 0xAB4DFBA4) magnus@therning.org Jabber: magnus.therning@gmail.com http://therning.org/magnus

[sorry, this was meant to go to the list] On Wednesday 31 January 2007 00:40, Bulat Ziganshin wrote:
Saturday, January 27, 2007, 12:00:11 AM, you wrote:
and support operational reasoning, i.e. creating and understanding programs by mentally modeling their execution on a machine. This form of reasoning appeals to 'common sense'; it is familiar to almost all (even completely un-educated) people and is therefore more easily accessible to them.
greatly simplifies denotational resp. equational reasoning(**), i.e. understanding a program as an abstract formula with certain logical properties; an abstract entity in its own right, independent of the possibility of execution on a machine. This way of thinking is less familiar to most people
i think you are completely wrong! the FP way is to represent everything as a function, the imperative way is to represent everything as an algorithm. there is no "natural thinking way"; the only thing that matters is *when* the student learned the appropriate concept.
What I meant is that it is closer to the way we usually think in our daily life. No one thinks about day-to-day practical problems in a formal way -- in most cases this would be a completely inappropriate approach. Formal thinking comes naturally to only a gifted few of us; most find it exceptionally hard to learn. However, I didn't mean to say that formal reasoning is something that cannot be learned. I would even say that it is easier to learn if done right from the start as something completely new, instead of appealing to intuition and thus trying to connect it to the more 'natural' ways of thinking -- because the latter have to be 'unlearned' to a certain extent before the new way of thinking can take hold.
all the problem of learning FP paradigm for college-finished programmers is just that their brains are filled with imperative paradigm and they can't imagine non-imperative programming. it was hard for me, too :)
we should change college programs to make FP programming available for the masses
Absolutely. Cheers Ben -- "Programming = Mathematics + Murphy's Law" (Dijkstra in EWD1008)

On 1/26/07, Steve Schafer
On Fri, 26 Jan 2007 17:13:43 -0000 (GMT), you wrote:
world. It also highlights some of the misconceptions that still exist and need to be challenged, e.g. the idea that Haskell is too hard or is impractical for real work.
Haskell _is_ hard, although I don't think it's _too_ hard, or I wouldn't be here, obviously. Haskell is hard in the sense that in order to take advantage of its ability to better solve your problems, you have to THINK about your problems a lot more. Most people don't want to do that; they want the quick fix served up on a platter. And even the "intermediate" camp, the ones who are willing to invest some effort to learn a better way, are only willing to go so far.
My analogy for this is the Sams PHOTOFACT series (If you're not old enough to already know what these are, visit http://www.samswebsite.com/photofacts.html). With an appropriate Sams PHOTOFACT in hand, and some very basic skills with a voltmeter and maybe an oscilloscope, you can diagnose and repair your TV with virtually no understanding of electronics at all.
The audience for programming languages like Haskell is always going to be small, because it appeals to those who want to understand how the TV works, perhaps to the extent of being able to modify an existing TV or even design one from scratch. And those kind of people are much fewer and farther between than those who simply want to localize the problem enough to be able to unplug the malfunctioning part and plug in a new one.
It makes sense to publicize Haskell; you can't take advantage of something you've never heard of. But I think evangelical effort is largely wasted. The people who are going to gravitate towards Haskell are the ones who are already searching for something better (they just aren't sure what it is). The rest aren't really interested, and if at some future point they become interested, they'll find the way on their own.
You have a PhD in computer science from Princeton, so your measure of what's "hard" and what isn't in this regard is nearly worthless. I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done. Maybe they want a solution that meets them half-way, one that doesn't require that they understand how to build their own resistors and capacitors in order to make their TV work again (to use your analogy). That's what Meijer means when he says that Haskell is too hard. Collin Winter

On 1/26/07, Collin Winter
You have a PhD in computer science from Princeton, so your measure of what's "hard" and what isn't in this regard is nearly worthless.
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done.
I've never taken a graduate-level class in category theory, or any course on category theory, and I'm a Haskell implementor. So perhaps the people who think they need to take graduate-level classes in category theory in order to use Haskell are barking up the wrong tree (or perhaps I'm not a very good Haskell implementor, which is always possible.)
Maybe they want a solution that meets them half-way, one that doesn't require that they understand how to build their own resistors and capacitors in order to make their TV work again (to use your analogy). That's what Meijer means when he says that Haskell is too hard.
On the other hand, Meijer also has a PhD in computer science... is his judgment on Haskell's difficulty or lack thereof worthless, too? If not, then surely, judgments about whether Haskell is too hard can't have much to do with who has a PhD and who doesn't. Cheers, Kirsten -- Kirsten Chevalier* chevalier@alum.wellesley.edu *Often in error, never in doubt "Would you be my clock if I promise not to hang you / Too close to the window or the picture of the pope? / I won't set you back and I won't push you forward / I just want to look in your face and see hope" -- Dom Leone

catamorphism:
On 1/26/07, Collin Winter
wrote: You have a PhD in computer science from Princeton, so your measure of what's "hard" and what isn't in this regard is nearly worthless.
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done.
I've never taken a graduate-level class in category theory, or any course on category theory, and I'm a Haskell implementor. So perhaps the people who think they need to take graduate-level classes in category theory in order to use Haskell are barking up the wrong tree (or perhaps I'm not a very good Haskell implementor, which is always possible.)
I haven't done any graduate level category theory either, and I hack Haskell 24/7! Let's form a union!! -- Don

Hello Donald, Saturday, January 27, 2007, 10:18:44 AM, you wrote:
I've never taken a graduate-level class in category theory, or any course on category theory, and I'm a Haskell implementor. So perhaps
I haven't done any graduate level category theory either, and I hack Haskell 24/7! Let's form a union!!
yes! we hate category theory!!! :) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hello Kirsten, Saturday, January 27, 2007, 10:05:15 AM, you wrote:
On the other hand, Meijer also has a PhD in computer science... is his judgment on Haskell's difficulty or lack thereof worthless, too? If not, then surely, judgments about whether Haskell is too hard can't have much to do with who has a PhD and who doesn't.
let's count that he is implementor of another language ;) -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Fri, 26 Jan 2007 22:01:09 -0600, you wrote:
You have a PhD in computer science from Princeton, so your measure of what's "hard" and what isn't in this regard is nearly worthless.
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done. Maybe they want a solution that meets them half-way, one that doesn't require that they understand how to build their own resistors and capacitors in order to make their TV work again (to use your analogy). That's what Meijer means when he says that Haskell is too hard.
I never said that people are lazy, only that they're not interested--they'd rather devote their time to other pursuits. That's not meant to be a criticism at all; it's an observation of The Way Things Work. And I certainly wasn't referring to the people who actually _complain_ about Haskell being too hard. After all, they're the ones who are in fact trying to learn, not the ones who simply don't care to. My comment was directed specifically at the notion that we can make Haskell more popular by somehow making it easier. _That's_ the argument that I believe is flawed, because the things that give value to Haskell are precisely the things that require effort to understand. To paraphrase Albert Einstein: make it as simple as possible, but no simpler.

This is not just a programming issue; I encounter the same phenomenon pretty much everywhere. I'm currently trying to build a house, and I've found that most of the people who are in the business of building houses don't want to try anything new, even if it might be a better way that has the potential to save them money and/or result in a better end product. They want to continue building houses the way they've always built houses. The fraction of house builders who do want to learn new and possibly better ways of building is, and always will be, a small one. Again, it's not a criticism, just an observation, something that one needs to be aware of when embarking on a house-building project.

I'm reminded of when Java first appeared on the scene a little over ten years ago. There was a huge upswell of interest; all of the web developers out there were going to add Java applets to their web sites. Then the awful truth came out: Oops, you had to learn how to _program_ to create applets. Never mind; not worth it. And applets quickly and quietly faded away.

By the way, to set the record straight, while I do have a Ph.D. from Princeton, it's in semiconductor physics, not computer science.
And lest anyone were to misunderstand when I say Haskell is hard, while I do mean that it's hard for _me_, I think the evidence is pretty strong that I'm far from alone in my assessment. Whether or not it's hard for any particular person is up to that individual to judge, of course. I stick with Haskell because (a) I think it just might be worth it, and (b) I can't help myself--I have an insatiable craving for new knowledge. Steve Schafer Fenestra Technologies Corp. http://www.fenestra.com/

Maybe they just don't want to have to take graduate-level classes in category theory to get their job done.
If I wanted to purchase a large, complex, and unique, physical system (like a new bridge, say), you can be sure that I would employ an engineer who had taken graduate classes on all sorts of technical stuff related to bridge-building. Why should I accept that large, complex, and unique software systems should be built by non-experts, who shy away from education that could help them "get their job done" better? I imagine that if bridge-designers could insert a legal clause in their contract stating "THIS BRIDGE IS ACCEPTED AS IS, WITHOUT ANY WARRANTY..." then we might see fewer engineers bothering to take graduate classes as well. There is a myth that software is easy, and anything that seems to make it more difficult can be brushed off. But what if in fact the difficulties are inherent? Then tools that reveal the difficulty (and help to tame it) should be welcomed. Regards, Malcolm

On Sat, Jan 27, 2007 at 02:04:32PM -0500, Steve Schafer wrote:
This is not just a programming issue; I encounter the same phenomenon pretty much everywhere: I'm currently trying to build a house, and I've found that most of the people who are in the business of building houses don't want to try anything new, even if it might be a better way that has the potential to save them money and/or result in a better end product. They want to continue building houses the way they've always built houses. The fraction of house builders who do want to learn new and possibly better ways of building is, and always will be, a small one.
This may change with the arrival of the first home building robots we've been reading about recently. They won't mind building in new ways, so the human builders will be forced to choose between becoming more flexible or changing their jobs. http://property.timesonline.co.uk/article/0,,14029-2546574,00.html Best regards Tomasz

On Jan 26, 2007, at 23:01 , Collin Winter wrote:
You have a PhD in computer science from Princeton, so your measure of what's "hard" and what isn't in this regard is nearly worthless.
Uh, I don't have a degree, and discussions about mathy stuff like category theory generally go flying way over my head. Somehow I still managed to learn Haskell and seem to be getting fairly good at it. -- brandon s. allbery [linux,solaris,freebsd,perl] allbery@kf8nh.com system administrator [openafs,heimdal,too many hats] allbery@ece.cmu.edu electrical and computer engineering, carnegie mellon university KF8NH

On 1/26/07, Collin Winter
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done. Maybe they want a solution that meets them half-way, one that doesn't require that they understand how to build their own resistors and capacitors in order to make their TV work again (to use your analogy). That's what Meijer means when he says that Haskell is too hard.
I have an odd background in programming (never took a compsci class, did take a graduate computational physics course in my first year of undergrad, taught by an old nuclear physicist who used nothing but Forth... beat that for bizarre), but since physicists represent an interesting conglomeration of snapshots of the mainstream of programming through the ages, I do have a bit of an archaeological perspective on how people think about "hard."

In case the following seems far-fetched, let me assure you that four years ago I worked in a high energy physics group on a codebase of a few hundred thousand lines of FORTRAN 77, with no documentation. It had GOTOs. The folks working on this regarded C as a newfangled, difficult language, and some of them were still writing FORTRAN IV or RATFOR.

So here's my completely anecdotal view of the history of "hard" in programming: In the beginning there was GOTO. Well, actually there was machine language, then assembler, but we'll skip that. GOTO was king. And then there was this movement among the ivory tower computer scientists called "structured programming." It was considered difficult and complicated, with lots of hard concepts. And then the compsci departments made a concerted effort and taught a generation of students in the new style, with no reference whatsoever to what came before. Structured programming suddenly became blasé. How else would you program?

And then similar, though smaller and often partial, repetitions of this break happened: designing nicer data structures when the Wirth languages came to their height; object oriented programming (which I regard as one of the great intellectual failures of programming).

So when the kids are presented with exercises on functors, ADTs, and monads in intro compsci, and the professors pretend that it was always this way, that there was never any other way, then they will grow up, they will go forth, and these will be the everyday things.
Until then, I will continue to hear people say that map is a scary, unfamiliar object that makes the meaning of code obscure. -- Frederick Ross Graduate Fellow, (|Siggia> + |McKinney>)/sqrt(2) Lab The Rockefeller University

Frederick Ross wrote:
here's my completely anecdotal view of the history of "hard" in programming...
This history is accurate and insightful.
...when the kids... and the professors pretend that it was always this way... then they will grow up... Until then, I will continue to hear people say that [Haskell] is... scary
It does not have to wait until then. And it should not. The resulting mess across the entire software industry would really be a shame. Haskell is _not_ inherently hard - any more than any other programming language. But it is different. So right now, Haskell is hard only because we need more documentation that is designed to make Haskell seem easy.

Literature for university students sometimes needs to give the message: "Things are not as easy as they look; you need to learn to think in new ways." Much of the Haskell literature is written in this style. Literature for programmers in industry needs to give the opposite message: "This is easy, even fun. You can start getting real work done right away. And then, look at all of the advantages."

This interview illustrates how much people are noticing Haskell of late - Haskell is hot. If there were more written in the second style, a lot more people would start using Haskell. I think it can be done. Some nice work has been done in this direction, but it is only a start.
they characterize Haskell as being an impractical language, only useful for research... programming in Haskell is too hard... like assembly programming!
These are people whose opinions should not be taken lightly. So then, how can they say these things that to us seem so obviously false? The answer, in my opinion, is that it's not the language. It's the documentation. -Yitz

Haskell is _not_ inherently hard - any more than any other programming language. But it is different. So right now, Haskell is hard only because we need more documentation that is designed to make Haskell seem easy.

Well, I think it's harder to get a program compiled in Haskell than in Java, for example. It's not too hard, although debugging might be a little difficult. I think this has to do with the type system. Although it sounds bad, it's actually a good thing. Once you get a Haskell program compiled, chances are much higher that it's correct. You do a bit more work up front, but the chances of a bug are way lower.
The way I see it, programming in Haskell is an investment. If you're from an OOP background, some (trivial) things might take a lot more time. But in the end, it does make you more productive. You spend less time tracking bugs, and it's easier to refactor. Not everybody is willing to make an investment when they've got something that works. For example, if you know one imperative language, you can switch to another without taking too much risk. On the other hand, if you switch to a language that is completely different, you don't know if it will get your job done. It doesn't feel safe. -chris

On Sun, 2007-28-01 at 17:28 +0100, Chris Eidhof wrote:
For example, if you know one imperative language, you can switch to another without taking too much risk. On the other hand, if you switch to a language that is completely different, you don't know if it will get your job done. It doesn't feel safe.
I first looked at Haskell about... 2001? While I was still working in the grinding world of too-short this and too-little that. I looked at Haskell (and a whole bunch of other languages/technologies) because I knew that my imperative languages couldn't get the job done at all. I was facing increasingly unmanageable code bases (the Anti-Pattern in question was called "Lava Flow", paired with "Golden Hammer" for the most part) and the alternative languages I was looking at initially were just slight improvements on the situation. Haskell was one language I looked at, but in the end I rejected it and adopted Dylan instead. (Just in time to watch it die. :() And the biggest problem with Haskell? There was little in the way of useful and practical explication of The Haskell Way. I started, given that I could actually have the free time now, looking at Haskell again about a year ago. (It's a major point in Haskell's favour that it always stuck around in my mind after first encountering and rejecting it, incidentally!) The documentation situation is better now, yes. Far better. But still not useful for people who are actively developing software in the C++/Java/C#/crap-"solution"-of-the-day daily grind. Now, at least, there actually is useful information. Finding it, however, takes up valuable time when you've got Yet Another Laughable Deadline ahead of you from your boss who is still unfamiliar with the Triangle of Quality (Fast <=> Cheap <=> Good : pick any two) and who has no patience for your strange ideas which you can't explain because you yourself aren't sure of the ideas yet. And I'm positive that this drives people off. I'm positive because it drove me off and I'm far more prone to experimentation with oddball technologies than most of the people I've ever worked with. I'm very serious about the need for a "Haskell for the Working Programmer" book. And by this I mean a book and not a tutorial on some part of Haskell which proves difficult. 
And I'm also serious about potentially writing such a book myself when I get confident enough to work at it. Because despite some of the things I've said in this thread, I really like Haskell and what I see as The Haskell Way. I want to see Haskell -- or something very, very, very similar -- take over from crap like C++. I'm thinking a structure along the lines of using YAHT as an opening section, followed by some of the Haskell for <Foo> Programmers information in a second, followed by a decent reference and closing with a cookbook of Haskell solutions to real-world problems (networking, database, general I/O, common algorithms, etc.) featuring Best Known Approaches (which remain, nonetheless approachable) to them. -- Michael T. Richter Email: ttmrichter@gmail.com, mtr1966@hotpop.com MSN: ttmrichter@hotmail.com, mtr1966@hotmail.com; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: mtr1966@jabber.cn "Thanks to the Court's decision, only clean Indians or colored people other than Kaffirs, can now travel in the trams." --Mahatma Gandhi

On 29/01/07, Michael T. Richter
I started, given that I could actually have the free time now, looking at Haskell again about a year ago. (It's a major point in Haskell's favour that it always stuck around in my mind after first encountering and rejecting it, incidentally!) The documentation situation is better now, yes. Far better. But still not useful for people who are actively developing software in the C++/Java/C#/crap-"solution"-of-the-day daily grind. Now, at least, there actually *is* useful information. Finding it, however, takes up valuable time when you've got Yet Another Laughable Deadline ahead of you from your boss who is still unfamiliar with the Triangle of Quality (Fast <=> Cheap <=> Good : pick any two) and who has no patience for your strange ideas which you can't explain *because you yourself aren't sure of the ideas yet*. And I'm positive that this drives people off. I'm positive because it drove me off and I'm far more prone to experimentation with oddball technologies than most of the people I've ever worked with.
Agreed. I'm in the same boat. I can't realistically use Haskell at work until I can pull together the building blocks *fast*. That means, half an hour grabbing a reference and fiddling a bit to get a trivial database query working. Another half hour checking up on how I bolt together the boilerplate needed to add logging to a server. Etc. It *has* to be possible to do this - I can do it in Python, or Perl, or even Java. So if I can't do it in Haskell, then I'm back stuck with one of those languages. I may not like it, but it's a practical reality in my job. I can try to learn at home (and I do!) but I don't have the same sort of issues at home, or the resources to simulate them (100+ databases running over a WAN that I need to query simultaneously, in parallel, just to provide viable response times, for example :-)). So home problems end up being "toy" problems, and I'm only learning for the intellectual exercise (a good reason, but not helping with work). I'm very serious about the need for a "Haskell for the Working Programmer" book. And by this I mean a book and not a tutorial on some part of Haskell which proves difficult.
Agreed. Something I can keep on my desk for reference, as well as browsing. ... and closing with a cookbook of Haskell solutions to real-world problems (networking, database, general I/O, common algorithms, etc.) featuring Best Known Approaches (which remain, nonetheless approachable) to them.
Yes, a cookbook approach is an excellent resource. It doesn't have to cover everything, but it needs to focus on practical problems, and "just do this, and you're started" solutions. The other thing that is needed, IMHO - and this is visibly improving, but can still get better - is availability and ease of use of libraries. I'd like to be able to "just download" a Windows binary for libraries I need - logging, DB access, HTTP, client access to Windows COM objects, a GUI framework, sending emails, etc ... Again, it's the "I need it now" issue - I can spend time searching out or even developing such things, but when I don't even have the time to write the code I need for my job (hence wanting a high-level language like Haskell to speed up development :-)) what chance do I have if I have to spend time locating libraries as well? Python does well here with its "batteries included" approach, and the cheeseshop package index. Perl has CPAN, but quality and ease of installation issues have hit me there in the past. Haskell seems to be getting there, but it feels to me like there's a "critical mass" point it still has to get over. But there's clearly an interest in the community for this type of thing, so I suspect it's only a matter of time. I'll stop rambling now. Paul.

Hello Paul, Monday, January 29, 2007, 5:06:42 PM, you wrote:
I'm very serious about the need for a "Haskell for the Working Programmer" book. And by this I mean a book and not a tutorial on some part of Haskell which proves difficult.
Agreed. Something I can keep on my desk for reference, as well as browsing.
how about developing a "contents" for this book and for cookbook? just a list of problems you want to learn how to solve. perhaps, then someone can fill some chapters. haskellwiki is a good place to start it -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Wed, 2007-31-01 at 03:25 +0300, Bulat Ziganshin wrote:
I'm very serious about the need for a "Haskell for the Working Programmer" book. And by this I mean a book and not a tutorial on some part of Haskell which proves difficult.
Agreed. Something I can keep on my desk for reference, as well as browsing.
how about developing a "contents" for this book and for cookbook? just a list of problems you want to learn how to solve. perhaps, then someone can fill some chapters. haskellwiki is a good place to start it
I disagree with this part. Books written by committee lack cohesion unless they have an overbearing editor at the helm. What I've seen on the Wiki as regards idioms, standard practices, etc. -- and this is true of every language wiki I've ever seen -- is one proposed answer and then a chain of "but you could do it this way instead" with ever-increasingly-obscure ways of doing things. Which would be the precise opposite of what a "Haskell for the Working Programmer" book should be like. A book like this should be clear, straightforward and provide an introduction to Haskell for a working programmer, but an introduction that gets said programmer up and on the job quickly. After using the (possibly even less than ideal) solutions provided in the book, the reader can then comfortably hone the newfound skills provided. I do like the idea of developing a table of contents first and backfilling it, though. I would amend the process, however, to avoid the WikiBloat that seems to inevitably follow when documentation projects get too open. Instead of Wikifying it, I'd suggest instead that a "call for proposals" be put on the mailing list. "I'm working on a chapter dealing with database programming. I need to know how to do <insert whatever> in Haskell. Could anybody interested in helping please submit some commented, functioning code for possible inclusion?" Then the submissions could be made by email (and possibly even discussed on-list) and the editor/author of the book can take an executive decision and choose one if there are competing camps. -- Michael T. Richter Email: ttmrichter@gmail.com, mtr1966@hotpop.com MSN: ttmrichter@hotmail.com, mtr1966@hotmail.com; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: mtr1966@jabber.cn "I have never seen the poor people being so familiar with their heads of state as they were with [Michele Duvalier]. It was a beautiful lesson for me. I've learned something from it." --Mother Theresa

[I apologize for odd quoting, but I dislike sending html emails] "I do like the idea of developing a table of contents first and backfilling it, though. I would amend the process, however, to avoid the WikiBloat that seems to inevitably follow when documentation projects get too open. Instead of Wikifying it, I'd suggest instead that a "call for proposals" be put on the mailing list. "I'm working on a chapter dealing with database programming. I need to know how to do <insert whatever> in Haskell. Could anybody interested in helping please submit some commented, functioning code for possible inclusion?" Then the submissions could be made by email (and possibly even discussed on-list) and the editor/author of the book can take an executive decision and choose one if there are competing camps." Instead of having someone work in solitude with occasional mailings back and forth on the list, I would rather have an open wiki for the collection of ideas from everyone. Then, if you really wanted, a single person can use those to create an 'editor edition' page that is only editable by them. Best of both worlds, or do you see it differently? -- Joe Re

On 31/01/07, Joe Re
Instead of having someone work in solitude with occasional mailings back and forth on the list, I would rather have an open wiki for the collection of ideas from everyone. Then, if you really wanted, a single person can use those to create an 'editor edition' page that is only editable by them. Best of both worlds, or do you see it differently?
That's roughly how the O'Reilly "Python Cookbook" was done - open website for recipes, with the "best" taken and tidied up, filled out and published as a book by an editorial team. From my POV it worked well - the book is an excellent resource, and the website is still going strong as well. Paul.

Hello Michael, Wednesday, January 31, 2007, 4:50:17 AM, you wrote:
I disagree with this part. Books written by committee lack cohesion unless they have an overbearing editor at the helm.
i've provided several urls of wiki pages written by us together, and these pages seem to be very popular and well appreciated -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On 31/01/07, Michael T. Richter
I disagree with this part. Books written by committee lack cohesion unless they have an overbearing editor at the helm. What I've seen on the Wiki as regards idioms, standard practices, etc. -- and this is true of every language wiki I've ever seen -- is one proposed answer and then a chain of "but you could do it this way instead" with ever-increasingly-obscure ways of doing things. Which would be the precise *opposite* of what a "Haskell for the Working Programmer" book should be like.
I agree up to a point - there's a huge risk that increasingly "advanced" solutions will obscure the point. A reasonable amount of editorial control could keep this in check, though. A book like this should be clear, straightforward and provide an *introduction* to Haskell for a working programmer, but an introduction that gets said programmer up and on the job quickly. After using the (possibly even less than ideal) solutions provided in the book, the reader can then comfortably hone the newfound skills provided.
I do like the idea of developing a table of contents first and backfilling it, though. I would amend the process, however, to avoid the WikiBloat that seems to inevitably follow when documentation projects get too open. Instead of Wikifying it, I'd suggest instead that a "call for proposals" be put on the mailing list. "I'm working on a chapter dealing with database programming. I need to know how to do <insert whatever> in Haskell. Could anybody interested in helping please submit some commented, functioning code for possible inclusion?" Then the submissions could be made by email (and possibly even discussed on-list) and the editor/author of the book can take an executive decision and choose one if there are competing camps.
Possibly. From my own point of view, though, I'm usually only after fairly simple snippets of code - certainly not enough for a book chapter (although they could be grouped into chapters by topic). Here's a slightly extended example: a key snippet for me would be "execute a query against a database and get the results". Naively, I see this as an IO action, much like reading a file. So I'd expect something like

    connect :: String -> String -> String -> IO DBHandle
    connect user password db = {- some magic to connect and give me a DB handle -}

    close :: DBHandle -> IO ()
    close dbh = {- magic to close the handle -}

    query :: DBHandle -> String -> IO [String]
    query dbh sql = {- more magic to do the query and return results -}

I'd use this as

    do dbh <- connect "system" "manager" "my_oracle_db"
       results <- query dbh "select * from all_users"
       close dbh
       return results

(I'd see it as being in the IO monad, so that I can intersperse other IO actions. Maybe that's not necessary, but that's a different recipe - how do I intersperse actions in 2 different monads - that I'd class as quite advanced). It's not remotely functional, it's likely not to be "the best way" and it's probably making the advanced users squirm, but that's not the point. It fits my beginner's mental model - I'm new to Haskell, I've grasped the concept of IO and probably a little bit of how monads are great for structuring code (I'd probably write my own "auto-close" control structure for this - and soon after, find that it already exists!) but I'm not really worried. To me, Haskell is brilliant for slinging data around in clever ways. So this is just annoying boilerplate I need to get the data into my program - it doesn't even have to be efficient, because I'm only grabbing a few tens of rows here (getting data lazily would be a *separate* recipe, that I'd look at for the million-row query I'll try once I feel like getting more advanced). The point of this is that to me, pulling data out of databases is *boring*.
The interesting bit is manipulating that data - and that's the bit that drew me to Haskell because other languages make the data manipulation tedious and boring as well. So I want easily tailorable snippets of code to get me past the boring stuff - I'm busy learning interesting things elsewhere in Haskell at the moment. Of course, in the above you could easily substitute any number of tasks for "querying a database". Grabbing web pages, displaying a GUI, etc. The point is the same - they aren't (yet) the interesting bit. Sorry, that went on a bit - I hope it was useful nevertheless. Paul.
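The "auto-close" control structure mentioned above does indeed already exist: Control.Exception.bracket acquires a resource, runs an action, and releases the resource even if an exception escapes. A minimal sketch, reusing the hypothetical connect/close/query names from the database example (the bodies here are dummy stand-ins, not any real database library):

```haskell
import Control.Exception (bracket)

-- Stand-ins for the hypothetical API; a real library would talk to
-- an actual database here.
data DBHandle = DBHandle String

connect :: String -> String -> String -> IO DBHandle
connect _user _password db = return (DBHandle db)  -- dummy

close :: DBHandle -> IO ()
close _dbh = return ()  -- dummy

query :: DBHandle -> String -> IO [String]
query _dbh _sql = return ["SYS", "SYSTEM"]  -- dummy result rows

-- bracket guarantees `close` runs whether the body succeeds or throws.
withDB :: String -> String -> String -> (DBHandle -> IO a) -> IO a
withDB user password db = bracket (connect user password db) close

main :: IO ()
main = do
  results <- withDB "system" "manager" "my_oracle_db" $ \dbh ->
    query dbh "select * from all_users"
  print results
```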

On 31/01/07, Michael T. Richter
I disagree with this part. Books written by committee lack cohesion unless they have an overbearing editor at the helm.
While that might be a problem for a 'Haskell for the Working Programmer Book' it wouldn't be a problem at all for a Cookbook style book (which I believe is what the OP was referring to here) IMO. The point of the Cookbook is that all the 'recipes' should be self contained as far as is practical, they are 'dipping in' books rather than 'read right through' books, and in this case the cohesion isn't so important. As I understand it, the Python Cookbook (published by O'Reilly) was created in large part from user contributions ( http://aspn.activestate.com/ASPN/Python/Cookbook/ ). Rob

On Fri, 2007-26-01 at 22:01 -0600, Collin Winter wrote:
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done.
That would be a good way to think, yes. I look for practical solutions to real problems in my work. Haskell looks like it has these solutions, but extracting them from the noise level of ivory tower debates is a real problem. I think many of the users of Haskell forget that there are a lot of people out there who are not career academics working with pure mathematics day-in and day-out. My last math class? Was over fifteen years ago. And in, you know, the real world of programming you don't face mathematical problems as your bread and butter. You face problems in a messy world of too-short deadlines, too-few resources, too-poorly-communicated requirements and too-many-hours work. I would like to see a guide to Haskell that shows Haskell's place in those problems, not in Yet Another Elegant Mathematical Proof of the utterly useless in real life. Now given my privileged position of having escaped that ghetto of "toos" five years ago, I actually have the time and the energy to work this stuff out on my own. But your average working programmer? Just doesn't have the damned time to decode the arcane and obtuse wording and mathematics to get to what in the end turns out to be concepts verging on the trivial. (I'm looking at monads here....) Maybe I'm the one that has to write the book "Haskell for the Working Programmer" sometime. You know. When I understand the language enough to write it. -- Michael T. Richter Email: ttmrichter@gmail.com, mtr1966@hotpop.com MSN: ttmrichter@hotmail.com, mtr1966@hotmail.com; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: mtr1966@jabber.cn "I have never seen the poor people being so familiar with their heads of state as they were with [Michele Duvalier]. It was a beautiful lesson for me. I've learned something from it." --Mother Theresa

Hi, As I am taking a break from writing code and doing laundry, here are my thoughts. Restating the obvious: I agree with you that it is amazing how use of the word "Monad" has brought out so many people's feelings towards math. Obligatory disclaimer: Like many people, I have learned to write code in Haskell and I never took a course in Category theory. Nor do I have a CS degree. My math and physics degrees were not applicable to learning Haskell. I had never used a similar type system or type inference. The combination of the new type system and the new approach to IO took a little while to sink in. (I had learned BASIC, Pascal, Turbo Pascal with Objects, C, some Fortran, C++, Scheme, Java) But I think the biggest advance in Haskell is the explicit separation of IO operations enforced by the type system / compiler. Other languages may get type inference and may get closer to parametric polymorphism (java generics) and they may even get STM but they are unlikely to be able to evolve this IO separation and maintain much backward compatibility. Michael T. Richter wrote:
On Fri, 2007-26-01 at 22:01 -0600, Collin Winter wrote:
I find it incredibly insulting for you to assert that people who complain about Haskell's difficulty are too lazy and aren't really interested in a better solution. Maybe they just don't want to have to take graduate-level classes in category theory to get their job done.
It is probably true that having such a course would let you directly understand the mathematical structure. But such understanding is different from knowing how to read and write Haskell programs.
That would be a good way to think, yes. I look for practical solutions to real problems in my work. Haskell looks like it has these solutions, but extracting them from the noise level of ivory tower debates is a real problem.
There are CS PhD people who publish papers and who talk about Haskell. These are largely irrelevant to reading and writing Haskell. Some papers are relevant to designing extensions to Haskell, but unless you are trying to build a new Haskell compiler extension these are not needed.
I think many of the users of Haskell forget that there are a lot of people out there who are not career academics working with pure mathematics day-in and day-out.
GHC seems to be developed by several people at Microsoft Research. They are not career academics. I am less familiar with the other compilers. For example: The latest extension proposal on the haskell-cafe mailing list is about View patterns. This is all about making the code easier and clearer to write and to encourage interfaces that hide implementation details. That is about practical changes to help people and not about academics.
My last math class? Was over fifteen years ago. And in, you know, the real world of programming you don't face mathematical problems as your bread and butter.
If you have already created the algorithm or data structure then you are right. If you can simply reuse existing libraries and solutions you are right. But creating algorithms is a type of math. And Haskell exposes this better than most languages.
You face problems in a messy world of too-short deadlines, too-few resources, too-poorly-communicated requirements and too-many-hours work.
The bottom line: Learning Haskell without a background in similar languages takes time. If you cannot take that time from the world, then Haskell will be very hard to acquire. The previous post about educating the next generation at Universities makes the same point. Aside: One could make a good case that there is a synergy between learning Haskell and learning to use the up-and-coming features of Visual Basic (LINQ) and C# (STM) and Java (Generics vs Type Classes and Parametric Polymorphism) and even Perl 6 (since it is taking so many ideas from Haskell). And thus taking the time to learn Haskell is useful. Haskell is not the one and only cutting edge source of such language features, but it does let you use many of them simultaneously and has good compilers available today. Note that I have not mentioned laziness. This is because it only helps to solve problems more elegantly -- other languages can model infinite computations / data structures when it is useful to do so.
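As a small illustration of that last point (an editor's sketch, not from the thread): in Haskell an infinite structure is an ordinary value, and a consumer forces only the prefix it needs. The classic Fibonacci stream:

```haskell
-- An infinite list of Fibonacci numbers, defined in terms of itself.
-- Laziness means only the demanded prefix is ever computed.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 8 fibs)  -- [0,1,1,2,3,5,8,13]
```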
I would like to see a guide to Haskell that shows Haskell's place in those problems, not in Yet Another Elegant Mathematical Proof of the utterly useless in real life.
Perhaps: http://haskell.org/haskellwiki/Why_Haskell_Matters There are many learning links at haskell.org, though many resources may be newer than when you learned Haskell: http://haskell.org/haskellwiki/Haskell_in_5_steps http://haskell.org/haskellwiki/Learning_Haskell (such as "Haskell Tutorial for C Programmers") http://haskell.org/haskellwiki/Books_and_tutorials (such as "Programming in Haskell" published January 31, 2007) (such as http://haskell.org/haskellwiki/Hitchhikers_Guide_to_the_Haskell ) The ivory tower stuff you want to avoid is at: http://haskell.org/haskellwiki/Research_papers Aside on utterly useful proofs: When you write concurrent programs you want an API with strong and useful guarantees so you can avoid deadlocks and starvation and conflicting data read/writes. Designing and using such an API is a reasoning exercise identical to creating proofs. Some systems make this possible and even easy (such as using STM).
Now given my privileged position of having escaped that ghetto of "toos" five years ago, I actually have the time and the energy to work this stuff out on my own. But your average working programmer? Just doesn't have the damned time to decode the arcane and obtuse wording and mathematics to get to what in the end turns out to be concepts verging on the trivial. (I'm looking at monads here....)
Right. If it had been called the "Binding Chain Design Pattern" then no one would have gone nuts over the fact it had anything to do with advanced math. But this design pattern was not trivial: it took years to standardize on using a Monad for IO in Haskell. This may have been a result of too small a community of users.
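A sketch of that "binding chain" reading, using the Maybe monad (an editor's example, not from the thread): each (>>=) feeds one step's result into the next, and a Nothing anywhere short-circuits the rest of the chain.

```haskell
-- Chain two lookups; (>>=) passes each result onward, and any
-- failed lookup (Nothing) aborts the remaining steps.
lookupSum :: [(String, Int)] -> Maybe Int
lookupSum env =
  lookup "x" env >>= \x ->
  lookup "y" env >>= \y ->
  return (x + y)

-- The same chain in do-notation, which is sugar for (>>=):
lookupSum' :: [(String, Int)] -> Maybe Int
lookupSum' env = do
  x <- lookup "x" env
  y <- lookup "y" env
  return (x + y)

main :: IO ()
main = do
  print (lookupSum [("x", 1), ("y", 2)])  -- Just 3
  print (lookupSum [("x", 1)])            -- Nothing
```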
Maybe I'm the one that has to write the book "Haskell for the Working Programmer" sometime. You know. When I understand the language enough to write it.
You could start small by adding sections to the learning and tutorial pages at the wiki... -- Chris

On 1/28/07, Chris Kuklewicz
I think many of the users of Haskell forget that there are a lot of people out there who are not career academics working with pure mathematics day-in and day-out.
GHC seems to be developed by several people at Microsoft Research. They are not career academics. I am less familiar with the other compilers.
There's no need for the "seems to be". The excellent "History of Haskell" paper: http://research.microsoft.com/~simonpj/papers/history-of-haskell/index.htm discusses the history of every extant Haskell compiler, in section 9. If I'm reading correctly, every one of these compilers (except possibly for jhc, it's not clear) began as an academic research project at a university -- that includes GHC, which was developed by academics at the University of Glasgow (hence the "G" in "GHC"), and it was only later that some of them moved to MSR. Be careful about conflating "academic" with "not practical". It's true that academics haven't always had time to explain the useful, practical techniques they discover in ways that are understandable by programmers who don't have formal mathematical backgrounds (and be careful about conflating "having a PhD" with "having mathematical background or experience", too). In the same way that programmers have jobs to do and thus have limited time to puzzle out new languages, academics have jobs that don't tend to reward them for spending time making those puzzles clearer to practitioners. As a challenge to everyone posting on this thread: rather than excoriating academia for its sins, why not start creating the documentation (or tutorials or libraries or applications) you wish to see? Cheers, Kirsten -- Kirsten Chevalier* chevalier@alum.wellesley.edu *Often in error, never in doubt "No one's actually said 'O great America, thank you for saving us from the evil communist bug-eyed aliens, and, can we have fries with that?' yet have they?" -- Debra Boyask

Chris Kuklewicz wrote:
Note that I have not mentioned laziness. This is because it only helps to solve problems more elegantly -- other languages can model infinite computations / data structures when it is useful to do so.
Reminds me of yet another quote from Dijkstra (http://www.cs.utexas.edu/users/EWD/transcriptions/EWD12xx/EWD1284.html): "After more than 45 years in the field, I am still convinced that in computing, elegance is not a dispensable luxury but a quality that decides between success and failure." Ben

Chris Kuklewicz wrote:
Aside on utterly useful proofs: When you write concurrent programs you want an API with strong and useful guarantees so you can avoid deadlocks and starvation and conflicting data read/writes. Designing and using such an API is a reasoning exercise identical to creating proofs. Some systems make this possible and even easy (such as using STM).
I have to -- partly -- disagree with your last statement. It is possible that I missed something important but as I see it, STM has by construction the disadvantage that absence of starvation is quite hard (if not impossible) to prove. A memory transaction must be re-started from the beginning if it finds (when trying to commit) another task has modified one of the TVars it depends on. This might mean that a long transaction may in fact /never/ commit if it gets interrupted often enough by short transactions which modify one of the TVars the longer transaction depends upon. IIRC this problem is mentioned in the original STM paper only in passing ("Starvation is surely a possibility" or some such comment). I would be very interested to hear if there has been any investigation of the consequences with regard to proving progress guarantees for programs that use STM. My current take on this is that there might be possibilities to avoid this kind of starvation by appropriate scheduling. One idea is to assign time slices dynamically, e.g. to double it whenever a task must do a rollback instead of a commit (due to externally modified TVars), and to reset the time slice to the default on (successful) commit or exception. Another possibility is to introduce dynamic task priorities and to increase priority on each unsuccessful commit. (Unfortunately I have no proof that such measures would remove the danger of starvation, they merely seem plausible avenues for further research. I also don't know whether the current GHC runtime system already implements such heuristics.) Cheers Ben

Benjamin Franksen wrote:
Chris Kuklewicz wrote:
Aside on utterly useful proofs: When you write concurrent programs you want an API with strong and useful guarantees so you can avoid deadlocks and starvation and conflicting data read/writes. Designing and using such an API is a reasoning exercise identical to creating proofs. Some systems make this possible and even easy (such as using STM).
I have to -- partly -- disagree with your last statement. It is possible that I missed something important but as I see it, STM has by construction the disadvantage that absence of starvation is quite hard (if not impossible) to prove. A memory transaction must be re-started from the beginning if it finds (when trying to commit) another task has modified one of the TVars it depends on. This might mean that a long transaction may in fact /never/ commit if it gets interrupted often enough by short transactions which modify one of the TVars the longer transaction depends upon.
True. The STM API ensures the program will be deadlock free, the mutable state is consistent, and that progress will always be made (i.e. _some block_ will commit). The current STM implementation does not otherwise ensure fairness or avoid starvation. It is clever enough to put retrying blocks to sleep until one of the TVars they read is changed. But it does not have the single-wake up FIFO fairness of MVars.
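A minimal sketch of those semantics (an editor's example, assuming the stm package that ships with GHC): `check` calls `retry` when its condition fails, and the runtime puts the block to sleep until one of the TVars it read is changed by some committed transaction.

```haskell
import Control.Concurrent.STM

-- Move funds between two TVars atomically. If the balance is too
-- low, `check` retries the transaction; the runtime sleeps the
-- block until `from` is written by another committed block.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  check (balance >= amount)
  writeTVar from (balance - amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 30)
  readTVarIO a >>= print  -- 70
  readTVarIO b >>= print  -- 30
```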
IIRC this problem is mentioned in the original STM paper only in passing ("Starvation is surely a possibility" or some such comment). I would be very interested to hear if there has been any investigation of the consequences with regard to proving progress guarantees for programs that use STM.
The STM API ensures progress for the program as a whole (i.e. commits happen). If a block fails because of a conflict, that is because another block has committed. If all the threads of the program call retry then the program is stuck, but that is not STM's fault, and the STM scheduler will kill the program in this case rather than hang.
My current take on this is that there might be possibilities to avoid this kind of starvation by appropriate scheduling. One idea is to assign time slices dynamically, e.g. to double a task's time slice whenever it must do a rollback instead of a commit (due to externally modified TVars), and to reset the time slice to the default on (successful) commit or exception.
With the new GHC using multiple CPUs, this would not prevent a fast block from running and committing on a different CPU and causing the long block to retry.
Another possibility is to introduce dynamic task priorities and to increase priority on each unsuccessful commit.
(Unfortunately I have no proof that such measures would remove the danger of starvation, they merely seem plausible avenues for further research. I also don't know whether the current GHC runtime system already implements such heuristics.)
Cheers Ben
What you need in order to "increase fairness/priority" for a block is to run that block until it tries to commit, while keeping other blocks that would interfere from committing. The scheduler could take the runtime of a failed block into account when re-attempting it. It could lock out other blocks that might interfere with a very-long-delayed, much-re-attempted block until it has had a chance to commit. (Since the scheduler knows which TVars the problem block has needed in the past, any independent blocks can still run.) To keep the progress guarantee: if the problem block takes a very, very long time, it must be timed out and another block allowed to commit. This partitioning into "fast" vs. "slow" blocks is merely based on the observations of the runtime scheduler.
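A crude application-level approximation of this "lock out the interferers" idea can be written with the STM API itself: a gate TVar that short writers must observe, which a privileged transaction closes before running. This is only a sketch of the idea under stated assumptions; the names (`withGate`, `runPrivileged`) are invented here and are not part of the STM API, and a real runtime-level solution would not need cooperation from the writers.

```haskell
import Control.Concurrent.STM

-- Short writers wrap their transactions in 'withGate': while the
-- gate is closed they block (via 'check', which calls retry).
withGate :: TVar Bool -> STM a -> STM a
withGate gate act = do
  open <- readTVar gate
  check open -- retry until the gate is open
  act

-- A much-delayed transaction closes the gate, runs with the
-- cooperating writers held off, then reopens the gate.
runPrivileged :: TVar Bool -> STM a -> IO a
runPrivileged gate act = do
  atomically (writeTVar gate False)
  r <- atomically act
  atomically (writeTVar gate True)
  return r

main :: IO ()
main = do
  gate <- newTVarIO True
  counter <- newTVarIO (0 :: Int)
  -- A writer would run: atomically (withGate gate (modifyTVar' counter (+1)))
  n <- runPrivileged gate (readTVar counter)
  print n
```

Note the weakness of doing this in user code: writers already past their gate check when the gate closes can still commit once, which is exactly the kind of race a scheduler-level mechanism would not have.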

On Sun, 2007-28-01 at 15:39 +0000, Chris Kuklewicz wrote:
Right. If it had been called the "Binding Chain Design Pattern" then no one would have gone nuts over the fact it had anything to do with advanced math.
But this design pattern was not trivial: it took years to standardize on using a Monad for IO in Haskell. This may have been a result of too small a community of users.
No design pattern worth using (under any paradigm!) was trivial to create. But you've hit the nail on the head with the naming. Calling it a "Monad" and referring to an "obscure branch of mathematics" every time you talk about it is just not the way to sell the pattern. My 2001-or-so rejection of Haskell had a lot to do with running into that "obscure branch" phrase far too often for my own mental health.
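The "binding chain" reading is easy to show without any mathematics: an IO program is literally a chain of `>>=` ("bind") steps, and do-notation is only sugar for that chain. A small sketch (the function names are made up for the example):

```haskell
-- The same program written as an explicit chain of binds...
greetChain :: IO ()
greetChain =
  putStrLn "Who are you?" >>= \_ ->
  getLine                 >>= \name ->
  putStrLn ("Hello, " ++ name)

-- ...and with do-notation, which desugars to exactly the chain above.
greetDo :: IO ()
greetDo = do
  putStrLn "Who are you?"
  name <- getLine
  putStrLn ("Hello, " ++ name)

main :: IO ()
main = greetChain
```

Presented this way it looks like any other sequencing idiom, which is arguably how it should have been sold.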
Maybe I'm the one that has to write the book "Haskell for the Working Programmer" sometime. You know. When I understand the language enough to write it.
You could start small by adding sections to the learning and tutorial pages at the wiki...
I could, yes. Except the stuff that I can document there is already documented. Actually, much of what I think Haskell needs documented is already documented; it just needs to be put together into a coherent whole so prospective users aren't left sifting through a myriad of sources to find the few understandable (to them) gems. And having a cookbook of practical solutions wouldn't hurt either, along the lines of the Ruby Cookbook or Python Cookbook or, hell, even the MySQL Cookbook.
-- Michael T. Richter Email: ttmrichter@gmail.com, mtr1966@hotpop.com MSN: ttmrichter@hotmail.com, mtr1966@hotmail.com; YIM: michael_richter_1966; AIM: YanJiahua1966; ICQ: 241960658; Jabber: mtr1966@jabber.cn "Sexual organs were created for reproduction between the male element and the female element -- and everything that deviates from that is not acceptable from a Buddhist point of view. Between a man and man, a woman and another woman, in the mouth, the anus, or even using a hand." --The Dalai Lama

Michael T. Richter wrote:
And in, you know, the real world of programming you don't face mathematical problems as your bread and butter.
Can you prove that? ;)
You face problems in a messy world of too-short deadlines, too-few resources, too-poorly-communicated requirements and too-many-hours work.
In its essence, the way of mathematics is to solve an infinite number of problems at once by generalizing and simplifying them until they read "1 == 1". But that's exactly the kind of stuff you need: thanks to generalization, you have already implemented all requirements before the customer can even conceive them, and thanks to simplification, needed resources and hours of work shrink to reasonable amounts, rendering deadlines harmless :) Well, I mean it seriously.

- Coding a complicated configuration system, no doubt baroque, can it be simplified by basing it on a simple but powerful macro language with simple and sane semantics? Can it be based on the lambda calculus? Is this general enough? Maybe you want to assure that every macro terminates?
- Coding a graphical user interface with lots of forms, can they be reduced to their essence and generated automatically from a data type? Perhaps in the style of Functional Forms (www.st.cs.ru.nl/papers/2005/eves2005-FFormsIFL04.pdf)? Are they general enough? If they require immediate feedback or interactive error checking, may tangible values (http://conal.net/papers/Eros/) be an idea to build on?
- Coding a dynamic website and having to control caching and data mining, can this logic be separated out, restricting yourself to a programming model that allows this to happen transparently? Can you plunder Google's MapReduce model for that (www.cs.vu.nl/~ralf/MapReduce/paper.pdf)?
- Coding database access or a package management system, can data integrity be assured by again restricting yourself to a less general programming model? Like Software Transactional Memory? Or is it just enough to use strong typing and a simple yet clever data structure (http://www.galois.com/cufp/slides/2006/CliffordBeshers.pdf)?

The structures behind the repetitions, the generalizations to rule them all, the simplifications to find them, they all lie there.
But they may resist discovery and you may need cleverness and, well, mathematics to find them. The point about Haskell is that its type system is pure and rich enough to enable you to actually express the proof, the insight as a program. Only few programming languages can do that. And you know: computers and Haskell itself are products of mathematics. Regards, apfelmus
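The macro-language-on-the-lambda-calculus idea mentioned above fits in a handful of lines. This is a toy sketch (the type and function names are invented here, and capture-avoiding substitution is deliberately omitted): an untyped lambda-calculus core that a configuration language could be desugared into.

```haskell
-- A toy untyped lambda-calculus core for a configuration macro
-- language: variables, abstraction (macro definition), application.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving (Eq, Show)

-- Naive substitution: only safe if bound variable names are unique.
-- A real implementation would rename to avoid variable capture.
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- Call-by-name evaluation. Termination is the caller's problem,
-- which is exactly why one might want a terminating fragment.
eval :: Term -> Term
eval (App f a) = case eval f of
  Lam x b -> eval (subst x a b)
  f'      -> App f' a
eval t = t

main :: IO ()
main = print (eval (App (Lam "x" (Var "x")) (Var "y")))
```

The interesting design question is then the one apfelmus raises: whether to restrict this core (e.g. to a simply-typed or otherwise terminating fragment) so that every macro provably terminates.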

Hello Steve, Friday, January 26, 2007, 10:03:09 PM, you wrote:
Haskell _is_ hard, although I don't think it's _too_ hard, or I wouldn't ...
The audience for programming languages like Haskell is always going to be small, because it appeals to those who want to understand how the TV works,
i don't think so :) imho, we just don't have good _teachers_. in the 70's, the OOP audience was also small, but it was popularized later and now every student knows about polymorphism via inheritance. but most OOP programmers don't reinvent the wheel, they just use the "patterns" described in OOP bestsellers. i have positive experience of making "complex" concepts easy and available for a wide audience ([1]-[5]); [1] was even used to teach students in some college. and i guess that good Haskell books, such as YAHT and the printed ones, also make it easy to learn Haskell. but we need to gather much more attention to Haskell to make it as "patternized" as structured programming and OOP. _nowadays_ there is not even one "advanced Haskell" or "Haskell in the Real World" book, and this means that anyone who wants to learn Haskell in depth should study those terrible papers (well, it's very like higher education in Russia - no one really teaches you at our colleges, so you should either learn yourself or die :) but this means that at least those who are still alive are Real Machos :) the same applies to Haskell - right now the only way to learn it is to teach yourself, so we all definitely are cool guys. once i even got a C# job offer only because i know Haskell :)

[1] http://haskell.org/haskellwiki/IO_inside
[2] http://haskell.org/haskellwiki/OOP_vs_type_classes
[3] http://haskell.org/haskellwiki/Modern_array_libraries
[4] http://haskell.org/bz/th3.htm
[5] http://haskell.org/bz/thdoc.htm

-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com
participants (30)
- Andrew Wagner
- apfelmus@quantentunnel.de
- Benjamin Franksen
- Bill Wood
- Brandon S. Allbery KF8NH
- Brian Hulley
- Bulat Ziganshin
- Chris Eidhof
- Chris Kuklewicz
- Collin Winter
- Dan Weston
- Doaitse Swierstra
- dons@cse.unsw.edu.au
- Dougal Stanton
- Frederick Ross
- Joe Re
- Ketil Malde
- Kirsten Chevalier
- Magnus Therning
- Malcolm Wallace
- mark@ixod.org
- mgsloan
- Michael T. Richter
- Neil Bartlett
- Paul Moore
- Rob Crowther
- Seth Gordon
- Steve Schafer
- Tomasz Zielonka
- Yitzchak Gale