Why functional programming matters

Friends

Over the next few months I'm giving two or three talks to groups of *non* functional programmers about why functional programming is interesting and important. If you like, it's the same general goal as John Hughes's famous paper "Why functional programming matters".

Audience: some are technical managers, some are professional programmers; but my base assumption is that none already know anything much about functional programming.

Now, I can easily rant on about the glories of functional programming, but I'm a biased witness -- I've been doing this stuff too long. So this message is to ask for your help, especially if you are someone who has a somewhat-recent recollection of realising "wow, this fp stuff is so cool/useful/powerful/etc".

I'm going to say some general things, of course, about purity and effects, modularity, types, testing, reasoning, parallelism and so on. But I hate general waffle, so I want to give concrete evidence, and that is what I particularly want your help with. I'm thinking of two sorts of "evidence":

1. Small examples of actual code. The goal here is (a) to convey a visceral idea of what functional programming *is*, rather than just assume the audience knows (they don't), and (b) to convey an idea of why it might be good. One of my favourite examples is quicksort, for reasons explained here: http://haskell.org/haskellwiki/Introduction#What.27s_good_about_functional_p...

But I'm sure that you each have a personal favourite or two. Would you like to send them to me, along with a paragraph or two about why you found it compelling? For this purpose, a dozen lines of code or so is probably a maximum.

2. War stories from real life. E.g. "In company X in 2004 they rewrote their application in Haskell/Caml with result Y". Again, for my purpose I can't tell very long stories; but your message can give a bit more detail than one might actually give in a presentation. The more concrete and specific, the better. E.g. what, exactly, about using a functional language made it a win for you?

If you just reply to me, with evidence of either kind, I'll glue it together (regardless of whether I find I can use it in my talks), and put the result on a Wiki page somewhere. In both cases pointers to blog entries are fine.

Quite a lot of this is FP-ish rather than Haskell-ish, but I'm consulting the Haskell mailing lists first because I think you'll give me plenty to go on; and because at least one of the talks *is* Haskell-specific. However, feel free to reply in F# or Caml if that's easier for you.

Thanks!

Simon

Simon Peyton-Jones wrote:
Friends
Over the next few months I'm giving two or three talks to groups of *non* functional programmers about why functional programming is interesting and important. If you like, it's the same general goal as John Hughes's famous paper "Why functional programming matters".
Audience: some are technical managers, some are professional programmers; but my base assumption is that none already know anything much about functional programming.
Now, I can easily rant on about the glories of functional programming, but I'm a biased witness -- I've been doing this stuff too long. So this message is to ask for your help, especially if you are someone who has a somewhat-recent recollection of realising "wow, this fp stuff is so cool/useful/powerful/etc". [...]
I think I qualify for this: I've been a long-time C coder (and still do some), doing mostly UNIX/system stuff, most notably working on the FreeBSD kernel. I only recently (one year, maybe one and a half) started learning Haskell, so I still have fresh memories of my first impressions.

One of the things that particularly impressed me was parametric polymorphism. As a C programmer, you tend to rewrite code to deal with linked lists every time you need one, adding next pointers to some structure of yours. This kind of thing gets boring fast (one could also use the BSD sys/queue.h macros, but they're ugly). So when I discovered parametric polymorphism, and how you can have, for instance, a "length :: [a] -> Int" function working for any kind of list, I was _very_ impressed. In a way, it is all perfectly logical to be able to have this kind of function, since length doesn't need to know anything about what the list is holding, but that doesn't make it any less impressive for a C coder.

Still on the parametric polymorphism topic, the Maybe datatype is what impressed me next. Everyone who has done some amount of C in his life knows how tedious it is to have functions return errors to the caller. If you're lucky, your function returns a pointer and can thus return NULL in case something went wrong. If your function returns an int and -1 has no meaning, you can use that too. But if you don't, it becomes messy quickly. It's even more annoying when the standard got it wrong, for instance the dreaded atoi() function. So when I discovered the Maybe datatype, I was like a kid opening presents at Christmas :-). The extreme simplicity of the definition of the Maybe datatype is impressive in itself, its convenience is as well, and the fact that it prevents a whole class of bugs by making it (nearly) impossible to ignore the fact that the function can fail is the cherry on top.

Here are my $0.02 :-). I hope they'll be useful to you. This is mostly emotional and subjective stuff rather than technical, but I believe this is also what you're after.

Maxime
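For concreteness, here is a minimal sketch of the kind of thing Maxime describes (illustrative code, not taken from his message): the Prelude defines Maybe in a single line, and a safe counterpart to C's atoi built on it forces the caller to deal with failure.

-- The whole definition, as it appears in the Prelude:
--   data Maybe a = Nothing | Just a

-- A safe counterpart to C's atoi: the Maybe in the type makes the
-- failure case (nearly) impossible to ignore.
safeAtoi :: String -> Maybe Int
safeAtoi s = case reads s of
               [(n, "")] -> Just n
               _         -> Nothing

-- ghci> safeAtoi "42"        ==>  Just 42
-- ghci> safeAtoi "fortytwo"  ==>  Nothing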

On Wed, 23 Jan 2008, Simon Peyton-Jones wrote:
1. Small examples of actual code. The goal here is (a) to convey a visceral idea of what functional programming *is*, rather than just assume the audience knows (they don't), and (b) to convey an idea of why it might be good. One of my favourite examples is quicksort, for reasons explained here: http://haskell.org/haskellwiki/Introduction#What.27s_good_about_functional_p...
But I'm sure that you each have a personal favourite or two. Would you like to send them to me, along with a paragraph or two about why you found it compelling? For this purpose, a dozen lines of code or so is probably a maximum.
With respect to the other thread about exception handling, I can add that C++ programmers strongly disagree on whether exceptions or error return codes are the right way. Also, Niklaus Wirth considered exceptions to be the reincarnation of GOTO and thus omitted them from his languages. Maybe the programmers will like to hear that Haskell solves the problem in the diplomatic way: functions return error codes, but the handling of error codes does not uglify the calling code. Maybe the programmers can recognize the power of the language if you show them that Haskell does not need additional language support in order to implement classical exception handling. I have extended the article on the Wiki with a short example implementation of exceptions: http://www.haskell.org/haskellwiki/Exception
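To give a flavour of what such an implementation looks like (a minimal sketch with invented names, not the wiki article's actual code), exceptions can be modelled as an ordinary data type, with throwing, catching and sequencing as plain functions:

-- Exceptions as a library type; no special language support needed.
data Result e a = Failure e | Success a
  deriving Show

throw :: e -> Result e a
throw = Failure

catchResult :: Result e a -> (e -> Result e a) -> Result e a
catchResult (Failure e) handler = handler e
catchResult ok          _       = ok

-- Sequencing: stop at the first failure, otherwise keep going.
andThen :: Result e a -> (a -> Result e b) -> Result e b
andThen (Failure e) _ = Failure e
andThen (Success x) k = k x

-- ghci> (throw "boom" `andThen` \x -> Success (x + 1)) `catchResult` \_ -> Success 0
-- Success 0

This is essentially the Either type under another name; packaged as a monad, the error-propagation plumbing disappears from the calling code entirely.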

On Wed, 2008-01-23 at 13:29 +0000, Simon Peyton-Jones wrote:
1. Small examples of actual code. The goal here is (a) to convey a visceral idea of what functional programming *is*, rather than just assume the audience knows (they don't), and (b) to convey an idea of why it might be good.
Hello, You might want to look at this article: http://www.ibm.com/developerworks/java/library/j-cb01097.html and other articles from developerWorks that deal with functional programming. They are written by programmers who work in industry and usually have an imperative background. These articles should be very useful for someone who wants to explain functional programming to newbies, as they use vocabulary and examples meaningful to imperative programmers. Cheers, Michał

Here are the things I liked most (compared with standard imperative languages) when I started to learn functional programming:

* algebraic types with pattern matching work nicely as "tagged unions" (a small sketch follows below); doing a tagged union manually in C/C++/C# is a pain (there is no automatic tag, provided you dismiss RTTI over objects)
* lambda expressions, lexical scoping, closures; doing this in C++ is an incredible amount of typing; it is a bit better in C# but still not good enough

I happen to have nice examples of code like map (\x->x*2) [1,2,3] -- but with an array, in C/C++/ML/C#, each written as natively as possible. They are attached.

Other things did not seem that great to me from the beginning. For example: referential transparency - it just enforces what you can take care not to do yourself (e.g. in C# you just cannot be sure some function is referentially transparent even when a comment claims so - which of course sucks, because programmers are not disciplined). Or another example: nontrivial higher-order functions looked more damaging than helpful, because understanding their usage and interactions is harder. Not to mention monads (user-level understanding of uniqueness typing, with all the annotations, is easier than monads).

I found ML/Clean/Haskell much more appealing than the common imperative languages and even Prolog (with which I could produce usably quick code only after actually using intimate knowledge of the inference and sprinkling cuts around, but that does not feel declarative to me any more then).

Peter.
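As a small illustration of the "tagged union" point (an editorial sketch, not one of the attached examples): the constructor is the tag, and the compiler can warn when a pattern match forgets a case.

-- A tagged union in two constructors; no manual tag field to maintain.
data Shape = Circle Double
           | Rectangle Double Double

area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h

In C the same thing needs a struct with an explicit enum tag plus a union, and the discipline to keep the two in sync by hand.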

On 1/23/08, Peter Hercek
Other things did not seem that great for me from the beginning. For example: referential transparency - just enforces what you can take care not to do yourself
...if you never make mistakes, that is.
(e.g. in C# you just cannot be sure some function is referentially transparent even when comment claims so - which of course sucks because programmers are not disciplined).
But if that's the point you're trying to make, I agree that a lot of programmers seem to think they don't make mistakes, and thus might not be receptive to the siren song of referential transparency :-) Cheers, Tim -- Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt "You never have to aspire to difficulty, darling. It arrives, uninvited. Then it stays for dinner."--Sue Miller

catamorphism:
[...]
But if that's the point you're trying to make, I agree that a lot of programmers seem to think they don't make mistakes, and thus might not be receptive to the siren song of referential transparency :-)
Yes, we have to talk more about it making the job "faster and easier", than "safer". It's a particular kind of programmer that likes things to be safer (i.e. people on this list :), but all kinds want their job to be faster and easier :) -- Don

On Wed, 2008-01-23 at 15:42 -0800, Don Stewart wrote:
[...]
Yes, we have to talk more about it making the job "faster and easier", than "safer". It's a particular kind of programmer that likes things to be safer (i.e. people on this list :), but all kinds want their job to be faster and easier :)
Be careful, though: this is only convincing against Java and C. Python, Ruby and Perl can make the same argument. Compared to those, using Haskell might in fact be slower and harder -- at least in the beginning. This is where Haskell's static typing enters the game and automates many of the boring and error-prone tasks, like finding all the use sites of a type (just change it and let the compiler tell you). If you use 'newtype' and maybe some more advanced type-checker features (MPTCs, existentials) you can let the type checker work for you (cf. "Lightweight Static Capabilities"). This is where Haskell could actually beat all those "dynamic" languages, which otherwise seem to be much easier to pick up for imperative programmers. Then there is concurrency, of course ...
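A minimal sketch of the newtype idea (invented module and names, not from the message or the paper): hide the constructor behind a smart constructor, and the type checker guarantees that every value of the type was built the approved way.

module Sorted (SortedList, fromList, toList) where

import Data.List (sort)

-- The constructor is not exported, so a SortedList can only be made
-- by fromList and is therefore guaranteed to actually be sorted.
newtype SortedList a = SortedList [a]

fromList :: Ord a => [a] -> SortedList a
fromList = SortedList . sort

toList :: SortedList a -> [a]
toList (SortedList xs) = xs

Functions that need sorted input can then demand a SortedList in their signature, and the requirement is checked at compile time instead of living in a comment.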

Tim Chevalier wrote:
[...]
But if that's the point you're trying to make, I agree that a lot of programmers seem to think they don't make mistakes, and thus might not be receptive to the siren song of referential transparency :-)
What I wanted to say is that focusing on referential transparency will not appeal that much to an imperative programmer, especially when he needs to overcome the Haskell learning curve. What may appeal, though, are the consequences of it, like:

* easier to repeat test cases for bugs
* easier to do automated tests (like QuickCheck; a small sketch follows below), since the state space is not that big (provided I count automatic vars on stack/heap as state)
* makes laziness work, which allows easier and efficient expression of producer-consumer style code
* easy undo (no need for the memento pattern etc.)
* allows monads to work, which adds options like built-in user logging or error recovery
* better chances for compilers to parallelize automatically
* safe STM
... and probably a lot more goodies.

On the other side there are downsides, like: what to do instead of reactive GUIs? GUI is a big part of a lot of applications. A lot of the efficient algorithms we have are state based. And again probably more.

If referential transparency by itself were that important for imperative languages then it would already have been added to IDEs* in some form, like a popup menu item named "check function purity". In some cases you could actually decide that the function is pure (especially if you went deeper, analyzing and annotating your objects with purity flags in your IDE).

* and IDEs like Visual Studio or Eclipse are incredibly good compared to the stuff Haskell has; anyway IMHO all IDEs are not good enough - they could be much better

Anyway, I'm not that qualified to discuss this (I hope this is my last post to this thread :) ). I just wanted to point out what could be the point of view of an imperative programmer, based on what I remember from the time I was getting more involved with functional programming. The reason I started with functional programming is surely not common - sometimes I may need to actually prove some program features.

Peter.
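A minimal QuickCheck sketch of the testing point (an illustrative example, not Peter's code): a property is just a pure function, so any failing input QuickCheck reports can be replayed exactly, with no hidden state to reconstruct.

import Test.QuickCheck

-- The property depends only on its argument, so a failure is a
-- self-contained, repeatable test case.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- ghci> quickCheck prop_reverseTwice
-- (prints something like: OK, passed 100 tests.)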

On 24 Jan 2008, at 10:45, Peter Hercek wrote:
* safe STM ... and probably a lot of more goodies
Especially STM would be a good point to elaborate on. Most big systems have issues around concurrency and state modification being broken. Anything that can reliably solve that problem is going to interest serious programmers. Maarten

On Jan 24, 2008 9:45 AM, Peter Hercek
A lot of efficient algorithms we have are state based. And again probably more.
Yes, and we can write those using the ST monad in Haskell. I think it's important to point this out, since some imperative programmers will just go "but what about if I *need* mutable state?" and probably tune out if they don't hear right away that Haskell can give it to them in a safe encapsulated form. -- Sebastian Sylvan +44(0)7857-300802 UIN: 44640862
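For example (a minimal sketch, not Sebastian's code; sumST is an invented name), an imperative-style loop over a mutable STRef, wrapped by runST into an ordinary pure function:

import Control.Monad.ST
import Data.STRef

-- Mutable state on the inside, a pure function on the outside:
-- runST's type guarantees the mutation cannot leak out.
sumST :: [Int] -> Int
sumST xs = runST (do
  ref <- newSTRef 0
  mapM_ (\x -> modifySTRef ref (+ x)) xs
  readSTRef ref)

-- ghci> sumST [1..10]  ==>  55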

fewer frustratingly unsolvable bugs down-the-road? When I have bugs in my Haskell programs (and usually I get type errors instead), I've always found them eventually and they're either a silly mistake or I realize that I misunderstood the problem I was trying to solve (it needs to be solved a different way)... which is great, being able to realize that and rewrite things! Usually not everything has to be rewritten because Haskell is pretty modular (unless used poorly :-). ~Isaac

On Thu, 2008-01-24 at 10:45 +0100, Peter Hercek wrote:
[...]
If referential transparency by itself were that important for imperative languages then it would already have been added to IDEs* in some form, like a popup menu item named "check function purity". In some cases you could actually decide that the function is pure (especially if you went deeper, analyzing and annotating your objects with purity flags in your IDE).
Doing it in the IDE would a) require much more from most IDEs and b) be almost entirely useless. Most IDEs don't even get as far as parsing the code; even the best rarely know much about the actual semantics of the language. This would require a rather deep analysis, and ultimately it is undecidable. Practically speaking, having such a feature in the IDE would be useless unless the programming style of most "imperative" programmers changed dramatically. The only functions such an analysis would say were pure are those that are rather trivial.

Either way, having such a feature in the IDE doesn't really help. A purity checker in the IDE isn't going to help when the function/method is unknown, e.g. when I write a function/method that takes a function or an object. A "purity annotation" would have to be at the language level, short of doing a whole-program analysis, which would be infeasible.

On Thu, 24 Jan 2008 10:29:23 -0600
Derek Elkins
[...] A "purity annotation" would have to be at the language level, short of doing a whole-program analysis, which would be infeasible.
Indeed - JML (Java Modelling Language) takes exactly this approach. -- Robin

On Jan 24, 2008 1:45 AM, Peter Hercek
[...] On the other side there are downsides like what to do instead of reactive GUIs? GUI is a big part for a lot of applications. [...]
GUIs can be expressed in a *much* more direct and modular way in functional programming than in imperative programming. See http://haskell.org/haskellwiki/TV and http://haskell.org/haskellwiki/Phooey for examples. No inversion of control is necessary. And not just for GUIs, but for reactive systems in general. See http://haskell.org/haskellwiki/Reactive and http://haskell.org/yampa/ . I'd cite easy & modular GUIs as a strong advantage of functional programming. Just to be clear, I mean really *functional* programming, not imperative programming in Haskell (IO). - Conal

Am Freitag, 25. Januar 2008 03:35 schrieb Conal Elliott:
[…]
See http://haskell.org/haskellwiki/Reactive and http://haskell.org/yampa/ .
Or better http://haskell.org/haskellwiki/Functional_Reactive_Programming which has come into existence recently.
[…]
Best wishes, Wolfgang

On Jan 23, 2008 5:29 AM, Simon Peyton-Jones
[...]
I'm still just learning Haskell, but maybe as a n00b I can give you some insight into what I think is important.

I will take a guess here and say most of your audience is from the object-oriented crowd. Their software engineering practices are probably entirely based upon the idea of wrapping state up in objects and passing them around. They're probably going to want ways to leverage these techniques without dropping everything.

I personally think it is neat that non-functional languages are starting to borrow many ideas from functional languages. C# has lambdas and LINQ, Java might be adding closures. Scala is functional but has access to all the goodies of the Java library. Python has list comprehensions. Even C++ is going to be adding lambda expressions (which are really handy for the STL algorithms, which are functional-like themselves).

Error handling and QA are very important in the real world. It might not hurt to show a few simple QuickCheck examples and cases where errors are caught at compile time. There are probably many examples in GHC development.

-- Ryan Dickie

This is pure "general waffle", but I saw the following comment on reddit.com which impressed me: "C isn't hard; programming in C is hard. On the other hand: Haskell is hard, but programming in Haskell is easy." Mike Simon Peyton-Jones wrote:
[...]

On Jan 23, 2008 8:29 AM, Simon Peyton-Jones
[...]
I'm still very much a newbie, but the one thing that struck me as the best feature coming from Python is the static typing. Changing the type of a function in Python will lead to strange runtime errors that take some work to debug, whereas, when I tinker with a program in Haskell, I already know it will work once it compiles. -Yaakov

Yaakov Nemoy wrote:
I'm still very much a newbie, but the one thing that struck me as the best feature coming from Python is the static typing. Changing the type of a function in Python will lead to strange runtime errors that take some work to debug, whereas, when I tinker with a program in Haskell, I already know it will work once it compiles.
I'm quite new to Haskell as well, and I must echo this sentiment. The mainstream has somewhat realized that it's a waste of time to tell the compiler the type of _everything_. Learning Haskell has completely reversed my feeling that static typing is an old, outdated idea.

The power of Haskell's type system makes it feel like you are programming in a dynamic language to some degree, yet all of it is type-checked, and that is just *really* cool. Honestly, when I first started reading a Haskell tutorial I was convinced that it was a runtime-typed language like Python. When the tutorial moved on to explain that it is statically typed, I could barely believe it.

The second thing I might want to highlight is the power of monads and other techniques like arrows and FRP. Granted, these are not easy concepts to impart in a talk, but I think they were really an important discovery in FP, as they deal with the "problem" of IO in such a beautifully orthogonal way. Following on from that, more advanced structures like arrows, and how all of this can contribute to really powerful DSLs, are I think a pretty big selling point.

Actually, come to think of it, one great way to show off the language is to show off the power of some of the libraries that have been written. For example, show how easily a parser can be created with Parsec, or show an example of FRP with Yampa. These examples will likely tantalize the programmers in the audience to want to learn more.

Good luck!

/mike.
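For instance, a minimal Parsec sketch (an illustrative example, not from the message): a parser for comma-separated integers in a handful of lines.

import Text.ParserCombinators.Parsec

-- Parse comma-separated integers, e.g. "1,23,456".
int :: Parser Int
int = fmap read (many1 digit)

ints :: Parser [Int]
ints = int `sepBy` char ','

-- ghci> parse ints "" "1,23,456"
-- Right [1,23,456]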

Michael Reid wrote:
The power of Haskell's type system makes it feel like you are programming in a dynamic language to some degree, yet all of it is type-checked, and that is just *really* cool.
to some degree, (in current Haskell compilers), it *is* more like a dynamic than a static language: except when optimized away, values of all types are represented by a pointer to their actual value. (this helps with parametric polymorphism and laziness (take :: Int -> [a] -> [a]).) (at least this is a difference compared to C++) ~Isaac

On Sat, 2008-01-26 at 20:49 -0500, Isaac Dupree wrote:
Michael Reid wrote:
The power of Haskell's type system makes it feel like you are programming in a dynamic language to some degree, yet all of it is type-checked, and that is just *really* cool.
to some degree, (in current Haskell compilers), it *is* more like a dynamic than a static language: except when optimized away, values of all types are represented by a pointer to their actual value. (this helps with parametric polymorphism and laziness (take :: Int -> [a] -> [a]).) (at least this is a difference compared to C++)
Boxing is orthogonal to dynamic/static. You can have boxing in statically typed languages, e.g. Haskell, Java, C# and you can have unboxed values in dynamically typed languages.

Derek Elkins wrote:
On Sat, 2008-01-26 at 20:49 -0500, Isaac Dupree wrote:
Michael Reid wrote:
The power of Haskell's type system makes it feel like you are programming in a dynamic language to some degree, yet all of it is type-checked, and that is just *really* cool. to some degree, (in current Haskell compilers), it *is* more like a dynamic than a static language: except when optimized away, values of all types are represented by a pointer to their actual value. (this helps with parametric polymorphism and laziness (take :: Int -> [a] -> [a]).) (at least this is a difference compared to C++)
Boxing is orthogonal to dynamic/static. You can have boxing in statically typed languages, e.g. Haskell, Java, C#
yes I know, although we are talking about communicating to people who don't necessarily know as much as we do.
and you can have unboxed values in dynamically typed languages.
really? Sure that's possible as an optimization, but I thought that to explicitly specify that would require a known static type. Or perhaps the bit-"tagging" by which some Scheme implementations are able to hold small integers without a pointer (IIRC)? ~Isaac

Derek Elkins wrote:
and you can have unboxed values in dynamically typed languages.
Isaac Dupree wrote:
really? Sure that's possible as an optimization, but I thought that to explicitly specify that would require a known static type. Or perhaps the bit-"tagging" by which some Scheme implementations are able to hold small integers without a pointer (IIRC)?
That's exactly what Ruby does - symbols, Fixnums (integers that fit into a single machine word), bools and nil are unboxed, distinguished from boxed types by the lowest 2 bits. That's indistinguishable at runtime, though - everything Just Works as though they're ordinary objects, including flipping the integer class to Bignum (which is boxed) as necessary. -- Alex

Simon Peyton-Jones wrote:
1. Small examples of actual code. The goal here is (a) to convey a visceral idea of what functional programming *is*, rather than just assume the audience knows (they don't), and (b) to convey an idea of why it might be good.

Here is one I came across in the last few days. I was reviewing some code in Java, and it contained a function that looked through a list of Foo instances for the latest update. I don't actually speak Java, but it went something like this:
// Get the Foo that was most recently updated.
Foo latestUpdate (Iterator <Foo> iterator) {
    long latestTimeSoFar = 0;
    Foo latestFooSoFar = null;
    while (iterator.hasNext()) {
        Foo item = iterator.next();
        if (item.updateTime > latestTimeSoFar) {
            latestTimeSoFar = item.updateTime;
            latestFooSoFar = item;
        }
    }
    return latestFooSoFar;
}

This takes an iterator over some collection of Foos and finds the one with the highest value of updateTime. 9 lines of code, or 12 with the closing curly brackets.

In Haskell this is so short and obvious you probably wouldn't bother declaring it as a function, but if you did, here it is:

-- Find the Foo that was most recently updated.
-- (maximumBy comes from Data.List, comparing from Data.Ord.)
latestUpdate :: [Foo] -> Foo
latestUpdate foos = maximumBy (comparing updateTime) foos

Of course you could always write it in point-free format, but I think that would be over-egging things.

Paul.

On Thu, 2008-01-24 at 21:11 +0000, Paul Johnson wrote: [snip]
// Get the Foo that was most recently updated. Foo latestUpdate (Iterator <Foo> iterator) { [...] }
This takes an iterator over some collection of Foos and finds the one with the highest value of updateTime. 9 lines of code, or 12 with the closing curly brackets.
In Haskell this is so short and obvious you probably wouldn't bother declaring it as a function, but if you did, here it is:
-- Find the Foo that was most recently updated. latestUpdate :: [Foo] -> Foo latestUpdate foos = maximumBy (comparing updateTime) foos
Of course you could always write it in point-free format, but I think that would be over-egging things.
To have a fair comparison you'd have to use some typeclass (Foldable would be right?) that supports different types of collections.
import Data.Foldable as F
import Data.Ord (comparing)

latestUpdate :: F.Foldable f => f Foo -> Foo
latestUpdate = F.maximumBy (comparing updateTime)
But yeah, your point is very, very valid. I've programmed some Java in my days, and while I wouldn't call myself an expert I don't see any immediate "unawkward" abstractions that can be performed in the Java code. If you have all Foos implement a method "getProperty(String)" that takes a String and returns the property of that class instance, you _could_ implement the static method (correct Java lingo?) maximumBy(String s) that is roughly like this:

class MyClass {
    public static Foo maximumBy (Iterator <Foo> iterator, String s) {
        Comparable max = null;
        Foo maxFoo = null;
        while (iterator.hasNext()) {
            Foo item = iterator.next();
            if (max == null || item.getProperty(s).compareTo(max) > 0) {
                max = item.getProperty(s);
                maxFoo = item;
            }
        }
        return maxFoo;
    }
}

and then

Foo latestUpdate (Iterator <Foo> i) {
    return MyClass.maximumBy(i, "latestUpdate");
}

Totally untested code of course... The awkward part obviously comes from the fact that properties aren't functions that you can pass to the maximumBy method. Anyhow, Haskell's nice polymorphism is surely a good argument for it (but not for functional programming per se, right?)

Mattias

This takes an iterator over some collection of Foos and finds the one with the highest value of updateTime. 9 lines of code, or 12 with the closing curly brackets.
In Haskell this is so short and obvious you probably wouldn't bother declaring it as a function, but if you did, here it is:
-- Find the Foo that was most recently updated. latestUpdate :: [Foo] -> Foo latestUpdate foos = maximumBy (comparing updateTime) foos
Of course you could always write it in point-free format, but I think that would be over-egging things.
Java's just wordy like that. In Python you'd say max(foos, key=lambda x: x.update_time). Python / Perl / Ruby / Smalltalk have had first-class functions forever, so those are basically already in the mainstream. They may impress a Java or C programmer who's never seen a dynamic language. It might reassure a Python programmer that static typing doesn't preclude using closures and doesn't mean C++ or Java style huge long declarations. So you probably need some other examples to convince Python programmers that their language didn't just cherry-pick all the great ideas and that there are none left.
Well... ghc still has a single-threaded garbage collector, so all the "par" threads must stop for garbage collection. So scaling to the level of a cluster would be significantly sub-linear.
A real time incremental gc would be really cool. Some people claim they exist, but which languages have one?

On Jan 24, 2008, at 6:04 PM, Evan Laforge wrote:
Well... ghc still has a single-threaded garbage collector, so all the "par" threads must stop for garbage collection. So scaling to the level of a cluster would be significantly sub-linear.
A real time incremental gc would be really cool. Some people claim they exist, but which languages have one?
Define "real time". I'll note that, after all the mud that's been slung at Java, you've been able to get low-pause-time parallel GC in Java for years and years, and can get "real time" GC for various of your favorite definitions of that term if you're willing to look a little. Relatively few other language implementations can claim the same. Writing a decent parallel GC (eg, faster than the tuned sequential GC it's replacing on 2 or more CPUs) is hard. Sounds like GHC is nearly there, though. GC implementors dream of systems where read barriers are nearly free. Better still, everything that's been learned about and published in Java-land carries across to Haskell (though the tradeoffs in eg mutation behavior are often different). -Jan-Willem Maessen

A real time incremental gc would be really cool. Some people claim they exist, but which languages have one?
Define "real time". I'll note that, after all the mud that's been slung at Java, you've been able to get low-pause-time parallel GC in Java for years and years, and can get "real time" GC for various of your favorite definitions of that term if you're willing to look a little.
I'd personally settle for soft real time, like "very rarely takes more than n ms in a single gc run" so that if you are calculating samples or something chances of getting a dropout are low. I think most people would be happy there. So Java sounds like it would do for me there. Unfortunately it has all the rest of that language attached to it :)

On Jan 24, 2008, at 6:04 PM, Evan Laforge wrote:
Java's just wordy like that. In python you'd say max(foos, key=lambda x: x.update_time). Python / perl / ruby / smalltalk have had first class functions forever, so those are basically already in the mainstream. They may impress a java or C programmer who's never seen a dynamic language. It might reassure a python programmer that static typing doesn't preclude using closures and doesn't mean c++ or java style huge long declarations.
So you probably need some other examples to convince python programmers that their language didn't just cherry pick all the great ideas and that there are none left.
Actually, citing the degree to which Python, Perl, Ruby etc. have adopted some of the basics of functional programming seems like a clear win. The popularity of a book such as "Higher Order Perl" almost begs the question -- how much better would it be if folks were doing this in a language designed to make such things efficient from the get-go?

While I'm not knowledgeable enough to give the inside scoop about Goldman, they're a good example in that they're doing some very functional stuff, especially FRP-related, for lots of their main worldwide pricing systems, but mainly doing this in languages that aren't purely functional. Here's one blog post that at least demonstrates that this is happening: http://www.kimbly.com/blog/000505.html As I understand it, since they're working on a massively distributed architecture (literally worldwide), functional techniques are not only a huge win, but unavoidable.

Along the lines of citing "Higher Order Perl", the popularity of C++ template metaprogramming might be a neat way to make the pitch as well. Drawing out how template metaprogramming is a form of functional programming might be a bit much for the talk. But, still, briefly mentioning that it is widely recognized as helping to make things clean and efficient, although in a massively verbose and sometimes confusing way, and that more purely functional languages can do the same thing in a nicer way, might again be a way to make the point.

The neatest piece of simple Haskell code that I can think of, by the way, is the oft-cited Hamming sequence. There's so much going on in it, the manner in which it exploits laziness and infinite streams, and so forth.

--S
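For reference, the Hamming sequence in its oft-cited form (one standard rendering; the exact code varies between the versions floating around):

-- The Hamming numbers: every product of 2, 3 and 5, in increasing order.
-- The list is defined in terms of itself, and laziness makes that work.
hamming :: [Integer]
hamming = 1 : merge (map (2*) hamming)
                    (merge (map (3*) hamming) (map (5*) hamming))
  where
    merge xxs@(x:xs) yys@(y:ys)
      | x < y     = x : merge xs  yys
      | x > y     = y : merge xxs ys
      | otherwise = x : merge xs  ys

-- ghci> take 10 hamming  ==>  [1,2,3,4,5,6,8,9,10,12]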

On 24 Jan 2008, at 3:04 PM, Evan Laforge wrote:
[...]
Java's just wordy like that. In python you'd say max(foos, key=lambda x: x.update_time). Python / perl / ruby / smalltalk have had first class functions forever, so those are basically already in the mainstream.
But, while Python/Perl/Ruby/Smalltalk may have borrowed such techniques already, the syntax still conspires against them. If the audience knows one or more of these languages, I would suggest finding its syntactic infelicities and contrasting with Haskell (Haskell's syntax is its strongest point, IMHO). jcc

On 25.01.2008, at 00:04, Evan Laforge wrote:
Well... ghc still has a single-threaded garbage collector, so all the "par" threads must stop for garbage collection. So scaling to the level of a cluster would be significantly sub-linear.
A real time incremental gc would be really cool. Some people claim they exist, but which languages have one?
James McCartney's SuperCollider [1] has a non-copying incremental collector based on [2], though not a parallel one.

BTW, is an implementation of the incremental collector described in [3] available somewhere? Are there any plans to incorporate it into future GHC versions?

<sk>

[1] http://supercollider.sourceforge.net
[2] P. R. Wilson and M. S. Johnstone. Real-time non-copying garbage collection. In ACM OOPSLA Workshop on Memory Management and Garbage Collection, 1993.
[3] A. M. Cheadle, A. J. Field, S. Marlow, S. L. P. Jones, and R. L. While. Exploring the barrier to entry: incremental generational garbage collection for Haskell. In ISMM '04: Proceedings of the 4th International Symposium on Memory Management, pages 163-174, New York, NY, USA, 2004. ACM.

Hi Stefan,
A real time incremental gc would be really cool. Some people claim they exist, but which languages have one?
james mccartney's supercollider [1] has a non-copying incremental collector based on [2], though not a parallel one.
btw, is an implementation of the incremental collector described in [3] available somewhere? are there any plans to incorporate it into future ghc versions?
The collector in [3] isn't in a production version of GHC for several reasons, mainly because I finished it prior to significant reworking of GHC's backend (C-- target etc.), and while finishing my PhD I couldn't devote the time to keeping it current - it is a massive engineering task! Furthermore, the scheduling of work-based increments is problematic and I only made a preliminary start on time-based collection.

That said: our current work here at Imperial has focussed on implementing exactly what we did for GHC in Java, targeting the Jikes RVM, and we've made a lot of progress (*shameless plug*: check out our paper at VEE 2008, "A Method Specialisation and Virtualised Execution Environment for Java"). With this work nearing completion my attention is turning back to GHC, but with particular focus on concurrency, multi-core / parallel collection and the STM world... I haven't ruled out the collector being incremental too, so I wouldn't be surprised if I resurrect this code base... Of course, these things take time and I have other commitments...

I know Simon has been working on parallel collection, and I would expect this to be more generally useful than a single-threaded incremental collector, so I'll leave it to him to say how that's coming on...

Cheers

Andy

--
* Andrew Cheadle           email: a.cheadle@doc.ic.ac.uk
* Department of Computing  http://www.doc.ic.ac.uk/~amc4/
* Imperial College London

See also A parallel, real-time garbage collector Perry Cheng and Guy Blelloch PLDI 2001 This was implemented in the TILT compiler for SML (which, to be fair, is more of a research vehicle than a programmer-friendly implementation). -Dan On Jan25, Stefan Kersten wrote:
[...]

Evan Laforge wrote:
Java's just wordy like that. In python you'd say max(foos, key=lambda x: x.update_time).
While this is true, I was also thinking of the typical audience SPJ specified: senior technical people and managers. Most of these people have heard of Python and Ruby, but see them as "scripting" languages that are not suitable for Real Work. Most of the ones I have met seem to regard a static type system and compilation to native code as prerequisites for "real" programming languages (although Java has greatly weakened the second prejudice).

More importantly, these people have not read "Beating the Averages". They see the programming language market as a Keynesian Beauty Contest, in which the judges get rewarded for voting for the most popular candidate. Therefore the trick to winning the contest is not persuading the judges that you are the most beautiful, but that you are the one that most of the other judges are going to vote for. Paradoxically this is best done by emphasising your similarity with what they already know. The message is "I'm just like her, only better". In the case of Haskell, ways to do this would include:

* Outline (single PowerPoint slide) a mapping between UML class diagrams and Haskell data and class types. Mention that Haskell classes and data types correspond roughly to Java interface and implementation classes respectively.
* Emphasise that Haskell *is* statically type checked; it's just that type inference means you don't have to declare the type of every single thing. Repeat this point three or four times during the talk to make sure it sticks.
* Mention the covariance type hole in Java and say that Haskell doesn't have it. In Haskell "static type system" means what it says.
* Haddock does what Doxygen and Javadoc do. The ones who know about automatic documentation will get a warm fuzzy feeling. The ones who don't will find it interesting. Also mention HUnit. Mention QuickCheck in passing, but don't dwell on it because it's not what they know.
* Mention that there is an Eclipse plug-in for Haskell.
* Talk at length about the Haskell library and package mechanisms. Large projects often have multi-version configuration management issues, so talk about how library versions can be selected at compile time by a configuration file.
* Talk about the FFI. Most of these people have big important monoliths of legacy code that they don't like to talk about in public.
* Trying to dereference "null" is a significant source of bugs, so explain that null references are impossible in Haskell. If you have "x :: Foo" then "x" can never be null; it *has* to refer to a Foo object. If we want a possible null value then we say "Maybe Foo" to signal this, and type safety means you have to account for possible "Nothing" values.
* Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature.

The best way to demonstrate the innate productivity improvement in Haskell is with numbers. Haskell projects to re-implement existing code routinely report 4-fold or better code reduction. The 1993 paper on Ada vs Awk vs C++ vs Haskell... is still good ammunition, as are the GetOpts implementations in C and Haskell.

A lot of this is to demonstrate that the normal development infrastructure is actually available. This is another thing that these people expect to see in a "real" language. It's not that this is important for productivity; it's just stuff you have to have to be considered a "real" language. Maybe I should write a "Real Programming Languages Eat Lots of Quiche" piece.

Paul.

On 1/26/08, Paul Johnson
* Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature.
This is true only if talking to people in high-income nations. Cheers, Tim -- Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt "We should not be simply fighting evil in the name of good, but struggling against the certainties of people who claim always to know where good and evil are to be found." -- Tzvetan Todorov

Tim Chevalier wrote:
On 1/26/08, Paul Johnson
wrote: * Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature.
This is true only if talking to people in high-income nations.
Even in low-income nations, it's only false in the short term. If you have skilled programmers with computers and Internet connections then their wages inflate to the world norm. IIRC India is seeing 20%/year wage inflation for comp-sci graduates. And that's without the impact of H1-B and related programmes around the world. Paul.

On 1/26/08, Paul Johnson
Tim Chevalier wrote:
This is true only if talking to people in high-income nations.
Even in low-income nations, it's only false in the short term. If you have skilled programmers with computers and Internet connections then their wages inflate to the world norm. IIRC India is seeing 20%/year wage inflation for comp-sci graduates. And that's without the impact of H1-B and related programmes around the world.
It's true that India seems to be going in that direction, but personally I don't feel I have the background or temerity to suggest that it will definitely happen for the rest of the world. Cheers, Tim -- Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt "We're born, we live for a brief instant, and we die. Technology is not changing it much, if at all."--Steve Jobs

Tim Chevalier/Paul Johnson about "cheap computers, expensive programmers"
This is true only if talking to people in high-income nations.
Even in low-income nations, it's only false in the short term. If you have skilled programmers with computers and Internet connections then their wages inflate to the world norm. IIRC India is seeing 20%/year wage inflation...
It's true that India seems to be going in that direction, but personally I don't feel I have the background or temerity to suggest that it will definitely happen for the rest of the world.
The issue is less related to the actual income than to global politics, sometimes doctrinal. The "invisible hand" of the market may not always easily change things, and if a given nation/country has historically strong views on the "power of the people", the evolution is different than at your place. India doesn't seem to boast that they are numerous and powerful. The Chinese do... We shall see.

You may perhaps remember (which you won't, because you are too young) the glorious times when computers became a reality even in the Soviet Union. They had at that time plenty of really good mathematicians. But the totalitarian view of science, plus nationalistic pride, made them (the rulers, not the scientists...) think and say that with so many good people there was no need to develop automated programming tools. They neglected programming languages. Russia and its satellites became a kind of desert here, not only because of economic problems...

Jerzy Karczmarczuk

Jerzy, this is a very interesting point you bring up, from my perspective. I should point out that certain US-trained mathematicians (myself included) are actually quite jealous of the Russian math education system - they produce mathematicians who tend to be excellent in depth and breadth, who are great both computationally and in terms of facility with abstract formalism. Kontsevich, Drinfel'd, Gromov, Givental, Beilinson, the whole Gelfan'd school - these people are incredible on all fronts.

When I was an undergrad math major in the US, there was a clear culture of valuing proofs over computation. Integrals were sneered at; the more abstract the argument, the more representative of "true mathematics" it surely must be. Now that I'm older I recognize that this is a special case of the teenager's way of declaring himself special: "I'm *better* than those idiot physicists who are so trivial as to care about integrals". Could be a recent convert to jazz talking about rock, or whatever.

Anyway, now we're older, and we realize that it would have helped our math understanding quite a bit had we learned more physics, engineering, etc. Or had we learned 19th-century mathematics well. The Russian program seems to do this, actually (at least for the sample set of kids that make it to the US).

What you're telling me below is that part of this emphasis on old-world mathematics might have come from an arrogance/bias against computers? Interesting - I'll have to think about this. I've often heard from my Eastern European colleagues that they learned almost nothing about computer science back home...

On Sat, 26 Jan 2008, jerzy.karczmarczuk@info.unicaen.fr wrote:
You may perhaps remember (which you won't, because you are too young) the glorious times when computers became a reality even in Soviet Union. They had at that time plenty of really good mathematicians. But the totalitarian view of the science, plus the nationalistic proudness, made them (the rulers not the scientists...) think and say that with so many good people, there is no need to develop the programming automated tools. They neglected the programming languages. Russia and their satellites became a kind of desert here not only because of economical problems...
Jerzy Karczmarczuk

Dipankar Ray writes:
I should point out that certain US-trained mathematicans (myself included) are actually quite jealous of the Russian math education system - they produce mathematicians who tend to be excellent...
Anyway, no we're older, and we realize that it would have helped our math understanding out quite a bit had we learned more physics, engineering, etc. Or had we learned 19th century mathematics well. The Russian program seems to do this, actually (at least for the sample set of kids that make it to the US).
What you're telling me below is that part of this emphasis on old-world mathematics might have come from an arrogance/bias against computers? Interesting - I'll have to think about this.
I've often heard from my Eastern European colleagues that they learned almost nothing about computer science back home... ===
Well, I have the impression, at least I intended to say just the reverse (not the opposite), that the arrogance/bias against computers was partly "justified" by a very good level in math. The decision makers confused mathematical science with the domain of computation...

[[Let's skip the ideological war against cybernetics as the reactionary pseudo-science which would enslave the proletarian class. By the end of the '50s and the beginning of the '60s this was already completely anecdotal, although some bitter aftertaste persisted...]]

The stagnation in the development of automation followed the same pattern. "We have good, brave people. Who needs 'em robots?"... It is an *established fact* that some of the casualties during the struggle to contain Chernobyl could have been avoided if the authorities had thought more about replacing humans with machines. But it was not only the authorities; folks themselves rushed to help, and safety measures were not respected as they would have been if a more "liberal" doctrine, with a calculation of risk "à l'Américaine", had prevailed.

And, PLEASE, Artem V. Andreev, before you say plainly again that I am "definitely wrong": I didn't invent what I say, and I hope nobody can accuse me of any inimical thoughts against Russians. You say:
Not wishing to refute your general point, I can only note that U.S.S.R did have its own school of computer science in general, and of developing programming language implementations in particular. There were Fortran and Algol compilers, there is Refal, after all, which is a purely Soviet invention (and which, for that matter, is still being taught in several Russian universities). So in this particular respect you are definitely wrong.
Please compare the size of the country, and its achievements in military equipment, space exploration, etc., with what we could have seen in the development of software... An Algol compiler was even built in my Poland, a much smaller and poorer country. Nothing to be proud of.

My goodness, Refal... You mean the Turchin stuff? OK, OK... Nice guy, *really*. Nice idea. And no consequences... Nothing bad can I say, because I know a tiny bit about the related stuff. The language Snobol (Griswold et al.) was also based on Markov algorithms (and some centuries ago I wrote a small textbook on Snobol, which I planned to teach at that time in Poland. I knew that Refal existed, and... I couldn't learn anything more about it). I wonder how much Markov himself (1903-1979) - whose "Theory of Algorithms" was published in the American Mathematical Society Translations in 1960 - could have done if he had lived in a more open society, one which could *better* exploit the potential of such people.

Do you think that I haven't heard about A.P. Yershov? ACM still cites him: his papers on the ALPHA system (JACM 1966), on programming of arithmetic operations (CACM 1958), etc. Some other names deserve mentioning as well. But what the system did cannot be defended. This "school of computer science" gave some theory to humanity. But no, or almost no, software, sorry. I *sincerely* hope that this has changed now.

Jerzy Karczmarczuk

jerzy.karczmarczuk@info.unicaen.fr writes:
And, PLEASE, Artem V. Andreev, before you say plainly again that I am "definitely wrong". I didn't invent what I say, and I hope nobody can accuse me of any inimical thoughts against Russians.

I had not the slightest intention to accuse you of anything. Nor did I want to defend how the U.S.S.R. treated scientists and all that. But I really wonder where you got the idea that in the U.S.S.R. computer science was treated in any way *differently* from any other branch of science. Hardware engineering -- yep, that was a kind of disaster, which of course could not but have an absolutely negative impact on programming as well as on theoretical computer science. But I have never heard that the U.S.S.R. authorities banned any kind of software development as such, on the grounds that there were enough mathematicians around...
Do you think that I haven't heard about A.P. Yershov? ACM still cites him, his papers on the system ALPHA (JACM 1966), programming of arith. ops. (CACM 1958), etc. Some other names deserve mentioning as well. But what the system did, cannot be defended. This "School of computer science" gave some theory to the humanity. But no, or almost no software, sorry.

That's of course true. But that does not mean that no software was being developed; it only means the software did not cross the borders...
-- S. Y. A(R). A.

Hello jerzy, Sunday, January 27, 2008, 1:48:07 AM, you wrote:
I've often heard from my Eastern European colleagues that they learned almost nothing about computer science back home... ===
Well, I have the impression, at least I intended to say just the reverse (not the opposite), that the arrogance/bias against computers has been partly "justified" by a very good level in math. The decision makers confounded the math science with the domain of computation...
I don't think there were any decisions. The Soviet system is just a system of government monopolies, where each monopoly "works" in the way that is most comfortable for its bureaucrats rather than for its "users". That's true for factories, communal services, shops, anything. At the beginning of the computer era, computational mathematics was the only computer application, and our CS started in those fields, like the American one. But then new applications arrived, and Western CS switched to serving them, while here the professors continued to teach that they knew better, pushed the math into the programmes of other institutes, and so on. It's no problem for the Soviet system that an institute prepares specialists for non-existent applications - the students are expected to learn by themselves. So we just have a system of "science for science" without any practical outcome, with the content of this science dictated by professors whose knowledge was current 50 years ago. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Hello Dipankar, Sunday, January 27, 2008, 12:16:38 AM, you wrote:
Anyway, no we're older, and we realize that it would have helped our math understanding out quite a bit had we learned more physics, engineering, etc. Or had we learned 19th century mathematics well. The Russian program seems to do this, actually (at least for the sample set of kids that make it to the US).
Oh yes, they really do still study 19th-century physics, but not because of great minds - rather, because of the age of the university professors. I studied at Moscow University in 89-91 and the department of computer languages still taught Lisp at that time (!). A few months ago I had a conversation with a current student, and they still learn Lisp (!!!). It seems they will switch to more modern FP languages no earlier than when this particular professor, the head of the PL department, who did interesting AI research in the 60s, dies, or at least retires. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Sun, 2008-01-27 at 11:49 +0300, Bulat Ziganshin wrote:
Hello Dipankar,
Sunday, January 27, 2008, 12:16:38 AM, you wrote:
Anyway, no we're older, and we realize that it would have helped our math understanding out quite a bit had we learned more physics, engineering, etc. Or had we learned 19th century mathematics well. The Russian program seems to do this, actually (at least for the sample set of kids that make it to the US).
oh, yes, they are really still study 19th century physics, but not because of great mind, but due to age of university professors. i've studied at Moscow University in 89-91 and department of computer languages still studied Lisp at those times (!). a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
This reminds me: I worked at a Dutch telecom software production company for a short while in 1999, and they had two Russian software engineers there, one from St. Petersburg and one from Vladivostok, both female and under 25 years of age. They programmed in C and were highly respected by their managers and colleagues! So there are at least counterexamples :-) Regards, Hans van Thiel

Hello Hans, Sunday, January 27, 2008, 5:02:57 PM, you wrote:
studied at Moscow University in 89-91 and department of computer languages still studied Lisp at those times (!). a few months ago i
This reminds me, I worked at a Dutch telecomm software production company for a short while in 1999 and they had two Russian software engineers there, one from St. Petersburg and one from Wladiwostok, both female and under 25 years of age. They programmed in C and were highly respected by their managers and colleagues! So, there are at least counterexamples :-)
No. You should have asked them HOW they learned programming, and I'm pretty sure (knowing too much about our universities and institutes) that they were simply self-taught - like myself. Generally speaking, our higher education is only now starting to teach students "new" Java technologies - you can imagine how old the knowledge they get there is. Actually, all the good Russian programmers I have ever seen are self-taught. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Sun, 27 Jan 2008, Bulat Ziganshin wrote:
Hello Hans,
Sunday, January 27, 2008, 5:02:57 PM, you wrote:
This reminds me, I worked at a Dutch telecomm software production company for a short while in 1999 and they had two Russian software engineers there, one from St. Petersburg and one from Wladiwostok, both female and under 25 years of age. They programmed in C and were highly respected by their managers and colleagues! So, there are at least counterexamples :-)
no. you should asked them HOW they learned programming and i'm pretty sure (knowing too much about our universities and institutes) that they were just a self-learned - like myself. generally speaking, our higher education now just starting to teach students "new" Java technologies - you can imagine how old is knowledge they get there. actually, all the good russian programmers i ever seen are self-learned
Many things I need today for math and computer science I have acquired in an auto-didactic way. These are usually the things I can remember most easily. Isn't it a good education system if it allows, or supports, or encourages or forces you to learn this way? :-)

On Jan 27, 2008 11:49 AM, Bulat Ziganshin
oh, yes, they are really still study 19th century physics, but not because of great mind, but due to age of university professors. i've studied at Moscow University in 89-91 and department of computer languages still studied Lisp at those times (!). a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I learned Haskell at MEPhI. And Lisp seems to be really popular now thanks to Paul Graham. What I really want to mention is that the philosophy of education in Russia considers "modern" irrelevant. "Modern" changes really fast, and it is no better to teach something modern than something outdated - unless you are going to create a ballast of Java programmers who will prevent progress and preserve market stability, i.e. really be outdated. What the university tries to do is teach you to swim really well, so you'll have a better chance of figuring out where to swim. I mean they teach you how to learn, nothing else. Everything else is just a nice bonus. -- vir http://vir.comtv.ru/

On Jan 27, 2008 3:49 AM, Bulat Ziganshin
a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I dunno. Sussman and Abelson are not getting any younger, and neither is Felleisen, but others have taken up that torch. So far, those who waited for Lisp to die out have spent a long time waiting. It has not been a winning bet. -Brian -- Brian T. Sniffen bts@alum.mit.edu or brian.sniffen@gmail.com http://www.evenmere.org/~bts

On Sun, 2008-01-27 at 17:25 -0500, Brian Sniffen wrote:
On Jan 27, 2008 3:49 AM, Bulat Ziganshin
wrote: a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I dunno. Sussman and Abelson are not getting any younger, and neither is Felleisen, but others have taken up that torch. So far, those who waited for Lisp to die out have spent a long time waiting. It has not been a winning bet.
No language that was ever "popular" has ever died as far as I can tell.

Derek Elkins writes: //Discussion about Lisp in Russia, some people not getting younger, Scheme with types, and other bedlam//
No language that was ever "popular" has ever died as far as I can tell.
This is one of the persistent "truths" which has to be carefully interpreted. Languages mutate and give offspring, sometimes bearing the same name. The original Fortran is undoubtedly dead and buried. Long live Fortran! In some cases people defend this thesis with sophistry. Algol is dead. No sense in disputing it. So what? It simply was never "popular enough"...

Since Nature abhors a vacuum, all niches tend to become non-empty, and for any language there will be some guys who will play with it. Snobol is alive, APL as well. Perhaps even Simula. When can we say that a language is *really* dead? How many users? Languages are "alive" while the people who used them are alive, and we haven't had time enough to kill all of them, patriarchs... Look, Simon Peyton Jones was born more or less simultaneously with Fortran. And he is not so terribly old, is he? (Weeeeelll, perhaps for some of you, but not for me.)

Anyway, a language, like any other conceptual structure, can be stored and communicated. You may kill all the "working instances" and rekindle it later. In such a way it is difficult to kill a religion, or a political doctrine. But it may die - become useless/unused - temporarily. So, you never really know...

Jerzy Karczmarczuk

G'day all. Quoting jerzy.karczmarczuk@info.unicaen.fr:
Algol is dead. No sense in disputing it.
And yet Delphi is still alive. So is Modula-3, though it tends to be referred to as "Java" these days. And, of course, Haskell is ensuring that Miranda will never really die. Cheers, Andrew Bromage

brian.sniffen:
On Jan 27, 2008 3:49 AM, Bulat Ziganshin
wrote: a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I dunno. Sussman and Abelson are not getting any younger, and neither is Felleisen, but others have taken up that torch. So far, those who waited for Lisp to die out have spent a long time waiting. It has not been a winning bet.
And just as PLT Scheme announces they're moving to immutable, pure lists http://lambda-the-ultimate.org/node/2631 They'll be getting a type system soon, at this rate ;) -- Don

On Sun, 2008-01-27 at 14:30 -0800, Don Stewart wrote:
brian.sniffen:
On Jan 27, 2008 3:49 AM, Bulat Ziganshin
wrote: a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I dunno. Sussman and Abelson are not getting any younger, and neither is Felleisen, but others have taken up that torch. So far, those who waited for Lisp to die out have spent a long time waiting. It has not been a winning bet.
And just as PLT Scheme announces they're moving to immutable, pure lists http://lambda-the-ultimate.org/node/2631
They'll be getting a type system soon, at this rate ;)
Well, we have "The Design and Implementation of Typed Scheme" very recently: http://www.ccs.neu.edu/scheme/pubs/popl08-thf.pdf This is something in the "soft typing" tradition (and uses PLT Scheme as the vehicle). I believe PLT Scheme already supports an HM-typed version of Scheme, though primarily for pedagogical purposes, if I remember correctly. It is, however, unlikely that Scheme will ever be statically typed "by default."

Well, the POPL talk was very pro-types, saying that when you move from a scripting language to a language for writing real systems you need static types.
On Jan 27, 2008 9:52 PM, Derek Elkins
On Sun, 2008-01-27 at 14:30 -0800, Don Stewart wrote:
brian.sniffen:
On Jan 27, 2008 3:49 AM, Bulat Ziganshin
wrote: a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
I dunno. Sussman and Abelson are not getting any younger, and neither is Felleisen, but others have taken up that torch. So far, those who waited for Lisp to die out have spent a long time waiting. It has not been a winning bet.
And just as PLT Scheme announces they're moving to immutable, pure lists http://lambda-the-ultimate.org/node/2631
They'll be getting a type system soon, at this rate ;)
Well we have: "The Design and Implementation of Typed Scheme" very recently http://www.ccs.neu.edu/scheme/pubs/popl08-thf.pdf This is something in the "soft typing" tradition (and uses PLT Scheme as the vehicle.)
I believe PLT Scheme already supports a HM typed version of Scheme though primarily for pedagogical purposes if I remember correctly.
It is however, unlikely that Scheme will ever be statically typed "by default."
_______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

You mean as in the POPL paper, http://lambda-the-ultimate.org/node/2622 ?
On Jan 27, 2008 10:30 PM, Don Stewart
And just as PLT Scheme announces they're moving to immutable, pure lists http://lambda-the-ultimate.org/node/2631
They'll be getting a type system soon, at this rate ;)
-- Don _______________________________________________ Haskell-Cafe mailing list Haskell-Cafe@haskell.org http://www.haskell.org/mailman/listinfo/haskell-cafe

Hello Jerzy and Bulat,

Thanks for your perspectives. Bulat, I can understand that you find it shocking that the folks at Moscow University still study Lisp, but I wouldn't be so quick to condemn them for being dinosaurs. After all, MIT just stopped teaching the SICP course (using Scheme), and I don't believe that they replaced it with an intro CS course that uses (say) Haskell or ML! Nor has Berkeley, as far as I know. ...OK, I looked: the MIT intro course is now taught in Python. I'll let you decide whether that's a step up from Scheme.

Which brings us back to the topic of the original thread - Simon's request for perspectives. The advances in type theory of the past 20 years, which are appreciated so readily here, don't seem to have achieved universal acceptance in industry or in academia. What I mean by this is that if I look at the CS programs at Berkeley, MIT, CMU, I don't see a huge emphasis on PL. Looking now at the MIT OpenCourseWare offerings in EECS, I see no undergrad course that suggests you'd learn anything about modern type theory.

Of course we know here of success stories involving modern FP languages. But there is no Haskell or ML book that has had anything close to the influence of K&R's C book. One might argue that adoption on that scale is not the goal of the Haskell community (was it Kernighan, Ritchie, or Thompson's goal? I think not), but still, it's weird to me that:

1) we're clearly on to something, but still

2) many smart people who are interested walk away frustrated (it is not so easy to learn (is the hardness necessary? perhaps?), relative to K&R, for example).

3) most of the canonical US universities for CS (MIT, Berkeley, Stanford, CMU, etc.) basically don't teach Haskell or ML, or even talk much about them, relative to how much they talk about, say, Java.

It's one thing that companies don't move forward; yet another thing that universities don't either. Why is that? Why, in 2008, is Java taught more than Haskell?

On Sun, 27 Jan 2008, Bulat Ziganshin wrote:
Hello Dipankar,
Sunday, January 27, 2008, 12:16:38 AM, you wrote:
Anyway, no we're older, and we realize that it would have helped our math understanding out quite a bit had we learned more physics, engineering, etc. Or had we learned 19th century mathematics well. The Russian program seems to do this, actually (at least for the sample set of kids that make it to the US).
oh, yes, they are really still study 19th century physics, but not because of great mind, but due to age of university professors. i've studied at Moscow University in 89-91 and department of computer languages still studied Lisp at those times (!). a few months ago i have a conversation with today student and they still learn Lisp (!!!). it seems that they will switch to more modern FP languages no earlier that this concrete professor, head of PL department, which in 60s done interesting AI research, will dead, or at least go to the pension
-- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Jan27, Dipankar Ray wrote:
What I mean by this is that if I look at the CS programs at Berkeley, MIT, CMU, I don't see a huge emphasis on PL. Looking now at the MIT opencourseware offerings in EECS, I see no undergrad course that suggests that you'd learn anything about modern type theory.
3) most of the canonical US universities for CS (MIT, Berkeley, Stanford, CMU, etc) basically don't teach haskell or ML, or even talk much about it, relative to how much they talk about, say, Java.
Not to dispute your general point, but CMU is an exception to this rule. There's a course, taught in SML, on basic functional programming, continuations, laziness, etc: http://www.cs.cmu.edu/~me/212/ This course is required for all CS majors and occurs fairly early in the sequence (freshman spring or sophomore fall). We also have a fairly hardcore introduction to type systems and operational semantics: http://www.cs.cmu.edu/~rwh/courses/ppl/ and a course on constructive logic: http://www.andrew.cmu.edu/course/15-317/ These electives are taken by ~30-40 students each, so I'd guess that somewhere between a third and a half of the undergrads go through one of them. And then there are electives, either mostly undergrad, like this course in typed compilation: http://www.cs.cmu.edu/~crary/hotc/ or mostly grad, like this course on logic programming (but there are always at least a handful of undergrads in the room): http://www.cs.cmu.edu/~fp/courses/lp/ And then there are two grad-level PL distribution-requirement courses (one on type systems, one on denotational semantics), which usually have a few interested undergrads. And many undergrads get involved in PL research as well. Maybe the real question is why so few universities have large groups of PL faculty to teach these courses? =) -Dan

Thanks for the correction - very informative! That'll teach me to go only to the OpenCourseWare site at MIT... On Sun, 27 Jan 2008, Dan Licata wrote:
On Jan27, Dipankar Ray wrote:
What I mean by this is that if I look at the CS programs at Berkeley, MIT, CMU, I don't see a huge emphasis on PL. Looking now at the MIT opencourseware offerings in EECS, I see no undergrad course that suggests that you'd learn anything about modern type theory.
3) most of the canonical US universities for CS (MIT, Berkeley, Stanford, CMU, etc) basically don't teach haskell or ML, or even talk much about it, relative to how much they talk about, say, Java.
Not to dispute your general point, but CMU is an exception to this rule.
There's a course, taught in SML, on basic functional programming, continuations, laziness, etc: http://www.cs.cmu.edu/~me/212/ This course is required for all CS majors and occurs fairly early in the sequence (freshman spring or sophomore fall).
We also have a fairly hardcore introduction to type systems and operational semantics: http://www.cs.cmu.edu/~rwh/courses/ppl/ and a course on constructive logic: http://www.andrew.cmu.edu/course/15-317/ These electives are taken by ~30-40 students each, so I'd guess that somewhere between a third and a half of the undergrads go through one of them.
And then there are electives, either mostly undergrad, like this course in typed compilation: http://www.cs.cmu.edu/~crary/hotc/ or mostly grad, like this course on logic programming (but there are always at least a handful of undergrads in the room): http://www.cs.cmu.edu/~fp/courses/lp/
And then there are two grad-level PL distribution-requirement courses (one on type systems, one on denotational semantics), which usually have a few interested undergrads. And many undergrads get involved in PL research as well.
Maybe the real question is why so few universities have large groups of PL faculty to teach these courses? =)
-Dan

On Jan 27, 2008, at 11:05 PM, Dipankar Ray wrote:
thanks for the correction - very informative! that'll teach me to just go to the opencourseware site at MIT only...
On that note, I'll point out that many (roughly half?) of the undergraduate CS majors at MIT do a 5-year combined bachelor's/master's program. Many of them take the graduate programming languages course (6.821), which has good coverage of semantics and type inference. That said, there's a good deal of skepticism about functional programming among many MIT faculty members, just as there is at Berkeley. If you want to hear an entertaining story some day, ask me in person about my Ph.D. thesis defense. -Jan-Willem Maessen

On 1/27/08, Dipankar Ray
Hello Jerzy and Bulat,
Thanks for your perspectives. Bulat, I can understand that you find it shocking that the folks at Moscow University still study Lisp, but I wouldn't be so quick to condemn them for being dinosaurs. After all, they just stopped teaching the SICP course (using Scheme) at MIT, and I don't believe that they replaced it with an intro to CS course that uses (say) Haskell or ML! Nor has Berkeley, far as I know.
Correct -- CS 61A at Berkeley (the 6.001 analogue there) is still taught in Scheme, and I suspect that anyone who wants to change that will have to pry the course out of Brian Harvey's cold, dead hands.
What I mean by this is that if I look at the CS programs at Berkeley, MIT, CMU, I don't see a huge emphasis on PL. Looking now at the MIT opencourseware offerings in EECS, I see no undergrad course that suggests that you'd learn anything about modern type theory.
As a former graduate student instructor for the undergrad compiler class at Berkeley, I can confirm that this statement is mostly but not quite true for Berkeley. (As Dan Licata pointed out, CMU is an exception here; I get the feeling MIT is more like Berkeley than CMU in this respect.) The compiler class there (CS 164) is required for all CS majors, and depending on who's teaching it in a given semester, students might well get to see formal presentations of both static and dynamic semantics for a simple object-oriented language (at least at the time when I taught a section of the class, though I see that right now their target language is Python). Granted, students have the option of either Java or C++ for the implementation language, which is less than ideal.
Of course we know here of success stories involving modern fp languages. But there is no haskell or ml book that has had close to the influence of K&R's C book. One might argue that adoption on that scale is not the goal of the haskell community (was it Kernighan, Ritchie, or Thompson's goal? I think not), but still, it's weird to me that:
At least based on my own experience and that of my peers, I think people from my generation who got interested in FP languages mostly got turned on by a particular teacher who was really into it, rather than by a particular book.
1) we're clearly on to something, but still
2) many smart people who are interested walk away frustrated (not so easy to learn (is the hardness necessary? perhaps?), relative to K&R, for example).
For #2, note my comment above about teachers. It's always easier and more motivating to learn when you have someone a little more experienced to guide and encourage you. But for FP, there are fewer people around to do the teaching, thus more people get discouraged. I feel like I'm saying something tautological here, though: if nobody teaches, nobody can learn. For some reason or another, I think being self-taught when it comes to FP is hard in a way that being self-taught when it comes to programming in general isn't -- perhaps it's for the same reason that being a self-taught mathematician is hard, though some people pull it off. Personally I think I'm pretty self-motivated, and am self-taught in a lot of other areas, but I don't think I ever would have learned Haskell on my own. So what does that say about programmers as a whole?
3) most of the canonical US universities for CS (MIT, Berkeley, Stanford, CMU, etc) basically don't teach haskell or ML, or even talk much about it, relative to how much they talk about, say, Java.
It's one thing that companies don't move forward; yet another thing that Universities don't either. Why is that? Why, in 2008, is Java taught more than Haskell?
There's really only one reason for this: at least in the United States, universities are becoming more and more like businesses, and faculty feel they have to give students what they want in order to "compete". Computer science undergrads, as a whole, want to learn Java or C++ "so they can learn something that'll be useful and help them get a job." The faculty know this is ridiculous, but a lot of them buckle under anyway because they need to boost their enrollments.

The notion of the university as a place where students pay faculty members so that they might benefit from the vaster experience and judgment of the faculty -- which means deferring certain decisions to those who are wiser than you -- seems to be going out the window along with paper registration cards.

Moreover: At Berkeley, the faculty didn't teach Haskell or ML because they didn't believe in Haskell or ML. If you personally don't think that a language is ever going to be adopted -- if you think that it's elegant but will never be practical -- it's going to be hard for you to sell it to a 300-person hall full of undergrads, most of whom are likely to start checking email the moment you say something that bores them.

Disclaimer: This entire message consists of my personal opinions and does not represent the opinions of anybody else, and probably won't represent my opinions either in another day or two.

Cheers, Tim -- Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt "don't you understand / in the day to day / in the face to face / i have to act just as strong as i can / just to preserve a place where i can be who i am" -- Ani DiFranco

On Sun, 27 Jan 2008, Tim Chevalier wrote:
On 1/27/08, Dipankar Ray
wrote: 3) most of the canonical US universities for CS (MIT, Berkeley, Stanford, CMU, etc) basically don't teach haskell or ML, or even talk much about it, relative to how much they talk about, say, Java.
It's one thing that companies don't move forward; yet another thing that Universities don't either. Why is that? Why, in 2008, is Java taught more than Haskell?
There's really only one reason for this: at least in the United States, universities are becoming more and more like businesses, and faculty feel they have to give students what they want in order to "compete". Computer science undergrads, as a whole, want to learn Java or C++ "so they can learn something that'll be useful and help them get a job."
Same here in Germany. Universities are coming under the control of industry leaders (see "Hochschulfreiheitsgesetz"), thanks to Bertelsmann et al., and universities are considered producers of programmers for industry. With this reasoning we offered an introduction to C for math students.

Tim Chevalier/Paul Johnson about "cheap computers, expensive programmers"
This is true only if talking to people in high-income nations.
Even in low-income nations, its only false in the short term. If you have skilled programmers with computers and Internet connections then their wages inflate to the world norm. IIRC India is seeing 20%/year wage inflation...
It's true that India seems to be going in that direction, but personally I don't feel I have the background or temerity to suggest that it will definitely happen for the rest of the world.
The issue is less related to the actual income, but to the global politics, sometimes doctrinal. Not always the "invisible hand" of market may easily change things, and if a given nation/country has historical strong views on the "power of the people", the evolution is different than at your place. India doesn't seem to boast that they are numerous and powerful. Chinese do... We shall see.
You may perhaps remember (which you won't, because you are too young) the glorious times when computers became a reality even in Soviet Union. They had at that time plenty of really good mathematicians. But the totalitarian view of the science, plus the nationalistic proudness, made them (the rulers not the scientists...) think and say that with so many good people, there is no need to develop the programming automated tools.
They neglected the programming languages. Russia and their satellites became a kind of desert here not only because of economical problems... Not wishing to refute your general point, I can only note that U.S.S.R did have its own school of computer science in general, and of developing
programming language implementations in particular. There were Fortran and Algol compilers; there is Refal, after all, which is a purely Soviet invention (and which, for that matter, is still being taught in several Russian universities). So in this particular respect you are definitely wrong. -- S. Y. A(R). A.

Hello Paul, Saturday, January 26, 2008, 11:03:30 PM, you wrote:
* Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature.
This is true only if talking to people in high-income nations. Even in low-income nations, its only false in the short term. If you have skilled programmers with computers and Internet connections then their wages inflate to the world norm. IIRC India is seeing 20%/year wage inflation for comp-sci graduates.
They know English. And the Internet and modern computers are not so easily available in poor countries. Try, for example, to find programmers from Uzbekistan on the net :D -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

* Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature. This is true only if talking to people in high-income nations.
Is it? Maybe you're right. But historically, computers have been available at all kinds of price points, so people chose the price point that fit them. So, for the last 15 years or so, computers (in the wealthy countries) have already been chosen to be cheaper than programmers. Is there any reason to think that the same forces aren't at play in lower-income nations? After all, cheap (typically second-hand) computers are easy to come by. Stefan

On 1/26/08, Stefan Monnier
* Say "computers are cheap but programmers are expensive" whenever explaining a correctness or productivity feature. This is true only if talking to people in high-income nations.
Is it? Maybe you're right.
Yes -- consider the OLPC project (and its competitors). In some developing nations, $200 for a laptop is still a *lot* to pay (the laptop I'm typing this on cost $1400, purchased on a government grant, and that purchase was treated as nothing.) Labor is a lot cheaper in those places. And there's not much in the way of big government funding (whether for universities or companies) to pay for any of it.
But historically, computers have been available at all kinds of price ranges, so people chose the price point that fit them. So, for the last 15 years or so already computers have been chosen (in the wealthy countries) to be cheaper than programmers.
Is there any reason to think that the same forces aren't at play in lower-income nations? After all, cheap (typically second hand) computers are easy to come by.
Not with the amount of computing power that computers running modern applications tend to have; a lot of places don't even have reliable *electricity* (so in that case, lots of people and limited machines could be *good*, if the machines aren't working all the time), etc. I don't really know enough to give a more complete answer to your question. But my original point is that saying labor is always expensive and hardware is always cheap by comparison is a culturally biased statement, at least right now, on January 26, 2008. Cheers, Tim -- Tim Chevalier * http://cs.pdx.edu/~tjc * Often in error, never in doubt "I eat too much / I laugh too long / I like too much of you when I'm gone." -- Ani DiFranco

Hello Stefan, Sunday, January 27, 2008, 1:14:46 AM, you wrote:
But historically, computers have been available at all kinds of price ranges, so people chose the price point that fit them. So, for the last 15 years or so already computers have been chosen (in the wealthy countries) to be cheaper than programmers.
Is there any reason to think that the same forces aren't at play in lower-income nations? After all, cheap (typically second hand) computers are easy to come by.
What do you mean by a cheap computer? One which can't run modern software at all, something like the first IBM PC? :) In poor countries there simply aren't second-hand computers, because local sources don't exist (to have 100k second-hand computers, people would have had to buy the same 100k new computers a few years ago), and the global import of second-hand computers is non-existent. -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

Simon Peyton-Jones wrote:
1. Small examples of actual code.
I particularly like the lazy change-counting example (it also works for picking items off a menu). The code below shows three approaches:
- a function for computing the coins used in each way, as a verbose list
- a function for computing just the total number of ways
- a simple Monoid that does both at once, with a pretty summary display
And it has a short but user-friendly main function that drives it.

The method used is simple. It considers each value of coin in turn; this loop is done by the foldr. The value being folded is a list where the index into the list is an amount for which change is being made; the value at that index is the list or count of the ways to make that amount using the coins considered so far. These exploit laziness, since the returned lists are infinite and since 'result' is defined recursively for each different value of coin. The Monoid definition is a clear abstraction or generalization of the first two functions.
-- This demonstrates a way to find every way to make change for a
-- given total using a set of coins.
--
-- By Chris Kuklewicz, Public Domain
import System.Environment(getArgs)
import Control.Exception as E(catch)
import Control.Monad(when)
import Data.List(group)
import Data.Monoid(Monoid(mempty,mappend))
computeListOfWays :: [Int] -> [[[Int]]]
computeListOfWays coins = foldr includeValue noValues coins
  where
    noValues = [[]] : repeat []
    includeValue value oldResult =
      let (unchangedResult,changedResult) = splitAt value oldResult
          result = unchangedResult ++ zipWith (++) changedResult (map addCoin result)
          addCoin = map (value:)
      in result
computeCountOfWays :: [Int] -> [Integer]
computeCountOfWays coins = foldr includeValue noValues coins
  where
    noValues = 1 : repeat 0
    includeValue value oldResult =
      let (unchangedResult,changedResult) = splitAt value oldResult
          result = unchangedResult ++ zipWith (+) changedResult result
      in result

computeWays :: [Int] -> [Ways]
computeWays coins = foldr includeValue noValues coins
  where
    noValues = Ways [[]] 1 : repeat mempty
    includeValue value oldResult =
      let (unchangedResult,changedResult) = splitAt value oldResult
          result = unchangedResult ++ zipWith mappend changedResult (map addCoin result)
          addCoin (Ways list count) = Ways (map (value:) list) count
      in result

data Ways = Ways [[Int]] Integer

instance Monoid Ways where
  mempty = Ways [] 0
  mappend (Ways list1 count1) (Ways list2 count2) = Ways (list1++list2) (count1+count2)

instance Show Ways where
  show (Ways list count) = unlines (map summary list) ++ "Count of Ways = " ++ show count ++ "\n"
    where summary = show . map (\sub -> (length sub,head sub)) . group

coins_US :: [Int]
coins_US = [1,5,10,25,50]

coins_UK :: [Int]
coins_UK = [1,2,5,10,20,50]

main = do
  args <- getArgs
  case args of
    [] -> error "Pass a number of cents for which to count ways of making change"
    [x] -> do
      n <- E.catch (readIO x) (const (error "The argument passed needs to be a number"))
      when (n<0) (error "The argument passed needs to be a non-negative number")
      print (computeWays coins_US !! n)
    _ -> error "Too many parameters, need just one number"
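A small usage sketch, assuming the module above is saved as Change.hs (the file name is arbitrary) and loaded into GHCi; the counts are easy to check by hand, since there are four ways to make 11 US cents and eleven ways to make 10 UK pence:

ghci> computeCountOfWays coins_US !! 11
4
ghci> length (computeListOfWays coins_US !! 11)
4
ghci> computeCountOfWays coins_UK !! 10
11

Running the whole program, for example with "runghc Change.hs 11", prints each way together with the total count via the Ways Monoid.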

On Wed, 2008-01-23 at 13:29 +0000, Simon Peyton-Jones wrote:
Friends
Over the next few months I'm giving two or three talks to groups of *non* functional programmers about why functional programming is interesting and important. If you like, it's the same general goal as John Hughes's famous paper "Why functional programming matters".
Audience: some are technical managers, some are professional programmers; but my base assumption is that none already know anything much about functional programming.
Now, I can easily rant on about the glories of functional programming, but I'm a biased witness -- I've been doing this stuff too long. So this message is ask your help, especially if you are someone who has a somewhat-recent recollection of realising "wow, this fp stuff is so cool/useful/powerful/etc".
I'm going to say some general things, of course, about purity and effects, modularity, types, testing, reasoning, parallelism and so on. But I hate general waffle, so I want to give concrete evidence, and that is what I particularly want your help with. I'm thinking of two sorts of "evidence":
1. Small examples of actual code. The goal here is (a) to convey a visceral idea of what functional programming *is*, rather than just assume the audience knows (they don't), and (b) to convey an idea of why it might be good. One of my favourite examples is quicksort, for reasons explained here: http://haskell.org/haskellwiki/Introduction#What.27s_good_about_functional_p...
But I'm sure that you each have a personal favourite or two. Would you like to send them to me, along with a paragraph or two about why you found it compelling? For this purpose, a dozen lines of code or so is probably a maximum.
2. War stories from real life. eg "In company X in 2004 they rewrote their application in Haskell/Caml with result Y". Again, for my purpose I can't tell very long stories; but your message can give a bit more detail than one might actually give in a presentation. The more concrete and specific, the better. E.g. what, exactly, about using a functional language made it a win for you?
If you just reply to me, with evidence of either kind, I'll glue it together (regardless of whether I find I can use it in my talks), and put the result on a Wiki page somewhere. In both cases pointers to blog entries are fine.
This recent blog entry about PLT Scheme may be useful: http://blog.plt-scheme.org/2007/11/getting-rid-of-set-car-and-set-cdr.html
participants (41)
- ajb@spamcop.net
- Alex Young
- Andrew Cheadle
- Artem V. Andreev
- Brian Sniffen
- Bulat Ziganshin
- ChrisK
- Conal Elliott
- Dan Licata
- Derek Elkins
- Dipankar Ray
- Don Stewart
- Evan Laforge
- Hans van Thiel
- Henning Thielemann
- Isaac Dupree
- Jan-Willem Maessen
- jerzy.karczmarczuk@info.unicaen.fr
- Jonathan Cast
- Lennart Augustsson
- Lutz Donnerhacke
- Maarten Hazewinkel
- Mattias Bengtsson
- Maxime Henrion
- Michael Reid
- Michael Vanier
- Michał Pałka
- Paul Johnson
- Peter Hercek
- Robin Green
- Ryan Dickie
- Sebastian Sylvan
- Simon Peyton-Jones
- Stefan Kersten
- Stefan Monnier
- Sterling Clover
- Thomas Schilling
- Tim Chevalier
- Victor Nazarov
- Wolfgang Jeltsch
- Yaakov Nemoy