
C'mon Andrew - how about some facts, references?
2009/12/10 Andrew Coppin
1. Code optimisation becomes radically easier. The compiler can make very drastic alterations to your program without changing its meaning. (For that matter, the programmer can more easily chop the code around too...)
Which code optimizations?
From a different point of view, whole-program compilation gives plenty of opportunity for re-ordering transformations and optimisation - Stalin (now Stalinvlad) and MLton often generated the fastest code for their respective (strict, impure) languages, Scheme and Standard ML.
Feel free to check the last page of the report here before replying with the Shootout (GHC still does pretty well, though, easily beating Gambit and Bigloo): http://docs.lib.purdue.edu/ecetr/367/
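One concrete answer to "which optimisations?" is that purity makes equational rewrite rules sound. A minimal sketch (the rule name and module are mine, not from the thread):

```haskell
module Main where

-- GHC may apply this rule wherever the left-hand side appears (with -O).
-- It is only sound because the argument functions of map cannot have side
-- effects; in an impure language, fusing the two passes could reorder
-- observable effects.
{-# RULES "map/map/fuse" forall f g xs. map f (map g xs) = map (f . g) xs #-}

main :: IO ()
main = print (map (+ 1) (map (* 2) [1 .. 5 :: Int]))
```

Whether the rule fires or not, the program prints the same list - which is exactly the guarantee purity buys.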
2. Purity leads more or less directly to laziness, which has several benefits:
Other way round, no?
2a. Unnecessary work can potentially be avoided. (E.g., instead of a function for getting the first solution to an equation and a separate function to generate *all* the solutions, you can just write the latter and laziness gives you the former by magic.)
Didn't someone quote Heinrich Apfelmus on this list in the last week or so? http://www.haskell.org/pipermail/haskell-cafe/2009-November/069756.html "Well, it's highly unlikely that algorithms get faster by introducing laziness. I mean, lazy evaluation means to evaluate only those things that are really needed and any good algorithm will be formulated in a way such that the unnecessary things have already been stripped off." http://apfelmus.nfshost.com/quicksearch.html
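For what it's worth, the "first solution for free" claim is about avoided work, not better asymptotics. A toy sketch (the names and the predicate are invented here):

```haskell
-- allSolutions is written once, with no stopping condition baked in.
allSolutions :: [Int]
allSolutions = filter (\n -> n * n > 50) [1 ..]

-- Laziness means head only forces the list as far as the first match;
-- the rest of the (conceptually infinite) list is never computed.
firstSolution :: Int
firstSolution = head allSolutions

main :: IO ()
main = print firstSolution
```

Which rather supports Apfelmus's point: the algorithm is no faster, you have merely avoided evaluating the tail.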
2b. You can define brand new flow control constructs *inside* the language itself. (E.g., in Java, a "for" loop is a built-in language construct. In Haskell, "forM_" is a function in Control.Monad. Just a plain ordinary function that anybody could write.)
Psst, heard about Scheme & call/cc?
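To make the claim concrete - and note that higher-order functions, not laziness, do most of the work here, which is how Scheme manages the same trick - here is a hypothetical C-style "for" written as a plain function:

```haskell
-- Nothing below is built-in syntax; for is an ordinary function that
-- takes a start value, a continuation test, a step, and a loop body.
for :: Monad m => a -> (a -> Bool) -> (a -> a) -> (a -> m ()) -> m ()
for start cond step body = go start
  where
    go i
      | cond i    = body i >> go (step i)
      | otherwise = return ()

main :: IO ()
main = for (0 :: Int) (< 3) (+ 1) print
```

Running it prints 0, 1 and 2 on separate lines, just as the equivalent Java loop would.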
2c. The algorithms for generating data, selecting data and processing data can be separated. (Normally you'd have to hard-wire the stopping condition into the function that generates the data, but with lazy "infinite" data structures, you can separate it out.)
Granted. But some people have gone quite some way in the strict world, e.g.: http://users.info.unicaen.fr/~karczma/arpap/FDPE05/f20-karczmarczuk.pdf
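The separation being claimed reads like this in Haskell (a toy example of mine):

```haskell
-- Generate, select, process: three independent pieces. The generator
-- carries no stopping condition; takeWhile supplies it separately.
powersOfTwo :: [Integer]
powersOfTwo = iterate (* 2) 1             -- generate (conceptually infinite)

selected :: [Integer]
selected = takeWhile (< 100) powersOfTwo  -- select

main :: IO ()
main = print (sum selected)               -- process
```

Laziness means the infinite generator and the finite selection compose without the generator ever knowing where it will be cut off.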
2d. Parallel computation. This turns out to be more tricky than you'd think, but it's leaps and bounds easier than in most imperative languages.
Plenty of lazy and strict, pure and impure languages in this survey: http://www.risc.uni-linz.ac.at/people/schreine/papers/pfpbib.ps.gz
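For reference, the basic Haskell idiom in question - a sketch using par and pseq from GHC.Conc in base (Control.Parallel in the parallel package re-exports them); purity is what makes sparking the extra evaluation safe:

```haskell
import GHC.Conc (par, pseq)

-- left is sparked for possible parallel evaluation while right is
-- evaluated; because both are pure, the result is the same under any
-- interleaving (or with no parallelism at all).
parSum :: [Int] -> Int
parSum xs = left `par` (right `pseq` (left + right))
  where
    (as, bs) = splitAt (length xs `div` 2) xs
    left     = sum as
    right    = sum bs

main :: IO ()
main = print (parSum [1 .. 100])
```

The tricky part Andrew alludes to is getting a speed-up in practice (spark granularity, forcing enough of each half), not correctness.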
3. It's much harder to accidentally screw things up by modifying a piece of data from one part of the program which another part is still actually using. (This is somewhat similar to how garbage collection makes it harder to free data that's still in use.)
In a pure language I'd like to think it's impossible...

Best wishes,
Stephen