
On Dec 11, 2009, at 3:50 AM, John D. Earle wrote:
> David, think of the machine as being the earth and laziness is in the clouds.
It reads so much better as "laziness is in the heavens".
> Strict evaluation is closer to the machine.
It doesn't have to be. Graph reduction hardware has been built in the past and could be again.
> The relationship between a lazy algorithm and the earth is abstract; hence, it will make creating algorithms, especially efficient ones, difficult.
All programming (except possibly assembly coding, and given the existence of optimising assemblers, even that) is abstract. In fact it had better be abstract, because I don't want to write different programs for SPARCs, PowerPCs, x86s, &c. I'd really rather not even write different programs for 32-bit and 64-bit environments. Given the massive amounts of stuff happening in modern processors (like the dynamic register renaming &c in x86s, deep write buffers, out-of-order execution, short SIMD vector instructions), even simple C code is a *long* way from what is really happening. To use your analogy, if the machine is the earth, C programming is floating around in a balloon _just under_ the clouds.

Paradoxically, working at an abstract level can make creating efficient algorithms MUCH EASIER. Recently, I was looking at some Java code from a new book about numerical algorithms for scientists and engineers in Java. Because I have lots of uses for the Singular Value Decomposition, I chose to look at the SVD code.

    Version                  Speed   Reason
    Java from book             1
    rewritten in C             2     a[i][j] doesn't indirect twice
    transpose the matrices     4     goes with the grain of memory
    use LAPACK                 7     algorithm, blocking for cache?

Working at an abstract level ("I want (u,s,v) = svd a") instead of rolling my own low-level code means that I can get _faster_ code. (There's no reason why Java couldn't call LAPACK through JNI, and if I had much numeric code to do in Java, that's what I'd do.)

With the SVD (and similar things) as basic building blocks, it's much easier to develop interesting algorithms. For example, if I wanted to write my own code for Correspondence Analysis, it would be far more sensible to develop the algorithm in R (the free S) or Matlab or Octave, and _not_ write my own array loops, than to write in C. I'd probably get something much faster.

Oh, and there's a new SVD algorithm being worked on for LAPACK, which is available for real numbers only in the current LAPACK release. Supposedly it's not only more accurate but faster. Change one line in my code to call that, and I get a very nice benefit from abstraction!
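To put the same point in Haskell terms, here is a minimal sketch, assuming the hmatrix package (whose Numeric.LinearAlgebra module calls into LAPACK underneath); the rank1 helper is my own illustration, not code from the book.

    import Numeric.LinearAlgebra

    -- Best rank-1 approximation of a matrix: one abstract call to svd,
    -- with the blocked, cache-friendly work done inside LAPACK.
    rank1 :: Matrix Double -> Matrix Double
    rank1 a = scale s1 (u1 `outer` v1)
      where
        (u, s, v) = svd a          -- full SVD; singular values in decreasing order
        s1 = head (toList s)       -- largest singular value
        u1 = head (toColumns u)    -- corresponding left singular vector
        v1 = head (toColumns v)    -- corresponding right singular vector

If a better SVD routine comes along, the one line "(u, s, v) = svd a" is the only thing that has to change; everything built on top of it gets the benefit for free.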
> All of this is still a work in progress. The Haskell creed appears to be, "This is the way, so stick to it!"
There is no Haskell "creed". People use Haskell for different reasons. Many Haskell programmers are totally pragmatic about it. Remember, PROGRAMMING IS HARD. If Haskell lets people _write_ correct programs faster, that's an enormous advantage, even if those programs don't _run_ faster. Once you have something actually working, you can worry about speed.

The Java code I mentioned above came from a *recently* published book. Here's someone willing to put up with a factor-of-7 loss of performance in order to get the other benefits of Java, and who persuaded a publisher that a lot of other people will be happy with it too.
> The idea appears to be that by sticking to the program the problems will be overcome in time and we will be left with all the glorious goodness. At one time it was not possible to achieve the sort of efficiency that Haskell achieves as a matter of course.
Abstraction makes it comparatively easy to switch over from String to ByteString. Purity means that taking advantage of multicore machines is already much easier than in C (with pthreads), C++ (with TBB), or Java (with itself). To be honest, I don't expect getting high performance out of Haskell to ever be easy. But then, getting high performance out of C or Fortran isn't easy either.
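To show what the purity point buys you in practice, here is a small sketch using the parallel package's Control.Parallel.Strategies; the function names are my own, chosen only for the example.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- Because 'costly' is pure, evaluating the list elements in parallel
    -- cannot change the answer; only the schedule and the wall-clock time
    -- change. Build with -threaded and run with +RTS -N to use all cores.
    sumCostly :: [Integer] -> Integer
    sumCostly xs = sum (parMap rdeepseq costly xs)
      where
        costly n = product [1 .. n] `mod` 1000003   -- stand-in for real work

Compare that with getting the locking right around shared mutable state with pthreads or TBB: here there is nothing to lock, precisely because nothing is mutated.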