
I'm afraid the _meta_ programming aspect of the image processing project may be overlooked. Joel Reymont wrote:
I think the issue wasn't using functional programming for large image processing, it was using Haskell. OCaml is notoriously fast and strict. Haskell/GHC is... lazy.
Well, in the raster image processing project, a dialect of OCaml (MetaOCaml) was used to _generate_ code. If the author used offshoring (which I think they did later), the generated code was in C. That C code can be used stand-alone or be linked with other C (or Java, etc.) code. Our FFT paper did exactly that: our MetaOCaml program produced C code, which we plugged as-is into the FFTW testing framework (written in pure C) for benchmarking. By 'plugged' I mean moving the C code file from one directory to another. We used both GCC and Intel C compilers for benchmarking. Offshoring can also produce Fortran code. It is quite feasible to generate Verilog, so we could program an FPGA and get image processing even faster.

The generator does not have to be a speed demon; whether the generator is lazy or strict is not that relevant. What really matters is whether the generator can easily be judged correct. It matters immensely that the generated code has correctness properties: at the very least, it should be well-formed and well-typed (and preferably have other properties as well, such as space bounds). It is disheartening to see errors when compiling the generated code (as happens with some other generators), because it is very difficult to trace such errors back to the generator.

Here is a reference to another, quite large and very real project, http://www.spiral.net/, which generates the fastest known FFT, DCT, etc. codes for a variety of architectures. The source language is a DSL for linear algebra.

The point is that highest-performance computing nowadays is all about code generation/meta-programming. And in this area, functional programming and Haskell have a definite advantage.
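To make the staging idea concrete, here is a minimal sketch in BER MetaOCaml syntax; it is a hypothetical illustration (the classic staged power function), not the image-processing or FFT generator itself. Brackets .< e >. quote code and .~ e splices code in; the generated code is well-typed by construction, which is exactly the correctness property argued for above.

    (* The classic staged power function: a minimal illustration,
       not the generator from the paper.
       .< e >. quotes code; .~ e splices code in. *)
    let square (x : int) : int = x * x

    let rec spower (n : int) (x : int code) : int code =
      if n = 0 then .< 1 >.
      else if n mod 2 = 0 then .< square .~(spower (n / 2) x) >.
      else .< .~x * .~(spower (n - 1) x) >.

    (* Specializing to the exponent 7 at generation time yields
       ordinary, fully unrolled, well-typed code:
       .< fun x -> x * square (x * square (x * 1)) >.  *)
    let power7_code : (int -> int) code =
      .< fun x -> .~(spower 7 .< x >.) >.

In BER MetaOCaml, Runcode.run power7_code turns the generated code into a plain int -> int function; in the offshoring setup described above, first-order generated code of this kind is what gets translated to C (or Fortran) instead of being run in-process.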