
I understood the "simple but not scalable and not particularly fast" claim *NOT* as a claim about imperative languages, but in context as a claim about using two-dimensional arrays of booleans to implement Conway's Life. One message in this thread has already pointed to a solution (in C) that in effect uses a bit-packed sparse representation to achieve high speed. The point is that that approach takes time (and space) per generation proportional to the number of occupied cells, not to the size of the space, and that is the basis of the "scaling" claim.

A simple Haskell analogue, with the right asymptotic cost, would be to represent a Life generation as a list of (row_number, [col_no, ..., col_no]) pairs in strictly ascending order of row number, where each pair holds the numbers of the occupied columns in strictly ascending order. The space is (number of occupied cells * a constant) + (number of occupied rows * another constant). Computing the next generation then amounts to a bunch of 3-way merges.
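To make that concrete, here is a minimal sketch of a sparse Life step. For brevity it flattens the (row, [cols]) pairing into a single strictly ascending list of (row, col) cells, which is equivalent, and it tallies neighbour counts by sort-and-group rather than hand-fused 3-way merges, so it costs an extra log factor over the pure-merge version; the names (`Gen`, `step`, `neighbours`) are mine, not from the thread.

```haskell
import Data.List (group, sort)

type Cell = (Int, Int)   -- (row, col)
type Gen  = [Cell]       -- live cells, in strictly ascending order

-- The eight neighbours of a cell.
neighbours :: Cell -> [Cell]
neighbours (r, c) =
  [ (r + dr, c + dc)
  | dr <- [-1, 0, 1], dc <- [-1, 0, 1], (dr, dc) /= (0, 0) ]

-- One generation.  Every cell alive next generation has a live
-- neighbour now, so it suffices to tally the neighbour counts of the
-- current live cells and merge that tally against the live-cell list.
step :: Gen -> Gen
step cells = go counts cells
  where
    -- Ascending (cell, neighbour count) pairs; work is proportional to
    -- the number of occupied cells (times a log factor for the sort).
    counts = map (\g -> (head g, length g)) . group . sort
           $ concatMap neighbours cells

    -- Merge the count list against the ascending live-cell list.
    go ((cell, n) : rest) live
      | n == 3        = cell : go rest live'   -- birth or survival
      | n == 2, isLive = cell : go rest live'  -- survival only
      | otherwise     = go rest live'
      where
        live'  = dropWhile (< cell) live
        isLive = case live' of { (l : _) -> l == cell; [] -> False }
    go [] _ = []
```

As a sanity check, the glider `[(0,1),(1,2),(2,0),(2,1),(2,2)]` steps to `[(1,0),(1,2),(2,1),(2,2),(3,1)]`, and a 2x2 block is a fixed point.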