Game of Life in Haskell

Hi Cafe.

I've made a basic Game of Life implementation with Haskell and OpenGL: https://github.com/sordina/Life/

The basic premise is to generate a random snapshot, then iterate the successor function on it to create an infinite list of snapshots, then output them using OpenGL. I haven't really run into any major issues aside from not being able to figure out how to automatically rerun the display function; however, I believe this is an OpenGL problem, not something on the Haskell side (press 'n' to move to the next snapshot for now).

To run the game:

cabal configure && cabal build && ./dist/build/life/life <size of square life matrix>

I'm intending to improve the performance and add more advanced features, but I thought I'd get some feedback first. Can anyone see a way to make this code more idiomatic, or any optimizations I might have missed? I'm still fairly new to Haskell and I haven't really come to grips with monad transformers and the like yet, so if any advanced techniques are applicable here it would really help me link what I think I understand to reality :-)

Also, is something like this worth uploading to Hackage, or should I leave it on GitHub?

Thanks guys.
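In outline, the structure is something like this (a simplified, naive sketch of the idea, not the actual code from the repo; all names here are made up for illustration):

```haskell
import System.Random (StdGen, mkStdGen, randoms)

type Snapshot = [[Bool]]  -- square matrix of cell states

-- Generate a random n-by-n snapshot.
randomSnapshot :: Int -> StdGen -> Snapshot
randomSnapshot n gen = take n (chunks (randoms gen))
  where
    chunks xs = take n xs : chunks (drop n xs)

-- The successor function: the standard Life rule on a bounded board.
-- (!!) makes this O(n^3) per generation; fine for a sketch, not for speed.
successor :: Snapshot -> Snapshot
successor b =
  [ [ rule (b !! y !! x) (liveNeighbours x y) | x <- [0 .. n - 1] ]
  | y <- [0 .. n - 1] ]
  where
    n = length b
    liveNeighbours x y = length
      [ () | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0)
           , let x' = x + dx, let y' = y + dy
           , x' >= 0, x' < n, y' >= 0, y' < n
           , b !! y' !! x' ]
    rule alive c = c == 3 || (alive && c == 2)

-- The infinite list of snapshots described above.
snapshots :: Int -> StdGen -> [Snapshot]
snapshots n gen = iterate successor (randomSnapshot n gen)
```

The display loop then just walks down `snapshots size gen`, rendering one element per frame.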

2010/2/2 Lyndon Maydwell
I'm intending to improve the performance, and add more advanced features, but I thought I'd get some feedback first. Can anyone see a way to make this code more idiomatic, or any optimizations I might have missed?
Arrays are not fully "idiomatic" Haskell, as they are hard to update functionally. Their use also incurs a quadratic update cost for a simple scene with two gliders flying in different directions. So I advise you to use the Data.Map.Map and Data.Set.Set data structures. How? It's an easy question. ;)

I chose the array mainly for the fast lookup time compared to lists,
are you suggesting something like creating a "Map (X,Y) Health"? I'm
not currently updating any structures, rather creating the successor
from scratch. I can see how the map may work very well for the sparse
nature of non-early life games now that I think of it.

2010/2/2 Lyndon Maydwell
I chose the array mainly for the fast lookup time compared to lists, are you suggesting something like creating a "Map (X,Y) Health"? I'm not currently updating any structures, rather creating the successor from scratch. I can see how the map may work very well for the sparse nature of non-early life games now that I think of it.
Because your Health is basically Bool, you can use Set (X,Y) for the set of live cells. Creation of a new Array is (without knowing some subtle details) O(max coordinate difference between live cells). Creation of a new Set (X,Y) is O(N log N) (N = number of live cells). Most of the cells in Life are empty, so the Set/Map approach is faster. It also leads to very concise code. Actually, your solution with arrays is the one an imperative programmer will most often come up with. It is simple, but not scalable and not particularly fast.
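A minimal sketch of what I mean (the names `step` and `neighbours` are just for illustration, not from Lyndon's repo):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Cell = (Int, Int)

neighbours :: Cell -> [Cell]
neighbours (x, y) =
  [ (x + dx, y + dy) | dx <- [-1, 0, 1], dy <- [-1, 0, 1], (dx, dy) /= (0, 0) ]

-- One generation: tally the neighbour counts contributed by live cells,
-- then keep exactly the cells the Life rule says are alive next.
step :: Set.Set Cell -> Set.Set Cell
step live = Map.keysSet (Map.filterWithKey rule counts)
  where
    counts = Map.fromListWith (+)
      [ (n, 1 :: Int) | c <- Set.toList live, n <- neighbours c ]
    rule cell n = n == 3 || (n == 2 && cell `Set.member` live)
```

Cost per generation is O(N log N) in the number of live cells, independent of the board size, and the board is effectively infinite.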

I'm avoiding hard-coding bools anywhere as I intend to allow
fuzzy-representations at some point.

2010/2/2 Lyndon Maydwell
I'm avoiding hard-coding bools anywhere as I intend to allow fuzzy-representations at some point.
What is the meaning of a fuzzy Game of Life? Where can I read about it? Or are you planning to average the field over some time interval? In the latter case you will still be faster using Sets, converting them into Maps using singleton, doing some unionsWith, and then map with (/timeInterval).
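For example, something like this (`averageWindow` is a name I just made up; `gens` would come from iterating a Set-based successor):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Cell = (Int, Int)

-- Average cell occupancy over the first k generations of a run:
-- each generation becomes a Map of 1s, the Maps are summed with
-- unionsWith (+), and the totals are divided by the window length.
averageWindow :: Int -> [Set.Set Cell] -> Map.Map Cell Double
averageWindow k gens =
  Map.map (/ fromIntegral k) $
    Map.unionsWith (+) [ Map.fromSet (const 1) g | g <- take k gens ]
```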

On Tuesday 02 February 2010 16:10:05 Serguey Zefirov wrote:
Actually, your solution with arrays is the one an imperative programmer will most often come up with. It is simple, but not scalable and not particularly fast.
What gave you that impression? -- Dr Jon Harrop, Flying Frog Consultancy Ltd. http://www.ffconsultancy.com/?e

2010/2/2 Jon Harrop
On Tuesday 02 February 2010 16:10:05 Serguey Zefirov wrote:
Actually, your solution with arrays is the one an imperative programmer will most often come up with. It is simple, but not scalable and not particularly fast.
What gave you that impression?
Discussion in the Russian Smalltalk User Group. A solution in APL: http://catpad.net/michael/apl/ Some experience before (my own first implementation and some of my friends'). Or are you asking about scalability and speed?

On Tuesday 02 February 2010 18:23:59 Serguey Zefirov wrote:
Or are you asking about scalability and speed?
I meant the scalability and speed. An imperative solution should be simpler, more scalable and faster than any purely functional solution.

On Tue, Feb 2, 2010 at 11:54 AM, Jon Harrop
I meant the scalability and speed. An imperative solution should be simpler, more scalable and faster than any purely functional solution.
That's a pretty strange comment. Why do you think an imperative solution is simpler, faster and more scalable? If functional programming can't provide any one of those, it's not worth anything. Based on the membership of this list, the interest in it these days, and the fact that I've seen many occasions where functional programming lends itself to a faster implementation (in terms of time to implement and test) that's readable sooner than a lot of imperative approaches, I find your claim quite contrary, and it smells of "trollishness".

Dave
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

leimy2k:
That's a pretty strange comment. Why do you think an imperative solution is simpler, faster and more scalable?
Don't feed the troll^[1]

-- Don

[1]. Top 10 signs you have a troll (pick any you think relevant)
- http://www.haskell.org/haskellwiki/Protect_the_community/Notes#Identify_pois...
- Conceit
  + won't engage/argue with other positions
  + makes sweeping claims, empty statements about a project's success
  + reopens topics from years ago

I understood the "simple but not scalable and not particularly fast" claim *NOT* as a claim about imperative languages, but in context as a claim about using two-dimensional arrays of booleans to implement Conway's Life.

One message in this thread has already pointed to a solution (in C) that in effect uses a bit-packed sparse representation to achieve high speed. The point is that that approach takes time (and space) per generation proportional to the number of occupied cells, not the size of the space, and that is the basis of the "scaling" claim.

A simple Haskell analogue, which has the right asymptotic cost, would be to represent a Life generation by a list of (row_number, [col_no, ..., col_no]) pairs in strictly ascending order of row number, where each pair contains a list of the numbers of the occupied columns in strictly ascending order. The space is (number of occupied cells * a constant) + (number of occupied rows * another constant). Computing the next generation then amounts to a bunch of 3-way merges.
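A quick sketch of that representation (hedged: for brevity the neighbour tally here goes through Data.Map instead of hand-written 3-way merges, which costs an extra log factor; the data type is the point):

```haskell
import qualified Data.Map.Strict as Map

-- A generation: rows in strictly ascending order, each with its
-- occupied columns in strictly ascending order; empty rows omitted.
type Gen = [(Int, [Int])]

step :: Gen -> Gen
step g = regroup [ rc | (rc, n) <- Map.toAscList counts, alive rc n ]
  where
    -- each live cell contributes 1 to each of its eight neighbours
    counts = Map.fromListWith (+)
      [ ((r + dr, c + dc), 1 :: Int)
      | (r, cs) <- g, c <- cs
      , dr <- [-1, 0, 1], dc <- [-1, 0, 1], (dr, dc) /= (0, 0) ]
    alive rc n = n == 3 || (n == 2 && member rc)
    member (r, c) = maybe False (elem c) (lookup r g)
    -- gather the ascending (row, col) pairs back into row lists
    regroup = foldr ins []
      where
        ins (r, c) ((r', cs) : rest) | r == r' = (r', c : cs) : rest
        ins (r, c) acc = (r, [c]) : acc
```

Space and time per generation depend only on the number of occupied cells, never on the size of the space.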

On Tuesday 02 February 2010 23:54:58 Richard O'Keefe wrote:
I understood the "simple but not scalable and not particularly fast" claim *NOT* as a claim about imperative languages but in context as a claim about using two-dimensional arrays of booleans to implement Conway's Life.
Fair enough. If you want to model either an infinite board or a very sparse one then a sparse data structure will be asymptotically faster. But they won't be simpler and asymptotically efficient mutable data structures will be faster again.
One message in this thread has already pointed to a solution (in C) that in effect uses a bit-packed sparse representation to achieve high speed. [...] Computing the next generation then amounts to a bunch of 3-way merges.
That will be a lot faster than using Map or Set but you're still paying a lot for the indirections between cons cells. Mutating an array in-place would be significantly faster and no more difficult to code correctly because this is such a trivial algorithm.

On 3 Feb 2010, at 02:04, Jon Harrop
On Tuesday 02 February 2010 23:54:58 Richard O'Keefe wrote:
One message in this thread has already pointed to a solution (in C) that in effect uses a bit-packed sparse representation to achieve high speed. [...] A simple Haskell analogue, which has the right asymptotic cost, would be to represent a Life generation by a list of (row_number, [col_no, ..., col_no]) pairs in strictly ascending order of row number. [...] Computing the next generation then amounts to a bunch of 3-way merges.
That will be a lot faster than using Map or Set but you're still paying a lot for the indirections between cons cells. Mutating an array in-place would be significantly faster and no more difficult to code correctly because this is such a trivial algorithm.
Richard's description is pretty much exactly what I had in mind for a
Haskell version of my C code. The C translates to Haskell so well
because its data structure is immutable: the new generation is written
to fresh memory then the old generation becomes garbage. The old
generation is scanned in the order it is written (albeit by three
pointers with a row between each one) which is inconvenient for linked
lists. However the algorithm is symmetrical so it doesn't mind
processing the universe in alternating directions, so long as the
rendering code can cope :-) The indirections are less bad than they
might be because the code will always be following a pointer to an
adjacent memory location, because of the algorithm's simple allocation
behaviour. But it'll be about twice as slow as the C equivalent
because it uses twice the memory.
Efficient mutate-in-place Life algorithms become disgustingly
complicated before they can beat listlife.
Tony.
--
f.anthony.n.finch

On Tuesday 02 February 2010 18:44:25 David Leimbach wrote:
On Tue, Feb 2, 2010 at 11:54 AM, Jon Harrop wrote:
I meant the scalability and speed. An imperative solution should be simpler, more scalable and faster than any purely functional solution.
That's a pretty strange comment. Why do you think an imperative solution is simpler, faster and more scalable?
Mutation can avoid lots of unnecessary allocations and indirections with minimal risk of error in this case.
If functional programming can't provide any one of those, it's not worth anything,
I doubt programming paradigms live or die according to whether or not they can implement Conway's Game of Life simply and efficiently.
and based on the membership in this list, the interest in it these days, and the fact that I've seen many occasions where functional programming lends itself to a faster implementation (in terms of time to implement and test) that's actually readable sooner than a lot of imperative approaches...
Of Conway's Game of Life?

On Tue, Feb 2, 2010 at 7:33 PM, Gregory Crosswhite <gcross@phys.washington.edu> wrote:
On Feb 2, 2010, at 7:11 PM, Jon Harrop wrote:
I doubt programming paradigms live or die according to whether or not they can implement Conway's Game of Life simply and efficiently.
This makes an awesome quote. :-)
- Greg
This whole thread has been rather odd. The sort of processing that goes on in Conway's Game of Life is pretty common. I've seen it implemented a good many different ways, including with GCD from Apple, where each cell could potentially be its own thread (a good scalability test for Apple, it seems... neat code, doesn't even use Cocoa).

I've got some very simple code for Conway's Game of Life, but it is not my own. It's from Dr. Graham Hutton's excellent introductory Haskell book. It doesn't even use libraries outside the Prelude. It's short and easy to read, and renders by plotting characters to a terminal using escape sequences (so I guess at least a VT100 is required to run it).

I'm left somewhat confused by people believing they know enough about functional programming to make claims like these, or that the processing used in Conway's Game of Life might not be important. As someone who's worked in HPC for a third of his life and his entire career, I find this a pretty closed-minded approach to computer science. The worst form of ignorance is when you think you've got the answer to everything already... It's sad to see that going on here.

Dave

Gregory Crosswhite
On Feb 2, 2010, at 7:11 PM, Jon Harrop wrote:
I doubt programming paradigms live or die according to whether or not they can implement Conway's Game of Life simply and efficiently.
This makes an awesome quote. :-)
You mean Jon actually said something clever and interesting for once? :o -- Ivan Lazar Miljenovic Ivan.Miljenovic@gmail.com IvanMiljenovic.wordpress.com

2010/2/2 Jon Harrop
Or, you're asking about scalability and speed?
I meant the scalability and speed. An imperative solution should be simpler, more scalable and faster than any purely functional solution.
So please provide one, and we'll discuss the differences between ours. Mine is here: http://thesz.livejournal.com/1028039.html (sorry for the Russian text; the solution is pure Haskell anyway ;)

On 2 Feb 2010, at 19:26, Jon Harrop wrote:
On Tuesday 02 February 2010 16:10:05 Serguey Zefirov wrote:
Actually, your solution with arrays is the one an imperative programmer will most often come up with. It is simple, but not scalable and not particularly fast.
What gave you that impression?
If I may butt in here: to get a scalable, fast Game of Life, you should look into Hashlife (by Gosper?) as exemplified in the open source application Golly. It gives an astonishing speedup, and it would be interesting to see it expressed in Haskell. Regards, Arne D Halvorsen

On Tue, 2 Feb 2010, Arne D Halvorsen wrote:
If I may butt in here: to get a scalable, fast Game of Life, you should look into Hashlife (by Gosper?) as exemplified in the open source application Golly. It gives an astonishing speedup, and it would be interesting to see it expressed in Haskell.
I played around with implementing hashlife a few years ago. It is a truly
beautiful algorithm, and mind-expanding if all you know is representing
Life as an array of booleans.
The problem with implementing it in Haskell is that it relies on persistent
object identities (of the branches of a quadtree) that it hashes in order
to memoize quadtree creation. This makes what ought to be a beautifully
quasi-functional algorithm look ugly and imperative.
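The hash-consing I mean looks roughly like this (a sketch only; real Hashlife also memoizes the step function on node ids, and the explicit pool-threading is exactly the ugliness I'm describing):

```haskell
import qualified Data.Map.Strict as Map

-- Every structurally distinct subtree gets exactly one Id, so equal
-- subtrees are shared and can key memo tables for the step function.
type Id = Int
data Quad = Leaf !Bool | Node !Id !Id !Id !Id
  deriving (Eq, Ord, Show)

data Pool = Pool { table :: Map.Map Quad Id, nextId :: Id }

-- Look a node up in the pool; allocate a fresh Id only if it's new.
intern :: Quad -> Pool -> (Id, Pool)
intern q p@(Pool t n) = case Map.lookup q t of
  Just i  -> (i, p)
  Nothing -> (n, Pool (Map.insert q n t) (n + 1))
```

In an imperative language the Ids come for free as pointer identities; here the pool has to be threaded through everything (or hidden in a state monad), which is why the result looks imperative.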
Tony.
--
f.anthony.n.finch

http://dotat.at/prog/life/hslife.hs

[], Edgar

On Tue, 02/Feb/2010 at 21:10 +0100, Arne D Halvorsen wrote:
If I may butt in here: to get a scalable, fast Game of Life, you should look into Hashlife (by Gosper?) as exemplified in the open source application Golly. It gives an astonishing speedup, and it would be interesting to see it expressed in Haskell.
Regards, Arne D Halvorsen

On Tue, 2 Feb 2010, Serguey Zefirov wrote:
Creation of a new Array is (without knowing some subtle details) O(max coordinate difference between live cells). Creation of a new Set (X,Y) is O(N log N) (N = number of live cells). Most of the cells in Life are empty, so the Set/Map approach is faster. It also leads to very concise code.
I have a small and relatively quick Life algorithm that's based on lists,
so it should translate to Haskell quite nicely.
The basic representation of the Life universe is a sorted list of the
co-ordinates of live cells. You can compute a new generation in one scan
of the list, or rather a kind of zip3 that scans a 3x3 box along three
adjacent rows, skipping empty space. It's much faster if, instead of
storing one live cell in each list element, you store a small word-sized
bitmap. You can then use SIMD-within-a-register to combine the three rows
of cells and emit a new bitmap if it is non-zero.
Have a look at http://dotat.at/prog/life/life.html for the story of how I
developed the algorithm, including C source (about 40 lines of code).
I've written a couple of other articles about SWAR techniques for Life:
http://fanf.livejournal.com/81169.html
http://fanf.livejournal.com/93032.html
Getting the bits onto the screen quickly is the hardest part :-)
Tony.
--
f.anthony.n.finch
participants (11)
- Arne D Halvorsen
- David Leimbach
- Don Stewart
- edgar@ymonad.com
- Gregory Crosswhite
- Ivan Lazar Miljenovic
- Jon Harrop
- Lyndon Maydwell
- Richard O'Keefe
- Serguey Zefirov
- Tony Finch