
I've been playing around with the "Hashes, Part II" code from the shootout. I wanted to try to implement this test using Data.HashTable instead of Data.FiniteMap to see if that would buy us anything. In fact, the HashTable implementation is consistently faster than FiniteMap, but not by a lot (thus making the transition to the IO monad not worthwhile, IMO).

The interesting thing, however, is that at a certain number of iterations (10^6 in my case), the HashTable code segfaults. GDB shows that it is blowing the top off the program stack and dying when it tries to write to kernel space (it can't write to 0xbfffffff).

My question is: why does this happen? Is it well known that sequence_-ing longish lists has this effect? It seems to me that there is no reason to consume stack for sequences of IO actions like this, especially using sequence_ or >>, where we ONLY care about the side effects of the operations. I don't have a strong grasp of how the RTS works; perhaps someone could explain in small words?

The code in question is attached. As you can see, I've tried several approaches to reduce the program stack usage, but they seem to generate very similar code.
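To make the pattern concrete, here is a much-simplified sketch of the kind of loop I mean. This is not the attached benchmark itself; the module name, iteration count, Int keys/values, and the loop_ helper are all illustrative. It is written against the Data.HashTable API (new/insert/lookup with an equality test and hash function):

    module Main where

    import qualified Data.HashTable as H

    main :: IO ()
    main = do
      -- A hash table keyed and valued on Int, using the library's
      -- integer hash function.
      ht <- H.new (==) H.hashInt
      -- Build one insert action per iteration and run the list purely
      -- for its side effects; this is the sequence_ pattern in question.
      sequence_ [ H.insert ht i i | i <- [1 .. 1000000 :: Int] ]
      H.lookup ht 1000000 >>= print

    -- One obvious alternative: an explicit loop that performs each
    -- effect as it goes, so no list of actions is ever materialized.
    loop_ :: Int -> Int -> (Int -> IO ()) -> IO ()
    loop_ i n act
      | i > n     = return ()
      | otherwise = act i >> loop_ (i + 1) n act

As noted above, these variations seem to generate very similar code.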
--
Robert
robdockins@fastmail.fm