
Hi all,
So my theory now is: I do a large number of lookups.
Try using Data.Array.Base.unsafeRead (and maybe Data.Array.Base.unsafeWrite). These avoid the bounds check on the index each time you look something up in the array.
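For comparison, here is a minimal sketch of the two access styles; the names sumChecked/sumUnsafe and the summing example are mine, not from the thread:

```haskell
import Control.Monad (foldM)
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray, readArray)
import Data.Array.Base (unsafeRead)

-- Helper: build a 0-indexed STArray from a list.
buildArr :: [Int] -> ST s (STArray s Int Int)
buildArr xs = newListArray (0, length xs - 1) xs

-- Sum using readArray, which checks the index against the
-- array bounds on every access.
sumChecked :: [Int] -> Int
sumChecked xs = runST $ do
  arr <- buildArr xs
  foldM (\acc i -> (acc +) <$> readArray arr i) 0 [0 .. length xs - 1]

-- Same sum using unsafeRead, which takes a 0-based offset and
-- skips the bounds check entirely; the caller must guarantee
-- every index is in range, or behaviour is undefined.
sumUnsafe :: [Int] -> Int
sumUnsafe xs = runST $ do
  arr <- buildArr xs
  foldM (\acc i -> (acc +) <$> unsafeRead arr i) 0 [0 .. length xs - 1]
```

Note that unsafeRead indexes by raw 0-based offset rather than by the array's Ix index, which happens to coincide here because the bounds start at 0.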
Right. Also keep an eye on the GC time (+RTS -sstderr) if you're using boxed mutable arrays - we don't do card-marking, so GC is pretty sub-optimal when it comes to large mutable boxed arrays. Decreasing the number of GCs with +RTS -A<big> can help if you're suffering from this - but always try with and without, sometimes it makes things worse by making less effective use of the L2 cache.
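For reference, the flags mentioned above would be used something like this (Main.hs and the -A64m size are placeholders; pick a size appropriate to your machine and compare the -sstderr output with and without it):

```shell
# Compile with optimisation; -rtsopts lets the binary accept RTS flags.
ghc -O2 -rtsopts Main.hs -o main

# Print GC and runtime statistics to stderr:
./main +RTS -sstderr

# Try a larger allocation area (fewer minor GCs) and compare:
./main +RTS -sstderr -A64m
```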
Thanks for your advice. I changed all the reads to unsafeReads and all the writes to unsafeWrites. That indeed sped things up a bit (about 10 percent on the total running time). I also checked the GC time, but that is (only) 30% of the total running time. So the program with the ST array is still about 3 times slower than the program with Data.Map.

Also, the function lookupEq, which looks up the states of the equations from the array, is responsible for 20% of the running time and 19% of the allocated memory. I would say that it should allocate almost no memory:

    lookupEq :: Int -> MyMonad (Maybe EquationState)
    lookupEq cid = do
      s <- get
      mEs <- lift $ unsafeRead s cid
      return mEs

    type MyMonad a = forall s2. StateT (Eqns s2) (ST s2) a
    type Eqns s2 = STArray s2 Int (Maybe EquationState)

    data EquationState = EQS {cDefinition::Equation, usedBy::Map.Map Int (), ...}

How can it be that lookupEq allocates memory? Is it not the case that, because of some tricky strictness/laziness behaviour of the ST array, unsafeRead causes the element that is read from the array to be copied in memory?

Regards,
Robert
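On the closing question: unsafeRead on a boxed STArray returns a pointer to the element already stored in the array, not a copy, so the read itself should not duplicate the EquationState. A more likely source of the allocation is the StateT layer, whose bind allocates a (result, state) pair and closures on every step. A cut-down sketch of passing the array explicitly instead (EquationState is reduced to a single String field here, and lookupEq' and demo are my names, not Robert's code):

```haskell
import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newArray)
import Data.Array.Base (unsafeRead, unsafeWrite)

-- Hypothetical stand-in for Robert's EquationState.
data EquationState = EQS { cDefinition :: String } deriving Show

type Eqns s = STArray s Int (Maybe EquationState)

-- Allocate an array of n slots, all initially Nothing.
newEqns :: Int -> ST s (Eqns s)
newEqns n = newArray (0, n - 1) Nothing

-- Taking the array as an argument rather than threading it through
-- StateT: unsafeRead just returns the stored pointer, so this version
-- should allocate essentially nothing per lookup.
lookupEq' :: Eqns s -> Int -> ST s (Maybe EquationState)
lookupEq' = unsafeRead

-- Small demonstration: write one equation state, then read it back.
demo :: Maybe EquationState
demo = runST $ do
  arr <- newEqns 10
  unsafeWrite arr 3 (Just (EQS "x = y + 1"))
  lookupEq' arr 3
```

Whether this restructuring actually removes the 19% would need profiling to confirm; it only illustrates that the read itself need not allocate.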