
barsoap:
"Brettschneider, Matthias"
wrote: Thx for your hints, I played around with them and the performance gets slightly better. But the major boost is still missing :)
I noticed that one real bottleneck seems to be the conversion of the array back into a list. The interesting part is: if I use the elems function (Data.Array.Base), the performance is about 4x better than with my own function. So I thought I'd write my own version of elems (one that just converts a part of the array to a list), and I fell back to the same performance as my first approach.
To make a long story short, here is the library code:

    elems arr = case bounds arr of
        (_l, _u) -> [unsafeAt arr i | i <- [0 .. numElements arr - 1]]
And my version:

    boundedElems arr = case bounds arr of
        (_l, _u) -> [unsafeAt arr i | i <- [1737 .. 1752]]
Is there a reason why the library version is four times faster than mine?
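For anyone who wants to reproduce the comparison, a minimal benchmark using the criterion library along the following lines should do; the array size and contents are made up here, since the original setup isn't shown:

    import Criterion.Main
    import Data.Array
    import Data.Array.Base (numElements, unsafeAt)

    -- The two list-producing functions from the post.
    libElems :: Array Int Int -> [Int]
    libElems arr = [unsafeAt arr i | i <- [0 .. numElements arr - 1]]

    boundedElems :: Array Int Int -> [Int]
    boundedElems arr = [unsafeAt arr i | i <- [1737 .. 1752]]

    main :: IO ()
    main = do
      -- Stand-in array; the original contents aren't shown.
      let arr = listArray (0, 9999) [0 ..] :: Array Int Int
      defaultMain
        [ bench "elems"        (nf libElems arr)
        , bench "boundedElems" (nf boundedElems arr)
        ]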
Uhhhmmm... I've got no idea. But since the list is constructed lazily anyway, you shouldn't expect a real speedup from being lazy manually. You might want to try the stream-fusion implementation of lists, or get rid of lists altogether.
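As a sketch of that suggestion, assuming the stream-fusion package from Hackage (its Data.List.Stream module mirrors the Data.List API), a fusible pipeline looks just like ordinary list code; the function and data here are made up:

    import qualified Data.List.Stream as S

    -- Written entirely with Data.List.Stream functions, so the
    -- intermediate list produced by the map can fuse away.
    sumOfSquares :: [Int] -> Int
    sumOfSquares xs = S.sum (S.map (\x -> x * x) xs)

    main :: IO ()
    main = print (sumOfSquares [1 .. 1000])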
Yes, avoid constructing the list; you wouldn't be doing that in C.
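A sketch of that list-free route: a strict accumulator loop over the index range reads the elements directly, so no intermediate list is ever allocated. The function name and the choice of a sum are just for illustration:

    {-# LANGUAGE BangPatterns #-}
    import Data.Array
    import Data.Array.Base (unsafeAt)

    -- Consume a slice of the array with a strict loop over the
    -- indices instead of converting it to a list first. Note that
    -- unsafeAt takes zero-based offsets, which coincide with the
    -- indices here because the array's lower bound is 0.
    sumRange :: Array Int Int -> Int -> Int -> Int
    sumRange arr lo hi = go lo 0
      where
        go !i !acc
          | i > hi    = acc
          | otherwise = go (i + 1) (acc + unsafeAt arr i)

    main :: IO ()
    main = do
      let arr = listArray (0, 9999) [0 ..] :: Array Int Int
      print (sumRange arr 1737 1752)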