
Hi -- I'm loading a big file (~2G), again, made up of 72-byte C structs. I've done pretty well [thanks to everyone] interpreting the fields, but I'm finding the traversal of the file to be pretty inefficient. I break the file into records like this:

    breakUp s
      | L.null s  = []
      | otherwise = h : breakUp r
      where (h, r) = L.splitAt 72 s

Running this on the 2G file blows up the stack pretty quickly. Taking only the first 1 million records (there are 20M of them) with a big stack parameter gives about 25% productivity, with GC taking the other 75%.

My understanding is that while this looks tail recursive, it isn't really, because of laziness. I've tried throwing seq operators around, but they don't seem to do much to help the efficiency.

Any help's appreciated.

Ranjan
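P.S. In case a complete example helps, here's roughly how I'm driving breakUp, stripped down. The file name and the counting consumer are just stand-ins; the real code decodes the fields of each 72-byte record and folds over them.

    import qualified Data.ByteString.Lazy as L

    -- breakUp as defined above
    breakUp :: L.ByteString -> [L.ByteString]
    breakUp s
      | L.null s  = []
      | otherwise = h : breakUp r
      where (h, r) = L.splitAt 72 s

    main :: IO ()
    main = do
      s <- L.readFile "records.bin"          -- stand-in name; the real file is ~2G
      let recs = take 1000000 (breakUp s)    -- first 1M of the ~20M records
      print (length recs)                    -- stand-in for the real per-record decoding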