
Okay. What's really bothering me is that I can't find any good indication of how to make IO faster. Do I need to FFI the whole thing and have a C library hand me large chunks? Or can I get by with hGetArray/hPutArray? If so, what buffer sizes should I use? Should I use memory-mapped files?
hGetArray/hPutArray should be pretty quick. For small requests they get the usual buffering treatment that other Handle operations have, and for large requests the buffer is bypassed. For reading a file in one go, hGetArray (or hGetBuf) is the way to go. GHC itself uses hGetBuf for reading source files, hGetArray for reading interface files, and hPutBuf for writing output.

Memory-mapped files (mmap) should be quicker still, but then you'll have to use peek and friends from Foreign to access the bytes.

Cheers,
Simon
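To make the hGetBuf route concrete, here is a minimal sketch (against a reasonably modern base library) of reading a whole file in one large request; the helper name withFileContents is made up for illustration:

    import System.IO
    import Foreign.Marshal.Alloc (allocaBytes)
    import Foreign.Ptr (Ptr)
    import Data.Word (Word8)

    -- Read an entire file into a temporary buffer with one large
    -- hGetBuf request, which bypasses the Handle's internal buffer,
    -- then hand the buffer to an action.
    withFileContents :: FilePath -> (Ptr Word8 -> Int -> IO a) -> IO a
    withFileContents path act =
      withBinaryFile path ReadMode $ \h -> do
        size <- hFileSize h
        let n = fromIntegral size
        allocaBytes n $ \buf -> do
          got <- hGetBuf h buf n   -- got <= n if the read comes up short
          act buf got

And a sketch of the mmap route, assuming the third-party 'mmap' package (which postdates this thread), with peekByteOff from Foreign.Storable doing the raw byte access:

    import Data.Word (Word8)
    import Foreign.Storable (peekByteOff)
    import System.IO.MMap (mmapWithFilePtr, Mode(ReadOnly))  -- "mmap" package

    -- Map the file into memory and peek at byte i directly,
    -- without any Handle or copying in between.
    byteAt :: FilePath -> Int -> IO Word8
    byteAt path i =
      mmapWithFilePtr path ReadOnly Nothing $ \(ptr, len) ->
        if i < len
          then peekByteOff ptr i
          else ioError (userError "index past end of file")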

Hi,

Am I doing something wrong, or did profiling performance drop by an order of magnitude with GHC 6.2? When I compile my program with '-prof -auto-all', it takes about ten times as long to run as it does without. I use -O2 in both cases, and run without any run-time profiling options switched on. (The reported time, OTOH, seems about right.)

I thought that previously, compiling for profiling had only marginal effects, and only running with heap profiling on would really slow things down. If this is a recent 'feature' and not just my brain getting its wires crossed again, is there any way to alleviate it (downgrade to x.y, build CVS HEAD, whatever)?

(GHC 6.2 on Linux, tested with RPM package and Gentoo binary distribution)

-kzm
--
If I haven't seen further, it is by standing in the footprints of giants
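For reference, a minimal sketch of the comparison being described (file and program names are made up; the flags come from the message):

    $ ghc -O2 --make Main.hs -o main
    $ ghc -O2 -prof -auto-all --make Main.hs -o main.prof
    $ time ./main
    $ time ./main.prof    # no +RTS -p etc., yet roughly 10x slower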
participants (2)
- Ketil Malde
- Simon Marlow