
Hello Krzysztof,

Sunday, July 20, 2008, 12:49:54 AM, you wrote (message quoted below):

On 32-bit computers a 36x memory requirement for storing large strings in memory is the rule; on 64-bit ones it is 72x.
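A quick way to see the overhead for yourself (an untested sketch; the file name is an assumption): a String is a lazy linked list of Char, so every character drags along several machine words of list structure, and GHC's copying collector needs headroom on top of that, while a strict ByteString stores about one byte per character.

    import qualified Data.ByteString.Char8 as B

    -- Compile with "ghc -O2" (newer GHCs also need -rtsopts) and run
    -- with "+RTS -s" to get the same "total memory in use" line as in
    -- the quoted message below.
    main :: IO ()
    main = do
      s <- readFile "input.txt"     -- String: lazy list of Char
      print (length s)              -- forces the whole list...
      putStrLn (take 10 s)          -- ...and this later use keeps it
                                    -- live, so the peak reflects the
                                    -- full String representation
      b <- B.readFile "input.txt"   -- strict ByteString for comparison
      print (B.length b)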
I forgot to mention that the memory consumption is several times higher than the file size. On an 8.3 MB file:
532 MB total memory in use (4 MB lost due to fragmentation).
Having those 8.3 MB in memory is not the problem; 532 MB is another story. In general the program consumes roughly 64 times the file size (532 / 8.3 ≈ 64), and this scales linearly with input size.
Best regards,
Christopher Skrzetnicki
On Sat, Jul 19, 2008 at 9:52 PM, Chaddai Fouche wrote:

2008/7/19 Krzysztof Skrzetnicki <gtener@gmail.com>:

Hi all,
1) Profiling shows that some very simple functions are responsible for a large share of the time and memory consumption. However, if I turn them off and simply print their input arguments instead, the overall time and memory consumption doesn't change; instead, another function starts acting badly. My guess: somehow the cost of the Parsec code is shifted into whatever function is using its output. Let's see:
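A toy illustration of the effect (made-up names and input file, not the original code): the parse result is built lazily, so the real work is deferred until something forces it, which is why swapping the consumer for a bare print leaves the totals unchanged and can make a simple consumer look expensive in the profile.

    -- 'parseEverything' stands in for the real Parsec pass: it does no
    -- work yet, it only builds suspended computations (thunks).
    parseEverything :: String -> [Int]
    parseEverything = map length . lines

    main :: IO ()
    main = do
      txt <- readFile "input.txt"        -- assumed input file
      let parsed = parseEverything txt   -- cheap: nothing evaluated yet
      print parsed                       -- the parsing work is actually
                                         -- paid here, at the forcing
                                         -- point, whoever that is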
Are you using Parsec to parse the whole file? Then that is your problem: Parsec needs to read and process the whole file before it can give you any output, since it might have to give you an error instead, and it cannot know that before it has read the whole thing... In your case you would prefer to treat the file as a stream, wouldn't you? There are parser libraries that can produce output lazily (look at the polyparse flavour); another option would be to use Parsec only where you need it, and just read and print the ordinary text, for example.
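For instance, a minimal sketch of the per-record approach (the grammar, Record type, and file name are made up for illustration): each line is parsed independently, so results stream out while the file is still being read and Parsec never buffers the whole input.

    import Text.ParserCombinators.Parsec

    -- Hypothetical per-line grammar: "key=123".
    data Record = Record String Int deriving Show

    lineParser :: Parser Record
    lineParser = do
      key <- many1 letter
      _   <- char '='
      val <- many1 digit
      return (Record key (read val))

    main :: IO ()
    main = do
      contents <- readFile "input.txt"
      -- one parse per line; lazy I/O lets output appear incrementally
      mapM_ (print . parse lineParser "<line>") (lines contents)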
--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com