
On Mon, 2005-05-30 at 19:18 +0100, Duncan Coutts wrote: [..]
Going back to the lexer: it now produces exactly the same output as the original lexer (including positions and unique names). Sadly it seems to have got quite a bit slower, for reasons I don't quite understand. In particular, making it monadic (which we need to do because of …) seems to make it rather slower. It now takes 6 seconds rather than 2, so it is only a little faster than the original lexer. On the positive side, if the lexer accounts for 6 seconds of the 8-second total, then the parser is only taking 2 seconds, which is quite good.
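For readers wondering what "making it monadic" looks like in practice, here is a minimal, hypothetical sketch (not the actual c2hs/Gtk2Hs lexer) of a lexer threaded through a hand-rolled state monad to track source positions. The token type, state shape, and all names are illustrative assumptions; the point is that every monadic bind allocates a closure and re-boxes the state, which is one plausible source of the slowdown described above compared with a pure, directly recursive lexer.

```haskell
import Data.Char (isAlpha)

-- Illustrative token type; the real lexer's tokens differ.
data Token = Ident String | Sym Char deriving (Eq, Show)

-- Lexer state: remaining input plus current (line, column).
data LexSt = LexSt String (Int, Int)

-- A minimal hand-rolled state monad, standing in for whatever monad
-- the real lexer uses. Each (>>=) allocates a closure and a state
-- pair, overhead a pure state-passing lexer would not incur.
newtype Lexer a = Lexer { runLexer :: LexSt -> (a, LexSt) }

instance Functor Lexer where
  fmap f (Lexer g) = Lexer (\s -> let (a, s') = g s in (f a, s'))

instance Applicative Lexer where
  pure a = Lexer (\s -> (a, s))
  Lexer f <*> Lexer g =
    Lexer (\s -> let (h, s')  = f s
                     (a, s'') = g s'
                 in (h a, s''))

instance Monad Lexer where
  Lexer g >>= k = Lexer (\s -> let (a, s') = g s in runLexer (k a) s')

getSt :: Lexer LexSt
getSt = Lexer (\s -> (s, s))

putSt :: LexSt -> Lexer ()
putSt s = Lexer (\_ -> ((), s))

-- Advance the (line, column) position over one character.
advance :: Char -> (Int, Int) -> (Int, Int)
advance '\n' (l, _) = (l + 1, 1)
advance _    (l, c) = (l, c + 1)

nextToken :: Lexer (Maybe Token)
nextToken = do
  LexSt inp p <- getSt
  case inp of
    [] -> pure Nothing
    (c:rest)
      | c == ' ' || c == '\n' ->
          putSt (LexSt rest (advance c p)) >> nextToken
      | isAlpha c ->
          let (w, rest') = span isAlpha inp
          in putSt (LexSt rest' (foldl (flip advance) p w))
             >> pure (Just (Ident w))
      | otherwise ->
          putSt (LexSt rest (advance c p)) >> pure (Just (Sym c))

-- Drive the monadic lexer over a whole input string.
tokens :: String -> [Token]
tokens s = go (LexSt s (1, 1))
  where
    go st = case runLexer nextToken st of
              (Nothing, _)  -> []
              (Just t, st') -> t : go st'

main :: IO ()
main = print (tokens "foo = bar")
-- prints [Ident "foo",Sym '=',Ident "bar"]
```

A pure lexer would pass the `LexSt` explicitly as an argument and return it in a tuple, letting GHC unbox and inline it more readily; wrapping the same logic in a monad hides that plumbing behind `(>>=)`, which is convenient but, without aggressive inlining, costs an allocation per step.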
OK, I'm impressed too. But was the parser the culprit? It did use a lot of space, but most of the time in our current setup is spent in serialisation. So if I understand your intention correctly, you are mainly trying to improve the memory footprint, not the compilation time? Nice effort, Axel.