Lazy parser with Parsec.

I'm trying to figure out how to write a simple parser in Parsec to tokenize a subset of RTF. The problem is that I haven't been able to come up with a way of writing the parser that doesn't consume all of the input just to return the first token. The 'many' combinator's implementation uses an accumulator, so it necessarily parses to the end of the input before returning anything, and trying to iterate myself causes stack overflows on large inputs. Does anyone know of any existing Parsec parsers that don't consume their entire input, or am I probably best off making my own parser? Thanks, David
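To illustrate the problem being described, here is a hedged sketch of why an accumulator-style 'many' cannot return lazily. This is not Parsec's actual implementation; the Parser type and the digitTok parser below are minimal stand-ins invented for illustration. The point is that the result list is only produced after the final failed attempt, i.e., once all input has been consumed:

import Data.Char (isDigit)

-- Toy parser type, for illustration only: consume a prefix of the
-- input and return a result plus the leftover input, or fail.
type Parser a = String -> Maybe (a, String)

-- Parse a single decimal digit.
digitTok :: Parser Char
digitTok (c:rest) | isDigit c = Just (c, rest)
digitTok _                    = Nothing

-- Accumulator-style many: the list is only returned after the element
-- parser fails, so no element is available until the loop finishes.
manyAcc :: Parser a -> Parser [a]
manyAcc p = go []
  where
    go acc s = case p s of
      Nothing        -> Just (reverse acc, s)  -- only now is the list produced
      Just (x, rest) -> go (x : acc) rest

main :: IO ()
main = print (manyAcc digitTok "123ab")  -- prints Just ("123","ab")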

David Brown
Does anyone know of any existing Parsec parsers that don't consume their entire input, or am I probably best off making my own parser?
http://www.cs.york.ac.uk/fp/polyparse — in particular, the module Text.ParserCombinators.PolyLazy. Regards, Malcolm

David Brown
Does anyone know of any existing Parsec parsers that don't consume their entire input, or am I probably best off making my own parser?
Thomas Zielonka published his Parsec combinator lazyMany on this list a couple of times; Google for it. Here is my application of his idea:

    lazyMany :: Parser a -> SourceName -> String -> [a]
    lazyMany p filename contents = lm state0
      where
        -- Get an initial parser state for the given input.
        Right state0 = parse getParserState filename contents
        lm state = either (error . show) id (parse p' "" "")
          where
            p' = setParserState state >>
                 choice [ eof >> return []
                        , do x <- p
                             state' <- getParserState
                             return (x : lm state')
                        ]

-- Feri.
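The essence of the lazyMany trick above is to run the parser once per element and cons the results lazily, so the head of the list is available before the rest of the input is touched. The following is a hedged, self-contained demonstration of that restart-per-token idea using a hand-rolled parser rather than Parsec itself (the names token and lazyTokens are illustrative, not from the original post); note that take forces only the first few parses even on an infinite input:

import Data.Char (isDigit)

-- Toy parser type, for illustration only.
type Parser a = String -> Maybe (a, String)

-- One token: a decimal number followed by optional spaces.
token :: Parser Int
token s = case span isDigit s of
  ("", _)    -> Nothing
  (ds, rest) -> Just (read ds, dropWhile (== ' ') rest)

-- The lazy loop: one run of the parser per element; the tail of the
-- result list is a thunk holding the leftover input, so consumers can
-- stop early instead of forcing the whole input.
lazyTokens :: Parser a -> String -> [a]
lazyTokens p s = case p s of
  Nothing        -> []
  Just (x, rest) -> x : lazyTokens p rest

main :: IO ()
main = do
  -- An infinite token stream: "1 2 3 4 ...".  take forces only the
  -- first five parses, which is the whole point of the technique.
  let input = concatMap (\n -> show n ++ " ") [1 :: Int ..]
  print (take 5 (lazyTokens token input))  -- prints [1,2,3,4,5]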

Wagner Ferenc wrote:
David Brown
writes: Does anyone know of any existing Parsec parsers that don't consume their entire input, or am I probably best off making my own parser?
Thomas Zielonka published his Parsec combinator lazyMany on this list a couple of times, Google for it. Here is my application of his idea:
lazyMany :: Parser a -> SourceName -> String -> [a]
Excellent, exactly what I was looking for. Thanks, David Brown
participants (3)
- David Brown
- Malcolm Wallace
- Wagner Ferenc