Need programming advice for Network Protocol Parsing

Hi all, I'd like to write a client app that communicates with a server over TCP/IP. My question is in regard to which parser to use for the server's responses. I'm quite familiar with parsec (2.x), but I'm not sure it's the right choice for this. The code would necessarily be switching constantly between checking for input, interpreting, and then responding. Any suggestions? Günther

2010/10/27 Günther Schmidt
My question is in regard to which parser to use for the server's responses. I'm quite familiar with parsec (2.x), but I'm not sure it's the right choice for this. The code would necessarily be switching constantly between checking for input, interpreting, and then responding.
Attoparsec is kind of made for this: http://hackage.haskell.org/package/attoparsec
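Attoparsec's incremental interface fits the check-for-input/interpret/respond loop well: `parse` returns a `Partial` continuation when a network chunk ends mid-message, and you feed it the next chunk as it arrives. A minimal sketch, assuming the attoparsec package is available; the "OK <n>" response format is invented here for illustration:

```haskell
{-# LANGUAGE OverloadedStrings #-}
-- Incremental parsing of a (hypothetical) line-based protocol response.
import Data.Attoparsec.ByteString.Char8
import qualified Data.ByteString.Char8 as B

-- Invented wire format: "OK ", a decimal number, CRLF.
response :: Parser Int
response = string "OK " *> decimal <* string "\r\n"

main :: IO ()
main = do
  -- First TCP chunk arrives incomplete: attoparsec hands back a continuation.
  case parse response "OK 4" of
    Partial k ->
      -- The next chunk completes the message.
      case k "2\r\n" of
        Done rest n -> print (n, B.null rest)
        _           -> putStrLn "parse failed"
    _ -> putStrLn "unexpected"
```

In a real client the continuation would be fed from `recv` inside the network loop rather than from hard-coded chunks.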

I'm occasionally working on making a friendly yet performant library that
simultaneously builds parsers and generators, but it's non-trivial. If you
want to see the general idea, there's a Functional Pearl on pickler
combinators from a few years back that you can probably play with.
But for a real network protocol that you need to implement today, I'd go
with attoparsec or Data.Binary.
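For binary protocols, Data.Binary's `Get` monad covers the common length-prefixed framing directly. A small sketch, assuming the binary package; the wire format (32-bit big-endian length, then payload) is invented for illustration:

```haskell
-- Decode a hypothetical length-prefixed message with Data.Binary.Get;
-- Data.Binary.Put builds the test input for round-tripping.
import Data.Binary.Get
import Data.Binary.Put
import qualified Data.ByteString.Char8 as B

-- Invented framing: 32-bit big-endian length, then that many payload bytes.
getMessage :: Get B.ByteString
getMessage = do
  len <- getWord32be
  getByteString (fromIntegral len)

main :: IO ()
main = do
  -- Build a frame with Put, then decode it with Get.
  let wire = runPut (putWord32be 5 >> putByteString (B.pack "hello"))
  print (runGet getMessage wire)
```

In a client, `wire` would come off the socket (as a lazy ByteString) instead of being constructed locally.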
2010/10/27 Günther Schmidt
Hi all,
I'd like to write a client app that communicates with a server over TCP/IP.
My question is in regard to which parser to use for the server's responses. I'm quite familiar with parsec (2.x), but I'm not sure it's the right choice for this. The code would necessarily be switching constantly between checking for input, interpreting, and then responding.
Any suggestions?
Günther
_______________________________________________
Haskell-Cafe mailing list
Haskell-Cafe@haskell.org
http://www.haskell.org/mailman/listinfo/haskell-cafe

I'm occasionally working on making a friendly yet performant library that simultaneously builds parsers and generators, but it's non-trivial. [...]
I'm probably missing something in the "friendly yet performant" requirements, but I never quite understood the difficulty: a typical translation of grammar to parser combinators has very little code specific to "parsing" - it is mostly encoding the grammar in a coordination framework that calls on literal parsers at the bottom. Since the coordination framework uses some form of state transformers, exchanging the literal parsers for literal unparsers should turn the grammar parser into a grammar unparser (in essence, the non-terminal code is reusable, while the terminal code is specific to the direction of data flow).

Add switch-points (where the "mode" can switch from parsing to unparsing and back), and one has syntax-directed editors (here you need to be able to restart the process on arbitrary non-terminals), or expect-like protocol-driven computations (two or more agents with complementary views of which parts of the grammar involve parsing and which unparsing).

The non-trivial parts I remember are ensuring that the unparser is directed by the AST (unless you want to generate random sentences from the whole language), just as the parser is directed by the input String, and not biasing the combinator framework towards parsing (which happens all too easily). But then, it has been a long time since I wrote such parser/unparsers (the first before Monads and do-notation became must-have aspects of combinator parsers, when we were free just to "make it work" ;-).

It would be useful to have an overview of the issues that lead to the widespread view of this being non-trivial (other than in the narrow interpretation of "non-trivial" as "needs some code"). I've always wondered why there was so much focus on just parsing in combinator libraries.

Just curious,
Claus
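The shared-nonterminal idea Claus describes can be sketched in a few lines: one grammar value carries both a parse direction and a print direction, the terminals (`lit`) are direction-specific, and the sequencing combinator is written once and drives both. All names and types here are invented for illustration, not taken from any library:

```haskell
-- A syntax description usable in both directions.
data Syntax a = Syntax
  { runParse :: String -> Maybe (a, String)  -- consume input, return rest
  , runPrint :: a -> Maybe String            -- produce output from a value
  }

-- Terminal: a literal token, parsed or printed depending on direction.
lit :: String -> Syntax ()
lit s = Syntax
  { runParse = \inp -> if take (length s) inp == s
                         then Just ((), drop (length s) inp)
                         else Nothing
  , runPrint = \() -> Just s
  }

-- Non-terminal combinator: sequencing, written once for both directions.
(<+>) :: Syntax a -> Syntax b -> Syntax (a, b)
sa <+> sb = Syntax
  { runParse = \inp -> do (a, r1) <- runParse sa inp
                          (b, r2) <- runParse sb r1
                          pure ((a, b), r2)
  , runPrint = \(a, b) -> (++) <$> runPrint sa a <*> runPrint sb b
  }

main :: IO ()
main = do
  let greeting = lit "hello" <+> lit " world"
  print (runParse greeting "hello world!")  -- parse direction
  print (runPrint greeting ((), ()))        -- unparse direction
```

The biasing problem Claus mentions shows up as soon as one direction needs information the other discards (e.g. whitespace), which is where real invertible-syntax libraries earn their keep.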

On Oct 27, 2010, at 6:23 PM, Claus Reinke wrote:
I'm occasionally working on making a friendly yet performant library that simultaneously builds parsers and generators, but it's non-trivial. [...]
I'm probably missing something in the "friendly yet performant" requirements, but I never quite understood the difficulty:
A typical translation of grammar to parser combinators has very little code specific to "parsing" - it is mostly encoding the grammar in a coordination framework that calls on literal parsers at the bottom. Since the coordination framework uses some form of state transformers, exchanging the literal parsers with literal unparsers should turn the grammar parser into a grammar unparser (in essence, the non-terminal code is reusable, the terminal code is specific to the direction of data flow).
On this topic, folks might be interested in the awesome "Invertible Syntax Descriptions" paper presented by Tillmann Rendel and Klaus Ostermann at this year's Haskell Symposium: http://www.informatik.uni-marburg.de/~rendel/unparse/

Cheers,
Sterl
participants (5)
- Christopher Done
- Claus Reinke
- Daniel Peebles
- Günther Schmidt
- Sterling Clover