
I agree with you. This example puts a nail in the coffin of the
backtracking approach.
I will have to think of something else, and at this point a full
rewrite to parser combinators does not seem as appealing.
Thanks!
On Tue, Oct 9, 2018 at 4:45 PM Richard Eisenberg wrote:
I think one problem is that we don't even have bounded levels of backtracking, because (with view patterns) you can put expressions into patterns.
Consider
f = do K x (z -> ...
Do we have a constructor pattern with a view pattern inside it? Or do we have an expression with a required visible type application and a function type? (This last bit will be possible only with visible dependent quantification in terms, but I'm confident that Vlad will appreciate the example.) We'll need nested backtracking to sort this disaster out -- especially if we have another `do` in the ...
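To see the first reading in isolation, here is a minimal sketch that compiles today; the type T, the constructor K, and the choice of `not` as the view function are invented purely for illustration:

    {-# LANGUAGE ViewPatterns #-}

    -- Hypothetical type, invented only to give K something to match.
    data T = K Int Bool

    -- Reading 1: a constructor pattern with a view pattern inside it.
    -- `not -> z` applies the function `not` to the matched field and
    -- binds the result, the same shape as `z -> ...` above.
    f1 :: IO T -> IO Bool
    f1 action = do
      K _x (not -> z) <- action
      pure z

Reading 2, where `(z -> ...)` is an expression and the parenthesised part is a function type supplied as a required visible type argument, is not something GHC accepts today, yet the parser has to keep it in play while it is still reading `(z ->`.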
What I'm trying to say here is that tracking the backtracking level in types doesn't seem like it will fly (tempting though it may be).
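Concretely, the tempting idea might be sketched like this (every name and combinator here is invented for illustration):

    {-# LANGUAGE DataKinds #-}
    {-# LANGUAGE KindSignatures #-}
    {-# LANGUAGE TypeOperators #-}

    import GHC.TypeLits (Nat, type (+))

    -- A parser carrying a type-level bound on its backtracking depth.
    newtype P (depth :: Nat) a = P { runP :: String -> Maybe (a, String) }

    -- Committed sequencing preserves the bound...
    andThen :: P d a -> (a -> P d b) -> P d b
    andThen (P p) k = P (\s -> p s >>= \(a, s') -> runP (k a) s')

    -- ...but each speculative alternative raises it by one.
    try :: P d a -> P (d + 1) a
    try (P p) = P p

Since view patterns let expressions nest inside patterns and vice versa, the depth a parse needs grows with the nesting of the input, so no fixed index can bound it statically.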
Richard
On Oct 9, 2018, at 7:08 AM, Vladislav Zavialov wrote:
It's a nice way to look at the problem, and we're facing the same issues as with insufficiently powerful type systems. LALR is the Go of parsing in this case :)
I'd rather write Python and have a larger test suite than deal with lack of generics in Go, if you allow me to take the analogy that far.
In fact, we do have a fair share of boilerplate in our current grammar due to the lack of parametrisation. That's another issue that would be solved by parser combinators (or by a fancier parser generator, but I'm not aware of one).
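For instance (a generic sketch using the parsec library, not code from GHC's grammar): a parametrised combinator captures a repeated shape once, whereas a grammar without parametrised rules needs a fresh pair of productions for every element type.

    import Text.Parsec
    import Text.Parsec.String (Parser)

    -- One definition covers every parenthesised, comma-separated list.
    parenCommaList :: Parser a -> Parser [a]
    parenCommaList p = between (char '(') (char ')') (p `sepBy` char ',')

    identifier :: Parser String
    identifier = many1 letter

    number :: Parser String
    number = many1 digit

    -- Reused at two element types with no duplicated plumbing.
    idents, numbers :: Parser [String]
    idents  = parenCommaList identifier
    numbers = parenCommaList number

With parsec installed, `parse idents "" "(a,b,c)"` evaluates to Right ["a","b","c"].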
On Tue, Oct 9, 2018 at 1:52 PM Simon Peyton Jones wrote:
We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects...
I’d never thought of it that way before – interesting.
Simon
From: ghc-devs On Behalf Of Sven Panne
Sent: 09 October 2018 08:23
To: vlad.z.4096@gmail.com
Cc: GHC developers
Subject: Re: Parser.y rewrite with parser combinators

On Tue, 9 Oct 2018 at 00:25, Vladislav Zavialov wrote:
[...] That's true regardless of implementation technique; parsers are rather delicate.
I think it's not the parsers themselves that are delicate; it is the language they are supposed to recognize.
An LALR-based parser generator does provide more information when it detects shift/reduce and reduce/reduce conflicts, but I never found this information useful. It was always quite the opposite of helpful - an indication that an LALR parser could not handle my change, and that I had to look for workarounds. [...]
Not that this would help at this point, but: the conflicts reported by parser generators like Happy are *extremely* valuable. They hint at tricky/ambiguous points in the grammar, which in turn is a strong hint that the language you're trying to parse has dark corners. IMHO every language designer, and e.g. everybody proposing a syntactic extension to GHC, should try to fit the extension into a Happy grammar *before* proposing it. If you get conflicts, that is a very strong hint that the language is hard to parse for *humans*, too, which is the most important thing to consider. Haskell already has tons of syntactic warts which can only be parsed with infinite lookahead; that is only a minor technical problem, but a major usability problem. "Programs are meant to be read by humans and only incidentally for computers to execute." (D.E.K.) </rant> ;-)
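To make the point concrete, the classic dangling-else ambiguity, written as a minimal stand-alone Happy grammar (not taken from GHC's Parser.y), is reported by happy --info as one shift/reduce conflict, and the conflict corresponds exactly to a sentence a human reader finds ambiguous, namely which 'if' an 'else' belongs to:

    {
    module DanglingElse where
    }

    %name parseStmt
    %tokentype { Token }
    %error { \_ -> error "parse error" }

    %token
      'if'   { TIf }
      'then' { TThen }
      'else' { TElse }
      ATOM   { TAtom }

    %%

    Stmt :: { Stmt }
    Stmt : 'if' ATOM 'then' Stmt             { If $4 Nothing }
         | 'if' ATOM 'then' Stmt 'else' Stmt { If $4 (Just $6) }
         | ATOM                              { Leaf }

    {
    data Token = TIf | TThen | TElse | TAtom
    data Stmt  = If Stmt (Maybe Stmt) | Leaf
    }

Resolving the conflict in favour of shifting gives the usual "else binds to the nearest if" rule, which is exactly the kind of design decision the conflict report forces you to make explicitly.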
The situation is a bit strange: We all love strong guarantees offered by type checking, but somehow most people shy away from "syntactic type checking" offered by parser generators. Parser combinators are the Python of parsing: Easy to use initially, but a maintenance hell in the long run for larger projects...
Cheers,
S.