
Huh, good to know. I just looked at how it is done, and we are literally
peeking at the characters surrounding tokens and running a bit of Haskell
to guess the classification of the previous/next token. That seems a bit
brittle, but I guess this is not something that is likely to change. I am
also surprised it didn't slow down the lexer, but maybe the cost is lost
in the noise.
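To make that concrete, here is a minimal sketch (plain Haskell, not GHC's
actual lexer code) of the kind of classification involved: peek at the
character on each side of an operator and bucket the occurrence as prefix,
suffix, tight infix, or loose infix. The names and the "space-like" test
below are made up for illustration.

    data OccKind = Prefix | Suffix | TightInfix | LooseInfix
      deriving (Show, Eq)

    -- 'before' and 'after' are the characters immediately surrounding the
    -- operator; only their whitespace-ness matters here.
    classifyOcc :: Char -> Char -> OccKind
    classifyOcc before after =
      case (isSpaceLike before, isSpaceLike after) of
        (True,  False) -> Prefix      -- e.g. "f !x"
        (False, True ) -> Suffix      -- e.g. "x! y"
        (False, False) -> TightInfix  -- e.g. "x!y"
        (True,  True ) -> LooseInfix  -- e.g. "x ! y"
      where
        -- Assumption: only literal whitespace counts; the real rule also
        -- treats things like brackets and commas as boundaries.
        isSpaceLike c = c `elem` " \t\n"

So classifyOcc ' ' 'x' comes out as Prefix (the "!x" reading), while
classifyOcc ' ' ' ' comes out as LooseInfix (an ordinary infix use).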
Anyway, I don't have a strong opinion on the proposal, as this is an
extension I never use, but the proposed behavior seems more intuitive for
humans, so accepting it seems reasonable.
Iavor
On Wed, Jul 22, 2020, 06:49 Richard Eisenberg wrote:
On Jul 22, 2020, at 1:43 PM, Joachim Breitner wrote:
Just as a note, I never quite understood how we plan to implement #229: is it supposed to be done through state in the lexer, or do we have a simple lexer that keeps all tokens, including whitespace, and then a post-processing function that glues and rejiggers tokens?
No idea here.
#229 is in fact already implemented: https://gitlab.haskell.org/ghc/ghc/-/merge_requests/1664. I don't remember the details well enough to explain them, but I don't think it turned out to be hard to implement.
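For intuition only, the "post-processing over a token stream" option
Joachim mentions could look roughly like the sketch below. The token types
are made up for illustration, and this is not a description of what the MR
actually does:

    -- Keep whitespace as a token, then reclassify operators in a second pass.
    data Tok
      = TWhite            -- a run of whitespace
      | TIdent String     -- an identifier
      | TOp String        -- an operator, not yet classified
      | TPrefixOp String  -- an operator reclassified as a prefix occurrence
      deriving Show

    -- One token of lookbehind: an operator with whitespace (or the start of
    -- input) before it and a non-whitespace token right after it becomes a
    -- prefix occurrence; everything else is left as-is.
    reclassify :: [Tok] -> [Tok]
    reclassify = go True   -- True = "effectively whitespace before us"
      where
        go _ [] = []
        go spaceBefore (t : ts) = case t of
          TWhite -> t : go True ts
          TOp op | spaceBefore, tightAfter ts -> TPrefixOp op : go False ts
          _      -> t : go False ts

        tightAfter (TWhite : _) = False
        tightAfter []           = False
        tightAfter _            = True

    -- ghci> reclassify [TIdent "f", TWhite, TOp "!", TIdent "x"]
    -- [TIdent "f",TWhite,TPrefixOp "!",TIdent "x"]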
Richard