
This last piece of conversation was *so* reminiscent of a paper [1] I once
read that I was almost convinced it was late by 11 days... until I checked :)
Cheers,
Dinko
[1] http://www.research.att.com/~bs/whitespace98.pdf
On 4/12/07, Simon Marlow wrote:
> Isaac Dupree wrote:
>> Simon Marlow wrote:
>>> I definitely think that -1# should be parsed as a single lexeme.
>>> Presumably it was easier at the time to do it the way it is, I don't
>>> remember exactly.
>>>
>>> I'd support a warning for use of prefix negation, or alternatively you
>>> could implement the Haskell' proposal to remove prefix negation
>>> completely - treat the unary minus as part of a numeric literal in the
>>> lexer only. This would have to be optional for now, so that we can
>>> continue to support Haskell 98 of course.
>>>
>>> Cheers,
>>> Simon
>>
>> Yes, I've been thinking about how to implement both - details will come
>> later when I have more time. I think I have a reasonably working idea of
>> how to divide up the cases for warnings for ambiguous-looking use of
>> both infix and prefix minus, as well as actual syntax changes...
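
A toy tokenizer sketch of that lexer-level reading of unary minus (Token,
lexToks and numTok are made-up names; this only approximates the idea and
is not GHC's lexer):

\begin{code}
import Data.Char (isAlpha, isDigit)

-- Made-up token type, purely for illustration.
data Token = TNum Integer | TVar String | TOp Char
  deriving Show

-- A '-' immediately followed by a digit is folded into the literal;
-- everything else is lexed as usual.
lexToks :: String -> [Token]
lexToks []          = []
lexToks (' ':cs)    = lexToks cs
lexToks ('-':c:cs)
  | isDigit c       = numTok negate (c:cs)
lexToks s@(c:cs)
  | isDigit c       = numTok id s
  | isAlpha c       = let (xs, rest) = span isAlpha s
                      in TVar xs : lexToks rest
  | otherwise       = TOp c : lexToks cs

numTok :: (Integer -> Integer) -> String -> [Token]
numTok sign s = let (ds, rest) = span isDigit s
                in TNum (sign (read ds)) : lexToks rest

-- "x - 1"   ==> [TVar "x", TOp '-', TNum 1]
-- "x -1"    ==> [TVar "x", TNum (-1)]
-- "123-456" ==> [TNum 123, TNum (-456)]
main :: IO ()
main = mapM_ (print . lexToks) ["x - 1", "x -1", "123-456"]
\end{code}

Under such a rule, "x -1" and "x - 1" lex differently, which is exactly the
kind of ambiguous-looking use of minus the warnings would want to flag.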
>> not considering warnings, just syntax: 123abc is two valid Haskell
>> tokens. for example:
>>
>> \begin{code}
>> main = (\n c -> print (n,c)) 123Abc
>> data Abc = Abc deriving Show
>> \end{code}
>>
>> prints (123,Abc). So does this suggest that under a
>> negation-is-part-of-numeric-token regime, 123-456 should be two tokens
>> (a positive number then a negative number, here), as is signum-456 ...
> Yes, absolutely.
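
For contrast, under the current Haskell 98 reading the '-' in 123-456 is
ordinary infix subtraction:

\begin{code}
-- Under the current rules this prints -333: '123-456' lexes as the three
-- tokens 123, '-' and 456, with '-' the subtraction operator.  Under the
-- negative-literal regime discussed above, the same characters would lex
-- as the two tokens 123 and -456.
main :: IO ()
main = print (123-456)
\end{code}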
>> Presently, GHC doesn't even warn about the first thing (123abc) ^_^
> and remember that while '123e 4' is 3 tokens, '123e4' is only 1.
>
> Cheers,
> Simon
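
In the same spirit as the 123Abc snippet above, a small made-up example
that makes the three-token reading of '123e 4' visible by bringing an e
into scope:

\begin{code}
-- '123e4' is a single floating-point literal, but '123e 4' lexes as the
-- three tokens 123, e and 4, so the lambda below really gets three
-- arguments.  This prints (123,"e",4,1230000.0).
main :: IO ()
main = (\n f x -> print (n, f, x, 123e4)) 123e 4
  where e = "e"
\end{code}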
--
Cheers,
Dinko