Automatic fixity allocation for symbolic operators

Hi - I'm wondering if it is possible to construct a methodical procedure to assign a fixity to symbolic operators, so that we could get rid of the need for user defined fixities. User defined fixities are an enormous problem for an interactive editor, because it is not possible to construct a parse tree of some code if you don't know what the fixities are, and an editor must be able to do something useful with a code fragment without having to look inside other modules (because the other modules may not yet have even been written!). Also, the programmer or reader of code needs to be able to understand each code fragment independently as well.
From the Haskell98 report, the Prelude defines:
    infixr 9  ., !!
    infixr 8  ^, ^^, **
    infixl 7  *, /, `quot`, `rem`, `div`, `mod`
    infixl 6  +, -
    infixr 5  :, ++
    infix  4  ==, /=, <, <=, >=, >
    infixr 3  &&
    infixr 2  ||
    infixl 1  >>, >>=
    infixr 1  =<<
    infixr 0  $, $!, `seq`

Suppose we ignore the varid operators and just consider the symbolic ops. What I'm trying to find is a systematic way to assign fixities to all other possible sequences of symbol characters that is consistent with what we've already got in the Prelude.

As a first step, we could say that "mirrored" operators must share the same precedence, ie:

    =<<    >>=
    <      >

For associativity, we could assign each character an associativity weight:

    -1   left associative
     0   neutral
     1   right associative

and say that the associativity is simply the sign of the sum of the associativity weights of the characters, thus:

    >      -1
    =       0
    <       1
    =<<     0 + 1 + 1    ie infixr

Note that I don't care about non-associative ops, since the non-associativity of something should be a question for the type checker, not the parser, imho - ideally we should be able to create a parse tree for any possible operator expression.

To form the precedence, we could assign each character a precedence value, eg:

    9   . !
    8   ^
    7   * /
    6   + -
    5   :
    4   =
    3   &
    2   |
    1   < >
    0   $

A first attempt might be to say that the precedence is simply the decimal expansion of the precedence values, eg >>= has precedence 1.14 and $! has precedence 0.9. However, as noted above, mirrored ops must share the same precedence, so an alternative is to create some ordering of characters such that when the set of characters in an operator is sorted according to this ordering, the decimal expansion of the precedence digits will give the same relative ordering for operator precedences as the Prelude. For example, using

    $ | & + - * / ^ . ! : < > =

as the ordering, we'd get:

    infixr 9  .  !!                     9     9.9
    infixr 8  ^, ^^, **                 8     8.8   7.7
    infixl 7  *, /                      7     7
    infixl 6  +, -                      6     6
    infixr 5  :, ++                     5     6.6
    infix  4  ==, /=, <, <=, >=, >      4.4   7.4   1   1.4   1.4   1
    infixr 3  &&                        3.3
    infixr 2  ||                        2.2
    infixl 1  >>, >>=                   1.1   1.14
    infixr 1  =<<                       1.14
    infixr 0  $, $!                     0     0.9

Although most existing ops get a similar precedence (eg ^ ^^ and ** are all still in the same group relative to the other ops, despite the fact that within the group there is now an ordering), the main problem seems to be that < <= >= > get precedences that bind too loosely, and /= is totally wrong. This problem aside, the above algorithm would give sensible precedences for such things as:

    :>   <:      5.1
    <+>  <*>     6.11  7.11

where the use of <> neutralizes the associativity contribution of < or > respectively (since (-1) + 1 == 0), giving us the intuitive associativity we'd expect from the "interesting" character in the middle.

(The problem of /= getting 7.4 could be solved by putting / after = in the order, to get 4.7, but unfortunately this would mean that since <> must be before =, <> would be before /, so > would get the wrong precedence compared to <*>.)

Another issue is that there is no assignment of associativity weights such that "*" is infixl but "**" is infixr (ditto + and ++), so perhaps we'd need to associate each character with an associativity function. Similar to precedences, we then define an associativity ordering and let the resulting associativity be the sign of the composition of the sorted functions applied to 1, eg:

    ^   const (-1)
    *   \x -> x * (-1)
    =   id
    &   const (-1)
    <   (+1)
    >   (+ (-1))

Then

    *     (\x -> x * (-1)) 1                          === -1    ie left
    **    (\x -> x * (-1)) . (\x -> x * (-1)) $ 1     === +1    ie right
    >>=   -- > > =
          (+ (-1)) . (+ (-1)) . id $ 1                === -1
    <*>   -- remember ordering: * < >
          (\x -> x * (-1)) . (+1) . (+ (-1)) $ 1
            === ((1 - 1) + 1) * (-1) === -1           ie left, as required!!! :-)

Anyway, this is as far as I've got so far trying to rationally reconstruct the original Prelude precedences, to achieve the golden aim of eliminating the infinite problem of fixity declarations from source code... :-)

(Regarding `div` `seq` etc - I'd just assign them all the same precedence, because use of multiple varid ops in the same expression with different precedences, or trying to combine them with symbolic ops, is just a highway to confusion city imho. Note also that (seq x $ exp) is not only clearer but is also one character shorter than (x `seq` exp).)

The main open problem is finding an algorithm which assigns a good precedence to >>= as well as to >= and /= and > and <*>.... ;-)

Any ideas?

Thanks, Brian.
--
www.metamilk.com
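(For concreteness, here is a rough, runnable sketch of the scheme described above. The character ordering, precedence digits and associativity weights are taken from the tables in the post; the function names, and the defaults for characters not listed, are editorial assumptions. It implements only the simple weight-based associativity, not the later "associativity function" refinement - so, as noted above, it cannot make * infixl while ** is infixr.)

import Data.List (sortBy, elemIndex)
import Data.Maybe (fromMaybe)
import Data.Ord (comparing)

-- Character ordering used to sort an operator's characters before
-- reading off the precedence digits: $ | & + - * / ^ . ! : < > =
charOrder :: String
charOrder = "$|&+-*/^.!:<>="

-- Per-character precedence digits, from the table in the post.
charPrec :: Char -> Int
charPrec c = case c of
  '.' -> 9; '!' -> 9
  '^' -> 8
  '*' -> 7; '/' -> 7
  '+' -> 6; '-' -> 6
  ':' -> 5
  '=' -> 4
  '&' -> 3
  '|' -> 2
  '<' -> 1; '>' -> 1
  '$' -> 0
  _   -> 0   -- characters not covered by the post: arbitrary default

-- Per-character associativity weights: -1 left, 0 neutral, 1 right.
-- The post only gives weights for <, > and =; the default here is a
-- placeholder, and no single table can make * infixl but ** infixr.
charAssoc :: Char -> Int
charAssoc c = case c of
  '<' -> 1
  '>' -> -1
  '=' -> 0
  _   -> -1

-- Precedence in the decimal notation used above: sort the characters
-- by charOrder and read off the digits, e.g. ">>=" gives "1.14".
opPrec :: String -> String
opPrec op = case map charPrec (sortBy (comparing rank) op) of
  []       -> "?"
  [d]      -> show d
  (d : ds) -> show d ++ "." ++ concatMap show ds
  where
    rank c = fromMaybe (length charOrder) (elemIndex c charOrder)

-- Associativity: the sign of the summed weights.
data Assoc = L | N | R deriving Show

opAssoc :: String -> Assoc
opAssoc op = case compare (sum (map charAssoc op)) 0 of
  LT -> L; EQ -> N; GT -> R

main :: IO ()
main = mapM_ report [">>=", "=<<", "<*>", "<+>", "$!"]
  where report o = putStrLn (o ++ "  " ++ opPrec o ++ "  " ++ show (opAssoc o))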

Brian Hulley wrote:
    infixr 9  .  !!                     9     9.9
    infixr 8  ^, ^^, **                 8     8.8   7.7
    infixl 7  *, /                      7     7
    infixl 6  +, -                      6     6
    infixr 5  :, ++                     5     6.6
    infix  4  ==, /=, <, <=, >=, >      4.4   7.4   1   1.4   1.4   1
    infixr 3  &&                        3.3
    infixr 2  ||                        2.2
    infixl 1  >>, >>=                   1.1   1.14
    infixr 1  =<<                       1.14
    infixr 0  $, $!                     0     0.9
Ouch.

Really, the first priority in the language should be human readability. Looking up a fixity isn't that hard, but remembering a rule like this is pretty awful. You also restrict the freedom of library designers for no good reason. Precedences aren't usually random ...

As far as editors go I have little sympathy. I also see nothing wrong with forcing a coder to have at least a module stub that defines its interface and the operator precedences, to make the parsing work reliably. You'll have to deal with the case where parsing fails anyway, and it shouldn't be too hard to distinguish between failures due to insufficient information (which shouldn't be tagged as errors, but maybe give some other indication) and real errors.

Bertram

Perhaps the editor could assume a default precedence when the
user-defined precedence is not yet available. Preferably, the editor
would also somehow yell at the user to indicate that it is making such
an assumption.
I think it's unreasonable to tie programmers' hands for the sake of
off-loading rather trivial tasks to editors.
Nick

On 10/14/06, Nicolas Frisby
Perhaps the editor could assume a default precedence when the user-defined precedence is not yet available. Preferably, the editor would also somehow yell at the user to indicate that it is making such an assumption.
Perhaps it could even assume the fixity that is specified in the Prelude for operators without fixity declarations, thus behaving exactly like the compiler would: "Any operator lacking a fixity declaration is assumed to be infixl 9" (4.4.2).

I agree that changing the language in such an unintuitive way - breaking existing code in the process - to suit an editor is counterproductive and ridiculous.

/g

On 10/14/06, Brian Hulley
User defined fixities are an enormous problem for an interactive editor
This is the second or third time you've proposed a language change based on the editor you're writing. I don't think this is a fruitful avenue.

There are three ways to change Haskell's lexical structure:

1. DIY on an open-source compiler/interpreter of your choice.
2. Write your own compiler/interpreter.
3. Get the change into Haskell''.

If the Haskell'' procedure is like the Haskell' procedure, you'll have to do 1 or 2 before you do 3.

It's possible that you will convince someone that your syntax changes are worth doing, and that this person will do step 1 or step 2 for you, but I don't think so. I haven't done the research myself, but I think if you look at the source control logs for Your Favorite Haskell Compiler/interpreter and the HCAR, you will find very few commits/projects devoted to syntax. I think this is because Haskellers are much more interested in semantics.

Proposing changes that break existing code or segment the Haskell code base just doesn't seem like a win.

Jim

Jim Apple wrote:
On 10/14/06, Brian Hulley wrote:
User defined fixities are an enormous problem for an interactive editor
This is the second or third time you've proposed a language change based on the editor you're writing. I don't think this is a fruitful avenue.
Bertram Felgenhauer wrote:
Brian Hulley wrote:
infixr 9  .  !!    9    9.9
Ouch.
As far as editors go I have little sympathy.
Nicolas Frisby wrote:
I think it's unreasonable to tie programmers' hands for the sake of off-loading rather trivial tasks to editors.
J. Garrett Morris wrote:
On 10/14/06, Nicolas Frisby wrote:
Perhaps the editor could assume a default precedence ...
I agree that changing the language in such an unintuitive way - breaking existing code in the process - to suit an editor is counterproductive and ridiculous.
I feel that the example I used of an interactive editor has somewhat eclipsed the purpose of my post, which was basically to see if there is some way to arrive at an agreed association of operator precedences with symbolic ops, such that it would include the current Prelude operators and have some kind of intuitive extension to all other possible symbolic ops, respecting one's expectations that <*> and <+> should be similar to * and + in terms of relative precedence and absolute associativity. After all, if a library writer decided to make <*> bind more loosely than <+>, one might justifiably be frustrated at the apparent inconsistency; therefore why not just make all this systematic and fixed, to remove these problems?

I had thought perhaps someone might have already made such a "global" operator table suitable for functional programming, or some suitable algorithm - hence the inclusion of my first stumbling attempt at one, to try and set the ball rolling and at least save someone else several hours of hard work going down the same dead end, if such it is.

Jim Apple wrote:
There are three ways to change Haskell's lexical structure:
1. DIY on an open-source compiler/interpreter of your choice. 2. Write your own compiler/interpreter. 3. Get the change into Haskell''.
If the Haskell'' procedure is like the Haskell' procedure, you'll have to do 1 or 2 before you do 3.
It's possible that you will convince someone that your syntax changes are worth doing, and that this person will do step 1 or step 2 for you, but I don't think so. I haven't done the research myself, but I think if you look at the source control logs for Your Favorite Haskell Compiler/interpreter and the HCAR, you will find very few commits/projects devoted to syntax. I think this is because Haskellers are much more interested in semantics.
Afaiu, the whole concept of type classes arose because of the desire to avoid having to think up different names for related functions, and MPTCs were suggested by the syntax for single-parameter TCs (though I can't remember the reference where I think I read this latter point).
Proposing changes that break existing code or segment the Haskell code base just doesn't seem like a win.
Since all syntactic and many semantic changes necessarily break existing code or segment the code base, this would seem to imply that the language should not be changed at all, or that everything must be backwards compatible, so that we have to drag the Achilles heel of original mistakes around for all eternity. I'd have thought a solution would be to just use a tool to automatically convert existing code to whatever new syntax is found to be better, thus allowing the language to be gradually perfected as more and more issues are brought to light.

Still, I agree with you that a more sensible approach, for someone like me writing an editor, is simply to take Haskell as an inspiration and incorporate my ideas in it, so that any changes can later be seen in relation to a (hopefully facilitated and enjoyable) programming experience, rather than trying to share my ideas piecemeal and insufficiently contextualized as they arise.

Thanks to all who replied,
Brian.
--
Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On Sat, 14 Oct 2006, Jim Apple wrote:
On 10/14/06, Brian Hulley wrote:
User defined fixities are an enormous problem for an interactive editor
This is the second or third time you've proposed a language change based on the editor you're writing. I don't think this is a fruitful avenue.
I assume that editor developers, compiler writers and language tool writers (documentation extraction, source code formatting) get much more insight into syntactic issues than other users. Finding some problem when implementing one of these tools often reveals weak points in the language syntax.

I repeat my example of a source code formatting tool which must decide whether to format

a + b * c

or

a + b * c

It needs to know the precedences of the used operators, which, as Brian pointed out, is possibly not even defined somewhere. Alternatively, consider a compiler which must have a parser that adapts the grammar to the module contents (infix statements) in order to parse the module correctly. Even worse: the same symbol can have different precedences, because infix operators can be declared locally.

The same problem exists for a human: in order to analyse the meaning of an expression with infix operators he must know the precedences. That is, problems with infix operators are by far not bound to a text editor! You may argue that difficult language syntaxes like that of C++ push parser technique forward. However, this seems to me like Windows pushing memory development.

Concerning the automatic precedence assignment according to the characters in an infix operator, I think that it is difficult to find a reasonable algorithm, because that algorithm would also limit the kind of operator schemes that can be used. If the operations can be associated with basic mathematical operations like +, *, =, then it would work, but what about different structures? How would you translate lattice operations "up" and "down", or operations like "parallel" and "serial" composition?

Summed up, I think infix handling must be somehow improved, but I don't know how.

Hello, Henning Thielemann wrote:
[...] I repeat my example of a source code formatting tool which must decide whether to format
a + b * c
or
a + b * c
It needs to know the precedences of the used operators, which, as Brian pointed out, is possibly not even defined somewhere. Alternatively consider a compiler which must have a parser that must adapt the grammar to the module contents (infix statements) in order to parse the module correctly. Even worse: The same symbol can have different precedences, because infix operators can be declared locally.
I would suggest to first parse modulo fixity and precedence (keep all operator applications in flat lists) and only later, if/when precedence and fixity information is available, parse those operator lists.
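(For what it's worth, a minimal sketch of that two-stage approach - the types and names here are invented for the example, not taken from any existing parser. The first stage would produce the flat "operator soup"; the second resolves it by precedence climbing once a fixity table is available, defaulting unknown operators to infixl 9 as the report does.)

import qualified Data.Map as M

data Assoc = AssocLeft | AssocRight deriving (Eq, Show)

data Expr
  = Atom String
  | BinOp String Expr Expr
  deriving Show

-- A flat list of operands and operators: e1 `op1` e2 `op2` e3 ...
data Flat = Flat Expr [(String, Expr)] deriving Show

type Fixities = M.Map String (Int, Assoc)

-- Resolve a flat list into a tree by precedence climbing.
resolve :: Fixities -> Flat -> Expr
resolve env (Flat e rest) = fst (go e rest 0)
  where
    fixity op = M.findWithDefault (9, AssocLeft) op env
    go lhs [] _ = (lhs, [])
    go lhs ops@((op, rhs) : more) minPrec
      | prec < minPrec = (lhs, ops)
      | otherwise =
          let nextMin = if assoc == AssocLeft then prec + 1 else prec
              (rhs', rest') = go rhs more nextMin
          in go (BinOp op lhs rhs') rest' minPrec
      where (prec, assoc) = fixity op

example :: Expr
example = resolve (M.fromList [("+", (6, AssocLeft)), ("*", (7, AssocLeft))])
                  (Flat (Atom "a") [("+", Atom "b"), ("*", Atom "c")])
  -- BinOp "+" (Atom "a") (BinOp "*" (Atom "b") (Atom "c"))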
The same problem for a human: In order to analyse the meaning of an expression with infix operators he must know the precedences. That is, problems with infix operators are by far not bound to a text editor!
Quite so. I wouldn't consider it a problem though. The meaning of the expression also depends on the meaning (definition) of the operators involved, and of other functions used in the expression. That information may also be located elsewhere or even not yet available, just like precedence declarations. Locality has been wittingly sacrificed for flexibility.

Regards,

Arie
--
Mr. Pelican Shit may be Willy. ^ /e\ ---

Regarding lattices and locality...
This idea probably won't help with editors, but the OP's question has
sparked a discussion here and some thinking in my head--thanks Brian.
What if operator precedences were specified as a partial order instead
of using numbers? Using numbers implies a potentially deceptive sense
of completeness: "well I've given @+@ a precedence 5 and let that be
written in stone forever so that all conflicts are resolved
henceforth."
Most fixities I've dealt with are put into play only amongst related
operators in a project (@+@ or @*@ in MySpecialLib) or amongst
operators from a related library. If the syntax were like:
infixr @+@
infixr @+@
prec @+@ > @*@
then the programmer gets to specify as much as they have decided and no more.
5 @+@ 3 * 7 would simply be under-defined and would require parens.
Perhaps Brian's original idea of systematically determining
unspecified operator precedences could be recast in this system.
Consider (woefully under-contemplated) precedence specifiers such as:
precInherit <*> -> @*@
precAll ?+? > ?*?
Regarding precAll: I'm not a regular expressions/glob for semantics
fan, but you get the idea.
The idea is to define a partial order on operators and let undecided
operator relationships remain undefined. Composition remains an open
issue, but perhaps someone else will have a light bulb about that.
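(To illustrate, here is one hypothetical way a tool could represent such partial-order declarations. All names are invented, and "prec x > y" is assumed to mean "x binds more tightly than y"; undeclared pairs simply demand parentheses, as in the 5 @+@ 3 * 7 example above.)

import qualified Data.Set as S

type Op = String

-- Declared facts, closed under transitivity: (a, b) means
-- "a binds more tightly than b".
newtype PrecOrder = PrecOrder (S.Set (Op, Op))

declare :: [(Op, Op)] -> PrecOrder
declare = PrecOrder . transitiveClosure . S.fromList

transitiveClosure :: S.Set (Op, Op) -> S.Set (Op, Op)
transitiveClosure rel
  | rel' == rel = rel
  | otherwise   = transitiveClosure rel'
  where
    rel' = rel `S.union` S.fromList
             [ (a, c) | (a, b) <- S.toList rel, (b', c) <- S.toList rel, b == b' ]

data Decision = FirstTighter | SecondTighter | NeedParens deriving Show

-- How a parser would decide between two adjacent operators.
compareOps :: PrecOrder -> Op -> Op -> Decision
compareOps (PrecOrder rel) a b
  | (a, b) `S.member` rel = FirstTighter
  | (b, a) `S.member` rel = SecondTighter
  | otherwise             = NeedParens   -- under-defined: require parentheses

example :: Decision
example = compareOps (declare [("@+@", "@*@")]) "@*@" "@+@"
  -- SecondTighter: @+@ was declared to bind more tightly than @*@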
Nick

On Mon, 16 Oct 2006, Nicolas Frisby wrote:
Regarding latticess and locality...
This idea probably won't help with editors, but the OP's question has sparked a discussion here and some thinking in my head--thanks Brian.
What if operator precedences were specified as a partial order instead of using numbers? Using numbers implies a potentially deceptive sense of completeness: "well I've given @+@ a precedence 5 and let that be written in stone forever so that all conflicts are resolved henceforth."
Most fixities I've dealt with are put into play only amongst related operators in a project (@+@ or @*@ in MySpecialLib) or amongst operators from a related library. If the syntax were like:
infixr @+@
infixr @+@
prec @+@ > @*@
dict.leo.org says: "great minds think alike" http://www.haskell.org/pipermail/haskell-cafe/2005-February/009260.html

Nicolas Frisby wrote:
What if operator precedences were specified as a partial order instead of using numbers?
Henning Thielemann wrote:
dict.leo.org says: "great minds think alike"
Funny, I thought of this too. It seems very natural. You would probably want an implicit taking of transitive closure, to reduce the needed number of declarations. However, to consistently parse an expression, the precedence relation does not need to be transitive (right? one only needs to compare the precedence of adjacent operators), so you could even allow cycles in the precedence graph :s - not sure if that would ever be useful.
Perhaps Brian's original idea of systematically determining unspecified operator precedences could be recast in this system. Consider (woefully under contemplated) precedence specifiers such as:
precInherit <*> -> @*@ precAll ?+? > ?*?
Regarding precAll: I'm not a regular expressions/glob for semantics fan, but you get the idea.
I'm not convinced that it would be helpful to attach some special meaning to the "layout" of the operator symbol. -- Mr. Pelican Shit may be Willy. ^ /e\ ---

"Nicolas Frisby"
What if operator precedences were specified as a partial order instead of using numbers?
I suggested something that did that to fplangc back in 1987... Thu, 19 Nov 87 17:49:50 GMT in fact! Simon PJ later forwarded a message from Stef Joosten to similar effect... I made a more concrete proposal later and Phil Wadler tidied it up. I think it even got as far as a draft of the language, but I think it was decided that it was just too difficult (both for human and computer) to parse.
--
Jón Fairbairn   Jon.Fairbairn@cl.cam.ac.uk
http://www.chaos.org.uk/~jf/Stuff-I-dont-want.html (updated 2006-09-13)

On Mon, 16 Oct 2006, Jón Fairbairn
I made a more concrete proposal later and Phil Wadler tidied it up. I think it even got as far as a draft of the language, [...]
Do you know where this proposal/draft can be found? -- /NAD

Nils Anders Danielsson
On Mon, 16 Oct 2006, Jón Fairbairn wrote:
I made a more concrete proposal later and Phil Wadler tidied it up. I think it even got as far as a draft of the language, [...]
Do you know where this proposal/draft can be found?
On the fplangc mailing list. Thomas Johnsson has the archive, but as it was a closed mailing list I don't know if it's OK to publish the URL. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

Hello Nicolas, Monday, October 16, 2006, 6:31:42 PM, you wrote:
What if operator precedences were specified as a partial order instead of using numbers?
precInherit <*> -> @*@ precAll ?+? > ?*?
Regarding precAll: I'm not a regular expressions/glob for semantics fan, but you get the idea.
The idea is to define a partial order on operators and let undecided operator relationships remain undefined. Composition remains an open issue, but perhaps someone else will have a light bulb about that.
well, this is what is typically done when you define expression parsers by hand (for any language that has fixed precedences), something like this:

expr1 ::= expr2 | expr1 + expr2 | expr1 - expr2
expr2 ::= expr3 | expr2 * expr3 | expr2 / expr3
expr3 ::= ...

but when you want to have user-defined operators, that will mean that you either need to define precedences relative to all other operators (including those from other libs), or sometimes user programs will not compile because they used a combination of operators with undefined precedence.

good for making a good headache :)

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com
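(For illustration only, the layered grammar Bulat sketches might look like this as a hand-rolled parser in Haskell, here using ReadP; the names are editorial, and the point is simply that the precedence levels are frozen into the nonterminals, leaving no room for user-defined levels.)

import Text.ParserCombinators.ReadP
import Data.Char (isDigit)

data Expr = Num Integer | Bin Char Expr Expr deriving Show

-- expr1 ::= expr2 | expr1 '+' expr2 | expr1 '-' expr2
expr1 :: ReadP Expr
expr1 = expr2 `chainl1` binop "+-"

-- expr2 ::= expr3 | expr2 '*' expr3 | expr2 '/' expr3
expr2 :: ReadP Expr
expr2 = expr3 `chainl1` binop "*/"

-- expr3 ::= number | '(' expr1 ')'
expr3 :: ReadP Expr
expr3 = (Num . read <$> munch1 isDigit)
    +++ between (char '(') (char ')') expr1

binop :: [Char] -> ReadP (Expr -> Expr -> Expr)
binop cs = Bin <$> satisfy (`elem` cs)

parseExpr :: String -> Maybe Expr
parseExpr s = case [ e | (e, "") <- readP_to_S expr1 s ] of
  [e] -> Just e
  _   -> Nothing
-- parseExpr "1+2*3"  ==>  Just (Bin '+' (Num 1) (Bin '*' (Num 2) (Num 3)))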

Good evening, Bulat Ziganshin wrote:
but when you want to have user-defined operators, that will mean that you need either to define precedences to all other operators (including those from other libs), or sometimes user programs will not compile because they used combination of operators with undefined precedence
good for making good headache :)
Why is that? A library would indeed only declare the relative precedence of its operators with respect to operators that 1) it knows of; and 2) are related (or general) enough so that there is a reasonable choice of precedence. I think it is even good to force the user to declare any other, more uncommon, precedences; better than the current situation, where the relative precedence of operators from unrelated libraries is fixed pretty much arbitrarily, as an artefact of the imposed total order. Regards, Arie -- Mr. Pelican Shit may be Willy. ^ /e\ ---

I would imagine (reading into Jon Fairbairn's note) that the
difficulty is in combining it with the traditional handling of
precedences in parsing systems, as Bulat was describing. AFAIK, which
is not much on this topic, the notion of precedence in traditional LR
spewers is strictly tied to numeric precedences that are known pretty
much a priori.
Since mapping all the way to numbers seems like overkill to resolve
such infix ambiguities, I'd expect such an adjustment to parser
generators wouldn't be horrific--it may even be more natural on the
implementation side.
Nick

Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity. There have been 3 separate times where I've wanted an operator just above 0 ($) but less than 1 (>>= or >>>), or else just below 0 (like a superlow $$):

infix 0.5 ???
infix -1 $$

The only change would be internal to the compiler, wouldn't it? Since fixity is just syntactic sugar, there should be no semantic difficulties.

Dan

Hello Dan, Saturday, November 4, 2006, 5:07:15 AM, you wrote:
Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity.
yes, i think the same. for example, once i tried to define a postfix 'when' operator like those in perl/ruby:

print msg `on` mode==debug

but failed, because my code frequently contains '$' and there is no way to define an operation with a lower precedence.

really, there is another alternative to solve my particular problem: make `op` applications have a fixed -1 precedence. such applications look "heavyweight", and once i had a wonderful debugging story just because to my eyes it was obvious that (a `div` b+1) means "do the add before the div".

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com
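(A small illustration of the situation Bulat describes. The operator is named on' here to avoid clashing with Data.Function.on, and the type is an assumption made for the example; the point is only that 0 is already the lowest level available, so nothing can be made looser than ($).)

import Control.Monad (when)

infixl 0 `on'`   -- 0 is already the lowest precedence Haskell 98 allows

-- A hypothetical trailing-condition operator, like Perl/Ruby's postfix 'when'.
on' :: Monad m => m () -> Bool -> m ()
act `on'` cond = when cond act

main :: IO ()
main = do
  let debugging = True
  -- The intended style would be
  --     putStrLn $ "x = " ++ show 42  `on'`  debugging
  -- but ($) is infixr 0, `on'` cannot be given anything lower, and
  -- mixing a left- and a right-associative operator at the same level
  -- is itself a parse error - so the ($) part has to be parenthesised:
  (putStrLn $ "x = " ++ show (42 :: Int)) `on'` debugging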

On Sat, 4 Nov 2006, Bulat Ziganshin wrote:
Hello Dan,
Saturday, November 4, 2006, 5:07:15 AM, you wrote:
Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity.
yes, i think the same. for example, once i've tried to define postfix 'when' operator like those in perl/ruby
print msg `on` mode==debug
but failed because my code frequently contains '$' and there is no way to define operation with a lower precedence
This could be solved by the solutions proposed in this thread: http://www.haskell.org/pipermail/haskell-cafe/2006-October/018923.html

Hello Henning, Monday, November 6, 2006, 1:27:54 PM, you wrote:
print msg `on` mode==debug
but failed because my code frequently contains '$' and there is no way to define operation with a lower precedence
This could be solved by the solutions proposed in this thread:
http://www.haskell.org/pipermail/haskell-cafe/2006-October/018923.html
it's too complex for my purposes. -1 priority is enough -- Best regards, Bulat mailto:Bulat.Ziganshin@gmail.com

On Mon, 6 Nov 2006, Bulat Ziganshin wrote:
Hello Henning,
Monday, November 6, 2006, 1:27:54 PM, you wrote:
print msg `on` mode==debug
but failed because my code frequently contains '$' and there is no way to define operation with a lower precedence
This could be solved by the solutions proposed in this thread:
http://www.haskell.org/pipermail/haskell-cafe/2006-October/018923.html
it's too complex for my purposes. -1 priority is enough
This reminds me of good old BASIC programming days, where we numbered lines in steps of 10, in order to be able to insert lines later. Unfortunately, BASIC never supported negative or fractional line numbers. :-)

But DEC's language FOCAL had fractional line numbers. :)

I don't see how it's too complex. Isn't
infixl ??
prec ?? < $
(??) = whenOperator
exactly what you want to say?
Sure you can solve the problem with negative fixities, but that's less
expressive than the above (the total order is actually an
over-specification). You want ?? to bind less tightly than does $;
that's exactly what this approach would let you specify.
When conjuring a number less than 0, what makes -1 a more appropriate
choice than -2? There's really no answer to that question. Fractional
fixities make it even worse.
Nick

Hello Nicolas, Wednesday, November 8, 2006, 1:25:23 AM, you wrote:
prec ?? < $
[...] You want ?? to bind less tightly than does $; that's exactly what this approach would let you specify.
and how then will the compiler guess the relative priority of this operator compared to '$!'? :)

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin@gmail.com

On Wed, 8 Nov 2006, Bulat Ziganshin wrote:
Hello Nicolas,
Wednesday, November 8, 2006, 1:25:23 AM, you wrote:
prec ?? < $
[...] You want ?? to bind less tightly than does $; that's exactly what this approach would let you specify.
and how then will the compiler guess the relative priority of this operator compared to '$!'? :)
(What might the smiley mean?) It could not guess it, and this is good! However, if the Prelude defines ($) and ($!) to have the same precedence, then the compiler could automatically derive that prec ?? < $!.

Well let's see. First I'll assume that
prec $! = $
is how $! was specified. Thus we know both ?? < $ and $! = $. Let's
derive the relation between ?? and $!
?? < $
=> ?? < $! {$ = $!}
So I think that is pretty straight-forward. ":)" is a parse error... ;)
This does bring up the interesting case where we want an operator
between $ and $! (or some less offensive pair of operators with equal
precedence). This, like Danielsson's later post, is a case that
deserves some thought. If we handle such cases in a consistent way, I
think we might have something quite useful.
On the positive side, when I want $! to behave like $ (or perhaps more
appropriately =*= to behave like ==) then I don't have to lookup the
numeric precedence of $ (or ==). I can just say
prec $! = $
and be done with it. There's no arbitrary middle man.
Nick

Nicolas Frisby wrote:
I don't see how it's too complex. Isn't
infixl ??
prec ?? < $
(??) = whenOperator
exactly what you want to say?
Yes. I'd add that the system should (of course) take the transitive closure over all explicitly stated precedence relations. That really cuts down the number of cases one needs to specify. For instance, we'll want the Prelude operators to retain their relative precedences, so the Prelude will contain (using a slightly changed syntax):

prec ($ $! seq) < (>> >>= =<<) < (||) < (&&) < (== /= < <= >= >) < (:)
prec (:) < (+ -) < (* / quot rem div mod) < (^ ^^ **) < (.)

(Syntax summary:
* ops with equal precedence are grouped by parentheses, which are mandatory even if they contain only a single operator
* precedence relations may use '<' or '>' between equality groups
* no commas between operators are necessary if they are separated by whitespace
* backticks for operators with function names can be left out)

Fixity would have to be declared separately (using 'infixl' or 'infixr', or whatever). Whenever we would currently declare an operator as having precedence N, we'd now declare it to have precedence equal to one or more operators from the corresponding precedence group, e.g.

prec (<+> +)

or, if you like

prec (<+> + -)

However

prec (<+> + *)

would be an error, because (*) and (+) have previously been declared to have different precedence.

Compilers can support a special command line switch "--dump-precedences" (and interpreters an interactive command) to display the precedences of all operators in scope in some module. (The latter would be quite a useful addition even with the current numerical precedence system.)

The more I think about it the more I like the idea.

Cheers,

Ben
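(As an illustration only, here is one way the group declarations and their transitive closure could be modelled; all names are editorial, and the conflict check is deliberately crude rather than part of any proposal.)

import qualified Data.Set as S

type Op = String

-- prec (a b) < (c) < (d e)   is modelled as   [["a","b"],["c"],["d","e"]]
type Chain = [[Op]]

data PrecRel = PrecRel (S.Set (Op, Op)) (S.Set (Op, Op))  -- (equal, below)

fromChains :: [Chain] -> PrecRel
fromChains chains = saturate (PrecRel eqs lts)
  where
    eqs = S.fromList [ (a, b) | grp <- concat chains, a <- grp, b <- grp ]
    lts = S.fromList [ (a, b) | chain <- chains
                              , (g, h) <- orderedPairs chain, a <- g, b <- h ]
    orderedPairs xs = [ (g, h) | (i, g) <- zip [0 :: Int ..] xs
                               , (j, h) <- zip [0 ..] xs, i < j ]

-- Close under transitivity and under "equal operators are interchangeable".
saturate :: PrecRel -> PrecRel
saturate r@(PrecRel eqs lts)
  | eqs' == eqs && lts' == lts = r
  | otherwise                  = saturate (PrecRel eqs' lts')
  where
    eqs' = eqs `S.union` compose eqs eqs
    lts' = S.unions [lts, compose lts lts, compose eqs lts, compose lts eqs]
    compose xs ys = S.fromList
      [ (a, c) | (a, b) <- S.toList xs, (b', c) <- S.toList ys, b == b' ]

data Verdict = Equal | Lower | Higher | Unrelated | Inconsistent deriving Show

verdict :: PrecRel -> Op -> Op -> Verdict
verdict (PrecRel eqs lts) a b
  | (eq && (lo || hi)) || (lo && hi) = Inconsistent
  | eq        = Equal
  | lo        = Lower
  | hi        = Higher
  | otherwise = Unrelated
  where
    eq = (a, b) `S.member` eqs
    lo = (a, b) `S.member` lts
    hi = (b, a) `S.member` lts

preludeChain, userChain :: Chain
preludeChain = [ ["$","$!","seq"], [">>",">>=","=<<"], ["||"], ["&&"]
               , ["==","/=","<","<=",">=",">"], [":"], ["+","-"]
               , ["*","/","quot","rem","div","mod"], ["^","^^","**"], ["."] ]
userChain    = [ ["<+>","+"] ]   -- prec (<+> +)

-- verdict (fromChains [preludeChain, userChain]) "<+>" "*"  ==>  Lower
--   (derived transitively: <+> is equal to +, and + is below *)
-- Using the chain for  prec (<+> + *)  instead would make "+" and "*"
-- both equal and ordered, which 'verdict' reports as Inconsistent.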

haskell-prime-bounces@haskell.org wrote:
Hello Dan,
Saturday, November 4, 2006, 5:07:15 AM, you wrote:
Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity.
yes, i think the same. for example, once i've tried to define postfix 'when' operator like those in perl/ruby
print msg `on` mode==debug
but failed because my code frequently contains '$' and there is no way to define operation with a lower precedence
really, there is another alternative to solve my particular problem: make `op` applications having fixed -1 precedence. such applications look "heavyweight" and once i have a wonderful debugging story just because for my eyes it was obvious that (a `div` b+1) means "do add before div"
I'd support fractional and negative fixity. It's a simple change to make, but we also have to adopt http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/FixityResolut... I've added the proposal to the end of that page. In fact, the page already mentioned that we could generalise fixity levels, but it didn't mention fractional or negative values being allowed. Cheers, Simon

On Tue, 7 Nov 2006, Simon Marlow wrote:
I'd support fractional and negative fixity. It's a simple change to make, but we also have to adopt
http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/FixityResolut...
I've added the proposal to the end of that page. In fact, the page already mentioned that we could generalise fixity levels, but it didn't mention fractional or negative values being allowed.
Maybe that page could also mention earlier proposals and the solutions without precedence numbers. I prefer the non-numeric approach with rules like "(<) binds more tightly than (&&)", because it says what is intended and it allows making things unrelated that are unrelated, e.g. infix operators from different libraries. Consequently, a precedence relation to general infix operators like ($) and (.) would have to be defined in each library.

Henning Thielemann wrote:
On Tue, 7 Nov 2006, Simon Marlow wrote:
I'd support fractional and negative fixity. It's a simple change to make, but we also have to adopt
http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/FixityResolution
I've added the proposal to the end of that page. In fact, the page already mentioned that we could generalise fixity levels, but it didn't mention fractional or negative values being allowed.
Maybe that page could also mention earlier proposals and the solutions without precedence numbers. I prefer the non-numeric approach with rules like "(<) binds more tightly than (&&)", because it says what is intended and it allows to make things unrelated that are unrelated, e.g. infix operators from different libraries.
This is a much more heavyweight change, and it's not a clear win. Yes, I agree that in some ways it's strange to enforce a total order between unrelated things, but on the other hand it's very convenient, and easy to understand.

Currently the default fixity is infixl 9. That is, if you don't declare a fixity, you automatically get a fixity that can be used relative to every other operator. This is quite useful - I bet if we made it mandatory to declare all the relative fixities then we'd need to add fixity declarations to lots of code. I foresee this being quite tiresome.

If you'd like to make a concrete proposal, then feel free to do so and I'll make sure it gets onto the wiki.

Cheers,
Simon

On Tue, 7 Nov 2006, Simon Marlow wrote:
This is a much more heavyweight change, and it's not a clear win.
Haskell 2 ? :-)
If you'd like to make a concrete proposal, then feel free to do so and I'll make sure it gets onto the wiki.
What about the one of Jón Fairbairn ? http://www.haskell.org/pipermail/haskell-cafe/2006-October/018925.html

On 2006-11-07 at 18:30+0100 Henning Thielemann wrote:
On Tue, 7 Nov 2006, Simon Marlow wrote:
This is a much more heavyweight change, and it's not a clear win.
Haskell 2 ? :-)
If you'd like to make a concrete proposal, then feel free to do so and I'll make sure it gets onto the wiki.
What about the one of Jón Fairbairn ? http://www.haskell.org/pipermail/haskell-cafe/2006-October/018925.html
The proposal to which that message refers was made before Haskell really existed: we didn't know the concrete syntax of the language, so I used arbitrary keywords. Even if we look at Phil Wadler's subsequent tidying up of it, I'm not at all sure how it fits with the language as it stands now. So while it might make a reasonable starting point, I don't think it yet counts as a concrete proposal!

I must say, though, that I don't like the reasoning that we can put in fractional fixities because it's a small change. The way to hell is through a series of small steps. If using integers to express fixities is a bit of a hack, switching to rational numbers is a hack on top of a hack.

Jón
--
Jón Fairbairn   Jon.Fairbairn at cl.cam.ac.uk

On 07/11/06, Jon Fairbairn
I must say though, that I don't like the reasoning that we can put in fractional fixities because it's a small change. The way to hell is through a series of small steps. If using integers to express fixities is a bit of a hack, switching to rational numbers is a hack on top of a hack.
Well, it's a _conceptually_ simple idea, one that doesn't make understanding the language much harder.

Also, it provides an infinite space for fixities. I think the problem 'binds tighter than X but not as tight as Y', where X and Y are only one fixity integer apart, is somewhat common, and this would fix it. It would allow for extensibility into the future, where the operator space will only become more dense, and maintaining a complete order with only 10 integers to play with will become more and more difficult. Allowing an infinite number of operators to come between any two operators sounds like a solid design decision to me.

--
-David House, dmhouse@gmail.com

Let's remember that if something is broke, it's only _right_ to _fix_
it. I patiently waited for someone else to make that pun.
Understanding the language won't be much harder, but understanding
fixity declarations will become a task. Consider:
infixl -1.7521 -- what and why?
As the operator space becomes more dense, negative and fractional
fixities are going to become more obfuscated. The negative and
fractional fixities will satisfy a number purposes well, but they will
also be abused and lead to confusion.
This smells like a wart growing on a wart to me.
Nick

Nicolas Frisby wrote:
Let's remember that if something is broke, it's only _right_ to _fix_ it. I patiently waited for someone else to make that pun.
Understanding the language won't be much harder, but understanding fixity declarations will become a task. Consider:
infixl -1.7521 -- what and why?
As the operator space becomes more dense, negative and fractional fixities are going to become more obfuscated. The negative and fractional fixities will satisfy a number purposes well, but they will also be abused and lead to confusion.
This smells like a wart growing on a wart to me.
All these are valid points. However, given that we can't completely redesign, implement and test a new fixity system in time for Haskell', it makes sense to make a simple change that unambiguously improves the current system, and is no more difficult to implement (in fact, I bet it adds zero lines of code to the compiler). Cheers, Simon

Simon Marlow
Nicolas Frisby wrote:
Let's remember that if something is broke, it's only _right_ to _fix_ it. I patiently waited for someone else to make that pun.
Understanding the language won't be much harder, but understanding fixity declarations will become a task. Consider:
infixl -1.7521 -- what and why?
As the operator space becomes more dense, negative and fractional fixities are going to become more obfuscated. The negative and fractional fixities will satisfy a number purposes well, but they will also be abused and lead to confusion.
This smells like a wart growing on a wart to me.
All these are valid points. However, given that we can't completely redesign, implement and test a new fixity system in time for Haskell',
...the correct thing to do is to leave it alone, rather than make a change that addresses only one of the problems.
it makes sense to make a simple change that unambiguously improves the current system,
I dispute that. It does make it possible to introduce a new operator between two others, but on its own, that strikes me as as likely to be a new problem as an improvement, because of the difficulty of remembering numeric precedences. It's bad enough with the present number, let alone a countable infinity of them.

The biggest flaw in the present system (and something I wanted to address in my original proposal way back when) is that there is no way to state that there is /no/ precedence relationship between two operators. It would be far better to have the compiler give an error message saying that an expression needs some parentheses than have it choose the wrong parse.

The next smaller flaw is that numeric precedences are a poor match for the way we think. I can easily remember that (*) binds more tightly than (+), or that (+) beats (:) (though the latter is slightly less obviously correct), but I don't remember the numbers, so when I want to define something new that has a similar precedence to (*) (some new kind of multiplication), I have to look it up, which is tedious.

Wanting to insert an operator between two others comes lower in importance even than that, because in many cases giving it the same precedence as something and appropriate associativity gets you most of the way there. It bites because you can't say you want an error if you use it next to something else without parentheses.

Let me throw out a couple of possibilities differing only in syntax (one of my fears is that if we get fractional fixities the other problems will be forgotten, so a real improvement will never be discussed). I don't expect either of them to go into Haskell', but putting them forward might encourage further discussion and discourage introduction of something "temporary" that will stay with us forever.

Syntax 1, based on Phil Wadler's improvement of my old proposal. The precedence relation is a preorder.

    infix {ops_1; ops_2; ...; ops_n}

(where each ops is a collection of operators optionally annotated with L or R) would mean that each operator in ops_i binds more tightly than all the operators in ops_j for j>i (and we require ops_i `intersect` ops_j = empty_set for i/=j). Layout rule applies for {;...}. An op can be a varsym or a backquoted varid. It says nothing about the relationship between those operators and operators not mentioned, except by virtue of transitivity. So

    infix R ^
          L * /
          L + -

would replicate the current relationships between those arithmetic operators. An additional declaration

    infix +
          R :

says that (+) binds more tightly than (:) and by transitivity, so do (^ * and /). The associativity label obviously has to be the same for all occurrences of an operator in scope, so omitting it just says that it's specified elsewhere.

    infix *
          R @+ @-
          +

says that (@+) and (@-) fall between (*) and (-), and that (a @+ b @- c) parses as (a @+ (b@-c)), but

    infix *
          R @@

says that (a * b @@ c @@ d) parses as ((a*b) @@ (c@@d)) but leaves (a + b @@ c) undefined (a compile time error) unless another declaration specifies it elsewhere. And

    infix R @@ @@@

says nothing about the relationship between @@ or @@@ and other operators, but indicates that they associate to the right individually and together.

The alternative syntax is exemplified thus:

    infix L + - (L * / (R ^))

The precedence increases the more deeply you go into the parentheses. Arguably this is more suggestive and avoids the possibility of reading precedences as increasing down the page (danger of endianism argument cropping up there!), but may be harder to read.

With both syntaxes there's no reason to reserve L and R, since the context disambiguates. For exports (imports) you pass the graph of the relation with the unexported (unimported) operators omitted.
and is no more difficult to implement (in fact, I bet it adds zero lines of code to the compiler).
If ease of implementation had been a criterion, we'd never have had Haskell at all! I don't think the above suggestion would be hard to implement -- for someone who knows the internals of a compiler, anyway. Jón

Jón Fairbairn wrote:
Syntax 1, based on Phil Wadler's improvement of my old proposal. The precedence relation is a preorder.[...]
infix {ops_1; ops_2; ...; ops_n}
The alternative syntax is exemplified thus:
infix L + - (L * / (R ^))
[...]
I think both ways (I like the second one more) are a lot better than precedence numbers, whether they be fractional or integral. Let us add compiler/interpreter support for querying known precedence/fixity relations and it's (almost) perfect. Cheers Ben

David House wrote:
Also, it provides an infinite space for fixities. I think the problem 'binds tighter than X but not as tight as Y', where X and Y are only one fixity integer apart, is somewhat common, and this would fix it. It would allow for extensibility into the future, where the operator space will only become more dense, and maintaining a complete order with only 10 integers to play with will become more and more difficult. Allowing an infinite number of operators to come between any two operators sounds like a solid design decision to me.
Yes, but simply allowing one to specify some ordering relationship to existing operators is an even more solid one. Fractional fixities are overspecification, and this can hurt in scenarios like this one:

Developer A creates an operator with this fixity declaration:

infixl 6.25 +*

Developer B has this:

infixl 6.75 *+

(They don't use 6.5 because each has another operator at 6.5 already.)

Now when some developer mixes +* and *+ in the same expression, the compiler will automatically assign a relative priority to the two operators, even though it's not at all clear whether the two operators have any relative precedence - it would be far preferable if the compiler simply declared that there is no relative priority and emitted an error, forcing the programmer to clearly state what priorities he had in mind when writing down the expression.

I know the above example is a bit far-fetched. And it's not a really important issue anyway.

Regards,
Jo

David House wrote:
Also, it provides an infinite space for fixities. I think the problem 'binds tighter than X but not as tight as Y', where X and Y are only one fixity integer apart, is somewhat common, and this would fix it. It would allow for extensibility into the future, where the operator space will only become more dense, and maintaining a complete order with only 10 integers to play with will become more and more difficult. Allowing an infinite number of operators to come between any two operators sounds like a solid design decision to me.
For me, the most important objection to specifying precedence by an embedding into some total order -- be it the integers or the rationals -- is that it forces you to make a global decision, and that threatens modularity. Consider the following scenario:

- Two unrelated pieces of code each define an operator (`A` resp. `B`). The writers choose their precedences with due regard to relations to basic operators and maybe other relevant ones, but *not considering their mutual relation*. The last writer may not know about the code of the first one, or did not bother to consider the possibility of mixing it with his code (it is unreasonable to expect a writer to consider the relation to every other existing operator).

- At some point, `A` and `B` meet; maybe they are part of two unrelated libraries and a programmer uses them both, or maybe `A` was defined by a programmer who just found out that there exists this wonderful library exporting `B`.

- Now, if she is lucky, the relative precedence of `A` and `B` just happens to be nice and logical. If she is unlucky and the relative precedence is really annoying, the precedence of at least one of `A` and `B` will have to change to fix this. However, if you change the precedence of `B`, say (which currently is "written in stone forever", as Nicolas Frisby put it), and `B` is an operator of any significance, then you have to send out an announcement to everyone who uses `B`, telling them that its precedence has changed, so they should check if that breaks any of their code. If it does, they may have to change the precedence of other operators, propagating a horrible cascade of realignment.

That's a whole yotta words for a small bikeshed, but I think this scenario is actually not far-fetched at all. Specifying precedence 'lazily', by a partial order, does not suffer from this problem, because it only requires you to make local decisions.

Kind regards,

Arie
--
Mr. Pelican Shit may be Willy. ^ /e\ ---

On Wed, 08 Nov 2006, "Arie Peterson"
Specifying precedence 'lazily', by a partial order, does not suffer from this problem, because it only requires you to make local decisions.
Assuming we only want to be able to make local decisions. Let's say that we want == to bind less tightly than +, as it is now. Let's also say that Eq and Num are defined in two different _unrelated_ modules (this of course implies that Eq is not a superclass of Num). Where and how would we specify the relation between these two operators? -- /NAD

On Wed, 8 Nov 2006, Nils Anders Danielsson wrote:
On Wed, 08 Nov 2006, "Arie Peterson"
wrote: Specifying precedence 'lazily', by a partial order, does not suffer from this problem, because it only requires you to make local decisions.
Assuming we only want to be able to make local decisions.
Let's say that we want == to bind less tightly than +, as it is now. Let's also say that Eq and Num are defined in two different _unrelated_ modules (this of course implies that Eq is not a superclass of Num). Where and how would we specify the relation between these two operators?
Depends on what we consider to be more special.

* Does addition require comparison? Not in Haskell. However, the recursive implementation of addition for Peano numbers needs an equality check.
* Do all additive types provide comparison? No.
* Does comparison require addition? No.

So, these concepts seem to be unrelated at first. If the Prelude were split into modules, where (==) and (+) are separated, and no module imports the other one, then we need a third module which states the relation between (==) and (+). If I have missed something, and say (+)'s module imports (==)'s one, then (+)'s module should define the relation.

On Wed, 08 Nov 2006, Henning Thielemann
If the Prelude were split into modules, where (==) and (+) are separated, and no module imports the other one, then we need a third module which states the relation between (==) and (+).
Yes, presumably. However, currently a fixity declaration for an operator can only be given in the module where the operator is defined. In this hypothetical new system, how would one import/export fixity declarations? Should they be named, or should they be treated like instance declarations are treated today? I've thought about this before, since I like the idea of not totally ordering operator precedences, but I haven't found an elegant and light-weight solution to this problem. -- /NAD

On Wed, 8 Nov 2006, Nils Anders Danielsson wrote:
On Wed, 08 Nov 2006, Henning Thielemann wrote:
If the Prelude were split into modules where (==) and (+) are separated, and no module imports the other one, then we would need a third module which states the relation between (==) and (+).
Yes, presumably. However, currently a fixity declaration for an operator can only be given in the module where the operator is defined. In this hypothetical new system, how would one import/export fixity declarations? Should they be named, or should they be treated like instance declarations are treated today?
Good question. Today fixity can only be declared per symbol, so it is natural to allow the declaration only in the scope where the symbol is defined. If fixity definitions consist of relations, then there are multiple symbols involved per relation. Is it sensible to restrict the definition to a module where one of the involved symbols is defined? Will people define non-exported dummy symbols in order to define fixities somewhere else?

Maybe making fixity declarations like type class instance declarations is good. But is it good that instance declarations are anonymous at all? There can be instance clashes, say if several people feel that the Prelude misses Num instances for Strings, Boolean instances for Int, and so on. I can't hide such instances if I find them more dangerous than helpful. Similar problems apply to fixities. Maybe explicit fixity import or redeclaration, as proposed recently in http://www.haskell.org/pipermail/haskell-cafe/2006-November/019364.html , is a good idea.

Henning Thielemann wrote:
Maybe making fixity declarations like type class instance declarations is good.
I thought so too at first but, having thought about it for a while, I now see this will cause /major/ problems. The precedence relations as declared explicitly by the programmer must form a DAG, with the vertices being the operator classes with equal precedence. There are two ways you can break the DAG: by introducing a 'smaller' or 'larger' relation when another module has already declared them to have equal precedence (resp. the other way around); or by introducing a cycle. Both can be caused simply by importing yet another module. I think it would be unacceptable not to provide some way for the programmer to resolve such conflicts.

One way to do so would be to give an explicit order of 'precedence declaration priority' (PDP) for conflicting modules. Any rule introduced by a module with a lower PDP would be dropped (in the order of appearance) if it causes the DAG to break. Simple to use, but the resulting precedence order would be quite hard to predict!

Another way would be to hide precedence relation(s) in the import declaration. However, this is practical only if precedence declarations have a name we can refer to.

The cleanest variant would probably be to tear holes in the graph by removing all relations to a particular operator, making it an isolated vertex, and then re-introducing new precedence relations for this operator. You might have to do this with several operators, and it would be quite difficult to find the minimal set of operators that together will do the trick (i.e. restore the DAG). Even if you manage to find it, it could well be that by removing an operator from the graph you will accidentally remove many other operators, too, for instance if all these other operators have been declared to have the same precedence as the one you removed. I am not a graph specialist; maybe there exist semi-automatic solutions for such problems in the literature, but I know of none, and it would in any case be quite a 'heavy' extension.

All in all, I must agree with Nils and say that I can't see a light-weight and elegant way to make this work. Too bad, really.

Ben
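To see what 'breaking the DAG' would mean mechanically, here is a rough sketch (invented types; no such checker actually exists) that collapses declared-equal operators into classes and then rejects a combined set of declarations whenever the class-level "binds more tightly than" edges contain a cycle. The example data anticipates the (<+>)/(<*>)/(<>) scenario that comes up later in this thread.

  import Data.List (nub)

  type Op = String

  data Decl = Equal Op Op      -- declared to have the same precedence
            | Tighter Op Op    -- first operator binds more tightly than the second
    deriving Show

  -- Representative of an operator's equal-precedence class (naively computed).
  rep :: [Decl] -> Op -> Op
  rep ds x = minimum (grow [x])
    where
      pairs = [ (a, b) | Equal a b <- ds ]
      grow xs =
        let xs' = nub (xs ++ [ b | (a, b) <- pairs, a `elem` xs ]
                          ++ [ a | (a, b) <- pairs, b `elem` xs ])
        in if length xs' == length xs then xs else grow xs'

  -- Consistent iff no precedence class can reach itself through
  -- "tighter than" edges (this also catches "equal but also tighter").
  consistent :: [Decl] -> Bool
  consistent ds = not (any (\n -> reaches n n) nodes)
    where
      edges = nub [ (rep ds a, rep ds b) | Tighter a b <- ds ]
      nodes = nub (map fst edges ++ map snd edges)
      succs x = [ b | (a, b) <- edges, a == x ]
      reaches from target = go [] (succs from)
        where
          go _ [] = False
          go seen (y:ys)
            | y == target   = True
            | y `elem` seen = go seen ys
            | otherwise     = go (y:seen) (succs y ++ ys)

  example :: [Decl]
  example =
    [ Tighter "*" "+"     -- Prelude: (*) binds more tightly than (+)
    , Equal "<+>" "+"     -- hypothetical module M1
    , Equal "<*>" "*"     -- hypothetical module M1
    , Equal "<>" "<*>"    -- hypothetical module M2
    , Equal "<>" "<+>"    -- hypothetical module M3
    ]

  main :: IO ()
  main = print (consistent example)   -- False: (+) and (*) collapse into one class

A real design would also have to report *which* imports contributed the offending relations, which is exactly the diagnostic problem discussed below.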

On Thu, 2006-11-09 at 22:20 +0100, Benjamin Franksen wrote:
Henning Thielemann wrote:
Maybe making fixity declarations like type class instance declarations is good.
I thought so too at first but, having thought about it for a while, I now see this will cause /major/ problems. The precedence relations as declared explicitly by the programmer must form a DAG, with the vertices being the operator classes with equal precedence. There are two ways you can break the DAG: by introducing a 'smaller' or 'larger' relation when another module has already declared them to have equal precedence (resp. the other way around); or by introducing a cycle. Both can be caused simply by importing yet another module. I think it would be unacceptable not to provide some way for the programmer to resolve such conflicts.
[ ... possibilities for resolving conflicts omitted ... ]

Another possibility is: If you have operators op1 and op2, where the compiler sees conflicting requirements for the precedence of op1 and op2, then they are treated as non-associative relative to each other: the expression

  a op1 b op2 c

is illegal, and the programmer must instead write

  (a op1 b) op2 c

or

  a op1 (b op2 c)

Carl Witty

Carl Witty wrote:
On Thu, 2006-11-09 at 22:20 +0100, Benjamin Franksen wrote:
Henning Thielemann wrote:
Maybe making fixity declarations like type class instance declarations is good.
I thought so too at first but, having thought about it for a while, I now see this will cause /major/ problems. The precedence relations as declared explicitly by the programmer must form a DAG, with the vertices being the operator classes with equal precedence. There are two ways you can break the DAG: by introducing a 'smaller' or 'larger' relation when another module has already declared them to have equal precedence (resp. the other way around); or by introducing a cycle. Both can be caused simply by importing yet another module. I think it would be unacceptable not to provide some way for the programmer to resolve such conflicts.
[ ... possibilities for resolving conflicts omitted ... ]
Another possibility is:
If you have operators op1 and op2, where the compiler sees conflicting requirements for the precedence of op1 and op2, then they are treated as non-associative relative to each other: the expression a op1 b op2 c is illegal, and the programmer must instead write (a op1 b) op2 c or a op1 (b op2 c)
It's a possibility. However, I fear that such conflicting precedences might not come in nice little isolated pairs. For instance, each operator that is in the same precedence class as op1 (i.e. has been declared as having equal precedence) will now be 'incompatible' with any that is in the same class as op2, right? It gets worse if the conflict creates a cycle in a chain of large operator classes. Thus one single bad declaration can tear a gaping hole into an otherwise perfectly nice and consistent DAG of precedence order relations, possibly invalidating a whole lot of code. Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/. Ben

On Fri, 10 Nov 2006, Benjamin Franksen wrote:
Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/.
It's the same as with instance declarations, isn't it? I don't want to defend the problems arising with today's anonymous instance declarations, but I think a simple error report is better than trying to solve the problem automatically and thus hide it from the programmer.

Henning Thielemann wrote:
On Fri, 10 Nov 2006, Benjamin Franksen wrote:
Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/.
It's the same as with instance declarations, isn't it? I don't want to defend the problems arising with today's anonymous instance declarations,
Right. However, with instances I can imagine solutions that avoid naming them, that is, I could imagine to write something like

  select instance C T1 T2 ... Tn from module M

or

  import M hiding (instance C T1 T2 ... Tn, ....)

Such a feature could prove extremely useful in practice.

Disclaimer: I am almost sure that there is something I have overlooked that makes such a simple solution impossible, otherwise it would have been proposed and implemented long ago...
but I think a simple error report is better than trying to solve the problem automatically and thus hide it from the programmer.
I agree 100% that an error is better than silently changing ("fixing") the semantics. However, the fact that there is currently no way to manually resolve instance clashes coming from imported (library) modules is really problematic, IMHO. I think the only reason this hasn't yet produced major upheaval is that the Haskell community is still quite small, so library writers can still keep most of the field in view, so to speak. If Haskell libraries were written and used in multitudes such as seen e.g. on CPAN, then the probability of conflicting instances would be a lot greater, in turn causing many libraries to be incompatible with each other. IMHO, this must be fixed before Haskell reaches even a fraction of that level of popularity.

A non-total precedence order will give us more potential incompatibilities that the programmer has no way of resolving satisfactorily, so I'd rather stick with the current system, however limited. (And yes, I /have/ changed my mind on this. I'd /love/ to be convinced that this is not really going to be a problem, but I am not, and I hate it.)

Cheers
Ben

Benjamin Franksen wrote:
Henning Thielemann wrote:
On Fri, 10 Nov 2006, Benjamin Franksen wrote:
Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/.
It's the same as with instance declarations, isn't it? I don't want to defend the problems arising with today's anonymous instance declarations,
Right. However, with instances I can imagine solutions that avoid naming them, that is, I could imagine to write something like
select instance C T1 T2 ... Tn from module M
or
import M hiding (instance C T1 T2 ... Tn, ....)
Such a feature could prove extremely useful in practice.
Disclaimer: I am almost sure that there is something I have overlooked that makes such a simple solution impossible, otherwise it would have been proposed and implemented long ago...
I think the reason you can't do this kind of thing is that within any set of modules that is compiled at the same time, anywhere in any of these modules, if you have a type T that's an instance of class C, then all occurrences of C T must refer to the exact same instance declaration, or else things would get totally messed up when/if the compiler did whole program optimization.

In other words, the instances are actually properties of the type(s) themselves, so allowing different modules to "see" different implementations of the instances would break the conceptual merging of modules (modulo renaming) that occurs at compile time.

Regards, Brian.

--
Logic empowers us and Love gives us purpose. Yet still phantoms restless for eras long past, congealed in the present in unthought forms, strive mightily unseen to destroy us. http://www.metamilk.com

On Nov 9, 2006, at 7:16 PM, Benjamin Franksen wrote:
Carl Witty wrote:
On Thu, 2006-11-09 at 22:20 +0100, Benjamin Franksen wrote:
Henning Thielemann wrote:
Maybe making fixity declarations like type class instance declarations is good.
I thought so too at first but, having thought about it for a while, I now see this will cause /major/ problems. The precedence relations as declared explicitly by the programmer must form a DAG, with the vertices being the operator classes with equal precedence. There are two ways you can break the DAG: by introducing a 'smaller' or 'larger' relation when another module has already declared them to have equal precedence (resp. the other way around); or by introducing a cycle. Both can be caused simply by importing yet another module. I think it would be unacceptable not to provide some way for the programmer to resolve such conflicts.
[ ... possibilities for resolving conflicts omitted ... ]
Another possibility is:
If you have operators op1 and op2, where the compiler sees conflicting requirements for the precedence of op1 and op2, then they are treated as non-associative relative to each other: the expression a op1 b op2 c is illegal, and the programmer must instead write (a op1 b) op2 c or a op1 (b op2 c)
It's a possibility. However, I fear that such conflicting precedences might not come in nice little isolated pairs. For instance, each operator that is in the same precedence class as op1 (i.e. has been declared as having equal precedence) will now be 'incompatible' with any that is in the same class as op2, right?
Well, look at it from the perspective of the reader. Does the reader of your code know beyond a shadow of a doubt what the intended precedence will be in these cases? If not, there should be parentheses there---quite apart from what the parser may or may not permit you to do. If the parser can't figure it out, you can bet your readers will have trouble as well.
It gets worse if the conflict creates a cycle in a chain of large operator classes. Thus one single bad declaration can tear a gaping hole into an otherwise perfectly nice and consistent DAG of precedence order relations, possibly invalidating a whole lot of code.
Requiring parenthesization solves these problems in a stroke.

-Jan-Willem Maessen, who can't reliably parenthesize the C expression a==b << 3&&4 | 17 (yes, the horrific whitespace is deliberate!)
Although one could view this as a bug in the offending module it makes me somewhat uneasy that one additional import can have such a drastic effect on the code in a module /even if you don't use anything from that module/.
Ben

Jan-Willem Maessen wrote:
On Nov 9, 2006, at 7:16 PM, Benjamin Franksen wrote:
Carl Witty wrote:
If you have operators op1 and op2, where the compiler sees conflicting requirements for the precedence of op1 and op2, then they are treated as non-associative relative to each other: the expression a op1 b op2 c is illegal, and the programmer must instead write (a op1 b) op2 c or a op1 (b op2 c)
It's a possibility. However, I fear that such conflicting precedences might not come in nice little isolated pairs. For instance, each operator that is in the same precedence class as op1 (i.e. has been declared as having equal precedence) will now be 'incompatible' with any that is in the same class as op2, right?
Well, look at it from the perspective of the reader. Does the reader of your code know beyond a shadow of a doubt what the intended precedence will be in these cases? If not, there should be parentheses there---quite apart from what the parser may or may not permit you to do. If the parser can't figure it out, you can bet your readers will have trouble as well.
Imagine op1 = (+), op2 = (*). Would you think it acceptable if any wild module could come along and destroy the relative precedence order everyone expects to hold between those two? For this to happen it would be enough if M1 says

  prec (<+>) = prec (+)
  prec (<*>) = prec (*)

while M2 says

  prec (<>) = prec (<*>)

and M3

  prec (<>) = prec (<+>)

Taken together, these declarations force (+) and (*) into the same precedence class, contradicting the Prelude's rule that (*) binds more tightly than (+). Each of the modules M1, M2, and M3, when viewed independently, and even when viewed in pairwise combination, does nothing bad. It is only the combination of all three that causes the expression 3 + 4 * 3 to become a syntax error!

Ben

On Tue, 7 Nov 2006, David House wrote:
On 07/11/06, Jon Fairbairn wrote:
I must say though, that I don't like the reasoning that we can put in fractional fixities because it's a small change. The way to hell is through a series of small steps.
Well, it's a _conceptually_ simple idea, one that doesn't make understanding the language much harder.
Also, it provides an infinite space for fixities. I think the problem 'binds tighter than X but not as tight as Y', where X and Y are only one fixity integer apart, is somewhat common, and this would fix it.
In school we learnt "dot operations (multiplication, division) bind more tightly than dash operations (addition, subtraction)". I imagine we would have learnt "dot operations have precedence 7, dash operations have precedence 6". :-)

Simon Marlow wrote:
http://hackage.haskell.org/cgi-bin/haskell-prime/trac.cgi/wiki/FixityResolut...
What's the fate of unary minus under that proposal? In the Haskell report its syntax is part of the lexp^6 production. This production makes it possible to write (-1+) instead of (subtract 1), although that's not really advisable because Hugs fails to parse (-1+). Interestingly, Hugs allows (+ -1) instead, but I believe that's wrong. Bertram

I'm surprised that no one has mentioned showsPrec and readsPrec. Anything more complicated than negative fixities would require their interfaces to be changed. -- Ben
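For readers who have not looked at these lately: showsPrec (and, dually, readsPrec) receives the precedence of the enclosing context as a plain Int, so fractional or partially ordered precedences would indeed change its type. A small standard-Haskell reminder of how that Int is used (the Expr type is made up for the example):

  infixl 6 :+:
  infixl 7 :*:

  data Expr = Lit Int
            | Expr :+: Expr
            | Expr :*: Expr

  -- The Int is the precedence of the surrounding context; showParen
  -- adds parentheses only when this operator binds more loosely.
  instance Show Expr where
    showsPrec _ (Lit n)   = shows n
    showsPrec d (a :+: b) = showParen (d > 6) $
      showsPrec 7 a . showString " + " . showsPrec 7 b
    showsPrec d (a :*: b) = showParen (d > 7) $
      showsPrec 8 a . showString " * " . showsPrec 8 b

  main :: IO ()
  main = print (Lit 1 :+: Lit 2 :*: Lit 3)   -- prints 1 + 2 * 3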

On Fri, 10 Nov 2006, Ben Rudiak-Gould wrote:
I'm surprised that no one has mentioned showsPrec and readsPrec. Anything more complicated than negative fixities would require their interfaces to be changed.
Very true. Does it mean that the Functional Graph Library has to become part of the Prelude?

On Fri, 2006-11-03 at 18:07 -0800, Dan Weston wrote:
Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity.
There have been 3 separate times where I've wanted an operator just above 0 ($) but less than 1 (>>= or >>>), or else just below 0 (like a superlow $$)
infix 0.5 ???
infix -1 $$
The only change would be internal to the compiler, wouldn't it? Since fixity is just syntactic sugar, there should be no semantic difficulties.
I like it! One caveat: the grammar from the Haskell Report specifies a fixed number of precedence levels; if anybody has a Haskell parser implemented by feeding this grammar into a parser generator, they would have to rewrite the parser to implement this. (But it's quite possible that nobody has such a parser; the alternate approach of using a first pass that ignores precedences and fixing them up with a second-pass hand-written operator precedence parser seems to be easier and more popular, and should be easy to adapt to new precedence levels.) Carl Witty
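Carl's two-pass description is easy to make concrete. Below is a toy sketch of such a second pass (invented names and fixity table; this is not GHC's resolver), with the precedence component changed from Int to Rational to illustrate that Dan's proposal would only touch this table and the comparisons, not the grammar:

  import Data.Ratio ((%))

  data Assoc = L | R deriving (Eq, Show)

  data Expr = Var String
            | App String Expr Expr        -- infix application: App "+" x y
    deriving Show

  -- Hypothetical fixity table; only the Rational distinguishes it
  -- from today's Int-based precedence levels.
  fixity :: String -> (Rational, Assoc)
  fixity "$"   = (0, R)
  fixity "???" = (1 % 2, R)   -- Dan's "infix 0.5 ???", treated as infixr here
  fixity ">>=" = (1, L)
  fixity "+"   = (6, L)
  fixity "*"   = (7, L)
  fixity _     = (9, L)

  -- Second pass over a flat first-pass result:  e0 op1 e1 op2 e2 ...
  resolve :: Expr -> [(String, Expr)] -> Expr
  resolve e0 ops = fst (go e0 ops Nothing)
    where
      -- The bound says which operators may still be consumed at this level:
      -- Just (p, eqOk) admits operators binding more tightly than p,
      -- or equally tightly if the operator that set the bound was infixr.
      go lhs [] _ = (lhs, [])
      go lhs rest@((op, r) : more) bound
        | not (admits bound prec) = (lhs, rest)
        | otherwise =
            let (rhs, rest') = go r more (Just (prec, assoc == R))
            in  go (App op lhs rhs) rest' bound
        where (prec, assoc) = fixity op

      admits Nothing _          = True
      admits (Just (p, eqOk)) q = q > p || (eqOk && q == p)

  main :: IO ()
  main = print (resolve (Var "a") [(">>=", Var "b"), ("???", Var "c"), ("$", Var "d")])
  -- App "$" (App "???" (App ">>=" (Var "a") (Var "b")) (Var "c")) (Var "d"),
  -- i.e. ((a >>= b) ??? c) $ d : "???" sits strictly between ($) and (>>=).

Non-associative operators and error reporting are ignored here; a real resolver would also have to reject things like a == b == c.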

I started this e-mail thread on HaskellCafe instead of HaskellPrime because it was minimal, backwards-compatible, valid Haskell 98 (or very nearly so) and could go (now) into GHC if someone saw fit to put it in.
If you think C++ is not overly complicated, just what is a protected abstract virtual base pure virtual private destructor, and when was the last time you needed one? -- Tom Cargil, C++ Journal
The above was not a proposal for Haskell'.
Proposers of new [C++] features should be required to donate a kidney. That would - Jim [Waldo] pointed out - make people think hard before proposing, and even people without any sense would propose at most two extensions. -- Bjarne Stroustrup, creator of C++
But at the risk of losing a kidney... Here's where I think Haskell 98 got it wrong: "A fixity declaration may appear anywhere that a type signature appears and, like a type signature, declares a property of a particular operator."

Like a type signature? A type signature has semantic value; fixity does not. In principle, neither does the name that a lambda abstraction is bound to in top-level definitions, but we are stuck with linkers that know about names. They don't know about fixity.

Operator precedence is a purely syntactic device for people (like me) who hate parentheses. Its "scope" is essentially local to the attention span of a human being (i.e. a single module). A preprocessor can parenthesize code to eliminate the need for fixity. I hate chasing other modules looking up operator fixity and would rather just specify it myself. What if we could remap fixity on import:

  -- (MyModule.???) fixity is 4, exported as its default...
  import MyModule((???))
  -- But in this module I prefer 5...
  -- This fixity is *not* exported!
  infix 5 ???

Modules that import this module get the default fixity from the source.

Summary: fixity at definition is exported; fixity on import is locally remapped but not re-exported.
Better is the enemy of the good -- Voltaire
Simple, powerful, minimal. Any takers?

Dan

Dan Weston wrote:
Here's an idea that (I think) is useful and backwards compatible: fractional and negative fixity.
There have been 3 separate times where I've wanted an operator just above 0 ($) but less than 1 (>>= or >>>), or else just below 0 (like a superlow $$)
infix 0.5 ???
infix -1 $$
The only change would be internal to the compiler, wouldn't it? Since fixity is just syntactic sugar, there should be no semantic difficulties.
Dan
participants (21)

- Arie Peterson
- Ben Rudiak-Gould
- Benjamin Franksen
- Bertram Felgenhauer
- Brian Hulley
- Bulat Ziganshin
- Carl Witty
- Dan Weston
- David House
- Henning Thielemann
- Henning Thielemann
- J. Garrett Morris
- Jan-Willem Maessen
- Jim Apple
- Joachim Durchholz
- Jon Fairbairn
- Jón Fairbairn
- Lennart Augustsson
- Nicolas Frisby
- Nils Anders Danielsson
- Simon Marlow