
Hi café,

Why are && and || in the Prelude right-associative? This contradicts my expectation and the way these work in other languages. That said, I can't think of any harm in it. This came up from a question asked by a student, and I have no idea why the design is this way.

Thanks,
Richard

Could it be so that you can shortcut in the expected order (left to right)?
Left associative:
a && b && c = (a && b) && c which means going into a && b, which means
going into a, and if it is False, then going up in the expression tree.
If it is right associative:
a && b && c = a && (b && c), which means going into a, and if it is False,
you are done.
If you have a conjunction of n booleans, the complexity of evaluating this
expression is linear with respect to the position of the first False (in
the case of &&). In the left-associative case, it is linear in the number
of &&s.
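For concreteness, the two shapes can be written as folds (a sketch; the names leftAssoc and rightAssoc are invented here, and the infinite list makes the difference stark):

-- Ivan's two association orders, expressed as folds over the operands.
leftAssoc, rightAssoc :: [Bool] -> Bool
leftAssoc  = foldl1 (&&)  -- ((a && b) && c) && ...
rightAssoc = foldr1 (&&)  -- a && (b && (c && ...))

main :: IO ()
main = do
  -- Short-circuits at the first operand, even on an infinite list:
  print (rightAssoc (False : repeat True))
  -- leftAssoc (False : repeat True) would never finish, because foldl
  -- must traverse the whole list before anything can be inspected.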
Just a guess. But you got me interested now.
Does anyone have the real explanation?
Cheers,
Ivan
On Thu, 11 Apr 2019 at 22:13, Richard Eisenberg
Hi café,
Why are && and || in the Prelude right-associative? This contradicts my expectation and the way these work in other languages. That said, I can't think of any harm in it. This came up from a question asked by a student, and I have no idea why the design is this way.
Thanks, Richard

On 12.04.19 at 04:26, Ivan Perez wrote:
Could it be so that you can shortcut in the expected order (left to right)?
Left associative: a && b && c = (a && b) && c which means going into a && b, which means going into a, and if it is False, then going up in the expression tree.
For compile-time evaluation of known-to-be-constant values, this is what would indeed happen, but it wouldn't matter because such evaluation is done O(1) times. Generated code will simply evaluate the conditions one after the other and abort as soon as it sees False.
If you have a conjunction of n booleans, the complexity of evaluating this expression is linear with respect to the position of the first False (in the case of &&). In the left-associative case, it is linear in the number of &&s.
This isn't the case.

On 4/12/19 6:42 AM, Joachim Durchholz wrote:
On 12.04.19 at 04:26, Ivan Perez wrote:
Could it be so that you can shortcut in the expected order (left to right)?
Left associative: a && b && c = (a && b) && c which means going into a && b, which means going into a, and if it is False, then going up in the expression tree.
For compile-time evaluation of known-to-be-constant values, this is what would indeed happen, but it wouldn't matter because such evaluation is done O(1) times. Generated code will simply evaluate the conditions one after the other and abort as soon as it sees False.
If you have a conjunction of n booleans, the complexity of evaluating this expression is linear with respect to the position of the first False (in the case of &&). In the left-associative case, it is linear in the number of &&s.
This isn't the case.
The program below is evidence that it *is* the case: the way the expression is associated has an effect on run time. Adding more (&&) in the condition of the following function doesn't change the run time, but substituting the infixl variant (&.) does result in a measurable growth linear in the number of (&.). Of course, this is true only without optimizations, but the distinction is there, and many people do not have intuition about what is and isn't optimized by GHC, so this is certainly a worthwhile point of discussion.

Li-yao

import Control.Monad

f :: Bool -> IO ()
f b = if b && True -- 9 more
           && True && True && True && True
           && True && True && True && True
        then error "oops"
        else return ()

(&.) :: Bool -> Bool -> Bool
(&.) = (&&)
infixl 1 &.

main :: IO ()
main = do
  mapM_ f (replicate 1000000 False)

I don't know the historical answer, but I think it's because the true
fixity can't be expressed in Haskell. As far as I can tell, there's no
operator with the same precedence as && or || that can be meaningfully
combined with it. But if these operators were defined just "infix", then
we'd have to write junk like x || (y || z). So instead we picked a
direction out of a bag and never had a reason to look back.
On Thu, Apr 11, 2019, 10:13 PM Richard Eisenberg
Hi café,
Why are && and || in the Prelude right-associative? This contradicts my expectation and the way these work in other languages. That said, I can't think of any harm in it. This came up from a question asked by a student, and I have no idea why the design is this way.
Thanks, Richard

David Feuer wrote:
I don't know the historical answer, but I think it's because the true fixity can't be expressed in Haskell.
No, the historical answer is that with lazy evaluation the shortcutting happens in the expected order. We did think about that.

-- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk

I don't know the historical answer, but I think it's because the true fixity can't be expressed in Haskell.
No, the historical answer is that with lazy evaluation the shortcutting happens in the expected order. We did think about that.
I don't understand how laziness enters the picture:

(False && ⊥) && ⊥ ≡ False
False && (⊥ && ⊥) ≡ False

in both cases we get the same result.

Stefan
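One can watch this in GHC directly (a small sketch; the noisy helper is invented here) by tracing which operands are ever forced:

import Debug.Trace (trace)

-- A Bool that announces when it is forced.
noisy :: String -> Bool -> Bool
noisy name = trace ("forced " ++ name)

main :: IO ()
main = do
  -- Left-associated: prints "forced a" and then False; b and c are untouched.
  print ((noisy "a" False && noisy "b" undefined) && noisy "c" undefined)
  -- Right-associated: exactly the same output.
  print (noisy "a" False && (noisy "b" undefined && noisy "c" undefined))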

Er? Without laziness, you're going to try to evaluate the bottoms
regardless of where they are. Or are you asserting that the
short-circuiting done by many strict languages is their standard evaluation
model?
On Fri, Apr 12, 2019 at 7:32 PM Stefan Monnier
I don't know the historical answer, but I think it's because the true fixity can't be expressed in Haskell.
No, the historical answer is that with lazy evaluation the shortcutting happens in the expected order. We did think about that.
I don't understand how laziness enters the picture:
(False && ⊥) && ⊥ ≡ False
False && (⊥ && ⊥) ≡ False
in both cases we get the same result.
Stefan
-- brandon s allbery kf8nh allbery.b@gmail.com

I think Brandon's point is that short-circuiting is in fact an example of
lazy evaluation, regardless of the language being otherwise strict.
On Fri, Apr 12, 2019, 4:52 PM Stefan Monnier
Er? Without laziness, you're going to try to evaluate the bottoms regardless of where they are.
Exactly: with laziness, either associativity gives the same result, and without laziness either associativity also gives the same result. The two seem orthogonal to me.
Stefan

Exactly. Short-circuiting is emulating laziness in this one case where it
turns out to be generally useful. And while (_|_ && _|_) may be evaluatable
from a logical standpoint, computer languages tend to not do well with it:
regardless of how it evaluates, (&&) is going to try to force at least one
of the bottoms.
On Fri, Apr 12, 2019 at 9:19 PM Theodore Lief Gannon
I think Brandon's point is that short-circuiting is in fact an example of lazy evaluation, regardless of the language being otherwise strict.
On Fri, Apr 12, 2019, 4:52 PM Stefan Monnier wrote:
Er? Without laziness, you're going to try to evaluate the bottoms regardless of where they are.
Exactly: with laziness, either associativity gives the same result, and without laziness either associativity also gives the same result. The two seem orthogonal to me.
Stefan
-- brandon s allbery kf8nh allbery.b@gmail.com

I think Brandon's point is that short-circuiting is in fact an example of lazy evaluation, regardless of the language being otherwise strict.
Sure, call it "short-circuiting", my comment stays the same: in which way is it related to *associativity*? In terms of semantics, all three (++), (||), and (&&) are associative AFAICT, so the choice of which associativity to use in the syntax is not related to the semantics (instead it's likely related to secondary concerns like efficiency).

Stefan

I don't understand how laziness enters the picture:
(False && ⊥) && ⊥ ≡ False
False && (⊥ && ⊥) ≡ False
in both cases we get the same result.
The first expression builds two thunks before trying the leftmost operand, and the second one only builds one thunk. More generally, a left-associative conjunction of n lazy Bools will build n - 1 thunks at once when forced, but a right-associative one will have only one at a time, though it may have to iterate through all n - 1 before finishing.
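Spelling that out as reduction steps for a four-operand chain with a = False (illustrative comments only):

-- Left-associated: n - 1 pending (&&) applications exist when a is forced,
-- and each must be unwound on the way back out.
--   ((a && b) && c) && d
--   = ((False && b) && c) && d   -- force a
--   = (False && c) && d          -- unwind one level
--   = False && d                 -- unwind another
--   = False
--
-- Right-associated: a single step after forcing a.
--   a && (b && (c && d))
--   = False && (b && (c && d))   -- force a
--   = False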

How does the right associativity of the short-circuiting
Boolean operators in any way contradict the way that such operators work in
other languages? These operators are associative, so a && (b && c)
necessarily has the same value and effects as (a && b) && c. It has never
been the case that all operators in all programming languages were left
associative. For addition and subtraction it matters; you don't want a-b+c
interpreted as a-(b+c), but not for || and not for &&. My expectation is
that these operators should be right associative.
On Fri, 12 Apr 2019 at 14:13, Richard Eisenberg
Hi café,
Why are && and || in the Prelude right-associative? This contradicts my expectation and the way these work in other languages. That said, I can't think of any harm in it. This came up from a question asked by a student, and I have no idea why the design is this way.
Thanks, Richard

Correct me if I'm wrong here.
On Fri, 12 Apr 2019 at 05:21, Richard O'Keefe wrote:
How does the right associativity of the short-circuiting Boolean operators in any way contradict the way that such operators work in other languages? These operators are associative, so a && (b && c) necessarily has the same value and effects as (a && b) && c.
In pure Haskell, perhaps, but in other languages, I would say no. In a language like C, I would expect that:
- a && b && c be represented in the AST as (a && b) && c
- The compiler optimizes the implementation of && to short circuit, which is, in some way, using laziness.
This is not to say that they are right-associative; it's just a compiler optimization.
It has never been the case that all operators in all programming languages were left associative. For addition and subtraction it matters; you don't want a-b+c interpreted as a-(b+c), but not for || and not for &&. My expectation is that these operators should be right associative.
I can't find any reference for logic itself and, because /\ is introduced as associative from the start in propositional logic, it does not really matter. However, my training as a kid in math, and the way I was taught to evaluate (+) left to right (implicitly associating to the left), would have led me to intuitively parse / parenthesize conjunctions with multiple (&&) the same way unless instructed otherwise. I think this portion of the Haskell Report is also relevant to this intuition in the case of Haskell programmers: "If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity" (Section 4.4.2).

Cheers

Ivan

I think this portion of the Haskell Report is also relevant to this intuition in the case of haskell programmers: "If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity" (Section 4.4.2).
Sorry, this is from section 3.2. Section 4.4.2 goes on to say: "Any operator lacking a fixity declaration is assumed to be infixl 9"

Ivan
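A quick illustration of that default (the operator (#) is made up here; with no fixity declaration it is taken to be infixl 9):

(#) :: Int -> Int -> Int
x # y = x - y

main :: IO ()
main = do
  print (10 # 3 # 2)    -- parsed as (10 # 3) # 2, giving 5
  print (10 # (3 # 2))  -- explicit right grouping gives 9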

I repeat: the short circuit operations are associative,
so a && (b && c) necessarily has the same value and effects
as (a && b) && c. And this is just as true in C as in
Haskell. It is equally true in SML and Erlang (which
use andalso and orelse), Pascal Extended and Ada, OCaml
and F#, and Unix shells, to mention but a few.
Because the operations are associative, the associativity
of the operators is of no interest. Associativity is of
interest only when (a) there is more than one operator at
the same precedence level or (b) the operation is not
associative.
Your "training as a kid in math" almost certainly did not
include any discussion of logical operators, and I would
be astonished if you had been told that
a ** b ** c
was defined to be
a ** (b ** c)
back in 1950-something, or that there is a famous programming
language where A-B-C-D means A-(B-(C-D)) that is nearly as
old. Your "training as a kid in math" probably did not
include operators which are neither left associative nor
right associative but have a hidden conjunction, e.g.,
a <= b < c
which, in a sane programming language (that is, a
language that is *not* C and didn't copy its blunders),
means a <= b && b < c (unless it is a syntax error).
Ada and SETLX do something interesting. They put
conjunction and disjunction at the same level, refuse
to define an associativity for them, and forbid mixing
them. That is, 'p and then q or else r' is simply not
legal in Ada.
Programming languages have done more and weirder things
with operators than you imagine. Take nothing for granted.
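Haskell itself follows both conventions mentioned above: the Prelude declares exponentiation infixr 8, and the comparison operators infix 4 (non-associative), so mixing them is rejected outright. An illustrative GHCi session (error text abridged):

ghci> 2 ** 3 ** 2       -- infixr 8: parsed as 2 ** (3 ** 2)
512.0
ghci> (2 ** 3) ** 2
64.0
ghci> 1 <= 2 < 3        -- (<=) and (<) are both infix 4, non-associative
error: Precedence parsing error
    cannot mix '<=' [infix 4] and '<' [infix 4] in the same infix expression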
On Fri, 12 Apr 2019 at 21:45, Ivan Perez
Correct me if I'm wrong here.
On Fri, 12 Apr 2019 at 05:21, Richard O'Keefe wrote:
How does the right associativity of the short-circuiting Boolean operators in any way contradict the way that such operators work in other languages? These operators are associative, so a && (b && c) necessarily has the same value and effects as (a && b) && c.
In pure Haskell, perhaps, but in other languages, I would say no.
In a language like C, I would expect that:
- a && b && c be represented in the AST as (a && b) && c
- The compiler optimizes the implementation of && to short circuit, which is, in some way, using laziness.
This is not to say that they are right-associative; it's just a compiler optimization.
It has never been the case that all operators in all programming languages were left associative. For addition and subtraction it matters; you don't want a-b+c interpreted as a-(b+c), but not for || and not for &&. My expectation is that these operators should be right associative.
I can't find any reference for logic itself and, because /\ is introduced as associative from the start in propositional logic, it does not really matter. However, my training as a kid in math, and the way I was taught to evaluate (+) left to right (implicitly associating to the left), would have led me to intuitively parse / parenthesize conjunctions with multiple (&&) the same way unless instructed otherwise.
I think this portion of the Haskell Report is also relevant to this intuition in the case of haskell programmers: "If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity" (Section 4.4.2).
Cheers
Ivan

Your "training as a kid in math" almost certainly did not include any discussion of logical operators, and I would be astonished if you had been told that a ** b ** c was defined to be a ** (b ** c) back in 1950-something, or that there is a famous programming language where A-B-C-D means A-(B-(C-D)) that is nearly as old.
This is clearly not what I said. Also, I never implied my training to be 'special'. I'm sure people were taught math similarly. Finally, and most importantly: I would ask that you stop speculating about what I did or did not learn, or when, so as to keep this civil and focused on the technical discussion, not the people.

Ivan

On 13.04.19 at 12:14, Richard O'Keefe wrote:
I would be astonished if you had been told that a ** b ** c was defined to be a ** (b ** c) back in 1950-something,
Actually we were told, with the reasoning that (a ** b) ** c is the same as a ** (b * c). I recall that it was presented as "nothing new there, so not worth defining it that way". Truth be told, that was 1970-something for me.

On 13/04/2019 at 14:29, Joachim Durchholz quotes Richard O'Keefe:
I would be astonished if you had been told that a ** b ** c was defined to be a ** (b ** c) back in 1950-something,
Actually we were told, with the reasoning that (a ** b) ** c is the same as a ** (b * c). I recall that it was presented as "nothing new there, so not worth defining it that way".
Truth be told, that was 1970-something for me.
'70?? Even worse... I began my school in '50-something, and I was duly taught that. And without "nothing new here", which I find rather unpleasantly surprising. My teacher pointed out that (a**b)**c is equal to a**(b*c), so the left associativity would not be extremely clever.

Joachim says in his previous posting:

I guess my intuition is more based on math, where associativity is an irrelevant detail

Now, this is for me a *REALLY* peculiar vision of math. Irrelevant detail?? Where? In the categorical calculus perhaps? Abandon the associativity of morphisms, and you will see... In Lie algebras maybe? Well, add the associativity to it, and kill all the quantum theory. Good luck.

There are many people, mainly young (e.g. my students), who have a tendency to "see mathematics" through "computer lenses" - parsing, implementable data structures, recursion as an implementation detail, etc. For the mathematical culture this is harmful.

Jerzy Karczmarczuk

On 13.04.19 at 15:34, Jerzy Karczmarczuk wrote:
Joachim says in his previous posting:
I guess my intuition is more based on math, where associativity is an irrelevant detail
Now, this is for me a *REALLY* peculiar vision of math. Irrelevant detail??
Please look at the context: This is about left vs. right associativity (the parsing property). Which is irrelevant if the operator is associative (the algebraic property).

Regards,
Jo

On Apr 12, 2019, at 5:21 AM, Richard O'Keefe wrote:
How does the right associativity of the short-circuiting Boolean operators in any way contradict the way that such operators work in other languages?
If you look at definitions of other languages (C, Java), you see that the operators are defined to be left-associative. Perhaps those other languages got it wrong, then. :)

In any case, this conversation has been illuminating. Thanks!

Richard

On 2019-04-12 8:47 AM, Richard Eisenberg wrote:
Perhaps those other languages got it wrong, then. :)
I don't think so. It's just that short-circuit evaluation isn't a direct consequence of associativity in those languages because they're otherwise strict and this is an exception to their default semantics.

On 2019-04-12 10:47 a.m., Richard Eisenberg wrote:
If you look at definitions of other languages (C, Java), you see that the operators are defined to be left-associative. Perhaps those other languages got it wrong, then. :)
In any case, this conversation has been illuminating. Thanks!
I am late to this discussion but here is my solution. This is really just story-telling to end-users.

The real story you want to tell everyone is this: "x && y && z && t" means Scheme's "(and x y z t)", and it means you try the sequence from left to right, stopping at the first occurrence of "false".

To those people who evaluate an AST in the bottom-up order, e.g., C programmers, this sounds like left-associating, because the left is the more likely evaluated, so these people need to envision a left-leaning AST. So you comfort them with "yeah!".

To those people who evaluate an AST in the top-down order, e.g., Haskell programmers, this sounds like right-associating, because the right is the more likely skipped, so these people need to envision a right-leaning AST. So you comfort them with "yeah!".

So either (both C and Haskell are right) or (both C and Haskell are wrong). As many of you have observed, it doesn't matter: a compiler writer already knows it's "(and x y z t)", generates the correct code, and doesn't bother to split hairs.
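For what it's worth, the Haskell Prelude tells this story directly: the Report defines and = foldr (&&) True, so a list of operands is tried left to right and stops at the first False. A quick check (sketch):

main :: IO ()
main =
  -- The error is never forced: the fold stops at the first False.
  print (and [True, False, error "unreachable"])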

On 19/04/2019 at 19:31, Albert Y. C. Lai wrote:
I am late to this discussion but here is my solution.
This is really just story-telling to end-users.
The real story you want to tell everyone is this: "x && y && z && t" means Scheme's "(and x y z t)", and it means you try the sequence from left to right, stopping at the first incident of "false". [...] As many of you have observed, it doesn't matter, a compiler writer already knows it's "(and x y z t)" and generates the correct code and not bother to split hair.
Very, ehm, interesting methodology... I suspect that you missed that part of the discussion where people discussed parsing. I don't know if you ever taught compilation, but imagine that your students ask you: HOW is "x && y && z && t" transformed into "(and x y z t)"? Will your answer be: "it doesn't matter, a compiler writer already knows it's "(and x y z t)" and generates the correct code and not bother to split hair"? Everybody will be happy. Bon courage.

Jerzy Karczmarczuk

On 2019-04-19 3:43 p.m., Jerzy Karczmarczuk wrote:
On 19/04/2019 at 19:31, Albert Y. C. Lai wrote:
I am late to this discussion but here is my solution.
This is really just story-telling to end-users.
The real story you want to tell everyone is this: "x && y && z && t" means Scheme's "(and x y z t)", and it means you try the sequence from left to right, stopping at the first incident of "false". [...] As many of you have observed, it doesn't matter, a compiler writer already knows it's "(and x y z t)" and generates the correct code and not bother to split hair.
Very, ehm, interesting methodology...
I suspect that you missed that part of the discussion where people discussed parsing. I don't know if you ever taught compilation, but imagine that your students ask you:
HOW is "x && y && z && t" transformed into "(and x y z t)"?
Will your answer be:
it doesn't matter, a compiler writer already knows it's "(and x y z t)" and generates the correct code and not bother to split hair
What would you tell students about commas and semicolons in the following? Are these commas and semicolons left associating? Right associating? Both? Neither? Has anyone even asked? How to parse them? I would tell the same.

Pascal's "begin foo() ; tora() ; tigger() end"
C's "x = (y=10 , y=f(y) , y=g(y) , y);"
Haskell's "f x | g x > 0 , h x < 0 , sin x > 0 = ()"
Prolog's "g(X,Y) :- parent(X,C1) , parent(C1,C2) , parent(C2,Y)."
Matlab's "[3+4i , 3 , 5-i ; 1-i , 1+i , 1 ; 7+8i , 4-3i , -i]"

After the non-answer of Albert Y. C. Lai about the associativity of logical connectives:
a compiler writer already knows it's "(and x y z t)" and generates the correct code and not bother to split hair.
I issued a somewhat acrimonious remark pointing out that a parsing question should not be answered by saying that a compiler writer "knows".
imagine that your students ask you: HOW is "x && y && z && t" transformed into "(and x y z t)"?
I got my reward...
What would you tell students about commas and semicolons in the following? Are these commas and semicolons left associating? Right associating? Both? Neither? Has anyone even asked? How to parse them? I would tell the same.
Pascal's "begin foo() ; tora() ; tigger() end" C's "x = (y=10 , y=f(y) , y=g(y) , y);" Haskell's "f x | g x > 0 , h x < 0 , sin x > 0 = ()" Prolog's "g(X,Y) :- parent(X,C1) , parent(C1,C2) , parent(C2,Y)." Matlab's "[3+4i , 3 , 5-i ; 1-i , 1+i , 1 ; 7+8i , 4-3i , -i]"
Now I don't know whether Albert Y. C. Lai is pulling my leg, or he really confuses operator grammars with other ways of parsing...

Everybody who taught Prolog knows that commas and semicolons in this language ARE logical connectives, so replacing one non-answer by another one, "I would tell the same", is not an appropriate response. There IS a concrete answer to this question: both operators are xfy, with well defined precedence.

In Pascal, commas and semicolons are not operators at all, and the standard parsing is a recursive top-down old machinery (well, it was, when I studied the Wirth & Ammann compiler sources). The associativity is implicitly specified by the grammar productions. In Matlab the syntactic connectives in matrices are not operators either.

In one detail Albert Y. C. Lai is absolutely right, namely that this discussion is completely pointless.

Jerzy Karczmarczuk

On 12.04.19 at 11:21, Richard O'Keefe wrote:
It has never been the case that all operators in all programming languages were left associative. For addition and subtraction it matters; you don't want a-b+c interpreted as a-(b+c), but not for || and not for &&. My expectation is that these operators should be right associative.
What is the basis for this expectation? My expectation would be left associativity, just because I read stuff from left to right, and left associativity is what I get if I mentally parse from left to right. So I'm curious what thought process arrives at the opposite expectation. Regards, Jo

On Apr 12, 2019, at 3:15 PM, Joachim Durchholz wrote:
What is the basis for this expectation? My expectation would be left associativity, just because I read stuff from left to right, and left associativity is what I get if I mentally parse from left to right. So I'm curious what thought process arrives at the opposite expectation.
Since (&&) short-circuits on first failure, and (||) short-circuits on first success, right associativity is more intuitive:

a && b && c == a && (b && c)
a || b || c == a || (b || c)

as each can stop immediately when `a` is respectively False or True. This matches the fact that (&&) is naturally strict in its first argument and lazy in the second, which works well with currying.

Also consider that, for example, in "ghci":

foldr (&&) True (replicate 100000000000 False)

returns immediately, while:

foldl (&&) True (replicate 100000000000 False)

hangs, which clearly shows that right folds are the more natural way to combine these boolean operators.

And reading left to right *is*, IMHO, right associative! You see:

a || ...

and immediately process a, then move on to what follows. When reading an English sentence, you don't have to reach the last word before you can start to make sense of the preceding words; for left associativity, try German... [ Oops! Never mind, perhaps that explains the difference in perspective. :-) :-) ]

-- Viktor.
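For reference, the Prelude definitions (as given in the Haskell 2010 Report) make the strict-in-the-left, lazy-in-the-right behaviour and the fixities under discussion explicit:

infixr 3 &&
infixr 2 ||

(&&), (||) :: Bool -> Bool -> Bool
True  && x = x
False && _ = False

True  || _ = True
False || x = x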

On 13.04.19 at 01:16, Viktor Dukhovni wrote:
On Apr 12, 2019, at 3:15 PM, Joachim Durchholz wrote:
What is the basis for this expectation? My expectation would be left associativity, just because I read stuff from left to right, and left associativity is what I get if I mentally parse from left to right. So I'm curious what thought process arrives at the opposite expectation.
Since (&&) short-circuits on first failure, and (||) short-circuits on first success, right associativity is more intuitive:
a && b && c == a && (b && c)
a || b || c == a || (b || c)
as each can stop immediately when `a` is respectively False or True. This matches the fact that (&&) is naturally strict in its first argument and lazy in the second, which works well with currying.
I guess my intuition is more based on math, where associativity is an irrelevant detail, then LR parsing, where left associativity requires less stack work. BTW I have a feeling that the LR parsing process is pretty natural: we read symbols from left to right, mentally combining them into groups as soon as possible so we can abstract away from individual symbols.
Also consider that, for example, in "ghci":
foldr (&&) True (replicate 100000000000 False)
returns immediately, while:
foldl (&&) True (replicate 100000000000 False)
hangs, which clearly shows that right folds are the more natural way to combine these boolean operators.
That's a Haskellism. Not that anything is wrong with that, of course :-)
And reading left to right, *is* IMHO right associative! You see:
a || ...
and immediately process a, then move on to what follows.
No, you see a || ... but you don't process anything, there's just a and ||. Then you see a || b ... but you still don't know what to do with that, because maybe the next operator has higher precedence (maybe &&, maybe even + or *). Then you see a || b || ... and now you can tick off the first three symbols: whatever || ... where somewhere back in your mind, whatever === a || b.
When reading an English sentence, you don't have to reach the last word before you can start to make sense of the preceding words, for left associativity, try German...
[ Oops! Never mind, perhaps that explains the difference in perspective. :-) :-) ]
Don't worry :-) Though I don't think that natural language processing works at any conscious level, so I don't think that that influence is too important. If German were the main influence, I would have to insist on postfix, btw.
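To connect the parsing view with fixity declarations, here is a toy fragment (all names invented here) showing how the declared associativity, not the operator's semantics, decides the shape of the tree a parser builds:

-- Token lists like ["a","||","b","||","c"] turned into syntax trees.
data Expr = Var String | Or Expr Expr
  deriving Show

-- Right-associative reading: a || (b || c)
parseR :: [String] -> Expr
parseR [x]             = Var x
parseR (x : "||" : ts) = Or (Var x) (parseR ts)
parseR _               = error "malformed input"

-- Left-associative reading: (a || b) || c
parseL :: [String] -> Expr
parseL = foldl1 Or . map Var . filter (/= "||")

main :: IO ()
main = do
  print (parseR ["a", "||", "b", "||", "c"])
  -- Or (Var "a") (Or (Var "b") (Var "c"))
  print (parseL ["a", "||", "b", "||", "c"])
  -- Or (Or (Var "a") (Var "b")) (Var "c")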
participants (16)

- Albert Y. C. Lai
- Brandon Allbery
- David Feuer
- Ivan Perez
- Jerzy Karczmarczuk
- Joachim Durchholz
- Jon Fairbairn
- Li-yao Xia
- Neil Mayhew
- Richard Eisenberg
- Richard O'Keefe
- Ryan Reich
- Stefan Monnier
- Stefan Monnier
- Theodore Lief Gannon
- Viktor Dukhovni