Why is $ right associative instead of left associative?

Hi - In the Haskell 98 report, section 4.4.2, $ is specified as being right-associative. This means that f $ a0 a1 $ b0 b1 parses as f (a0 a1 (b0 b1)), which seems rather strange to me. Surely it would be much more useful if $ were defined as left-associative, so that it could be used to separate the arguments to f? Does anyone know why this strange associativity was chosen?

(The reason I'm asking is that I'm working on the syntax of a language similar to Haskell, but which uses layout to allow expressions like:

    f #$    -- can be followed by an explicit block or layout block
        a0 a1
        b0 b1

which is sugar for (f $ a0 a1) $ b0 b1, i.e. f (a0 a1) (b0 b1), and I was surprised to discover that the parentheses are needed for the most obvious reading.)

Thanks, Brian.
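(A minimal runnable sketch of the two readings; the left-associative operator ($$) is hypothetical, defined here only for contrast with the standard Prelude ($), and f is a filler function of mine:)

    infixl 0 $$
    ($$) :: (a -> b) -> a -> b
    ($$) g x = g x

    f :: Int -> Int -> Int      -- filler binary function
    f x y = x + y

    main :: IO ()
    main = do
        -- right-associative ($): everything to the right becomes one argument
        print (negate $ subtract 1 $ 10)   -- negate (subtract 1 10) == -9
        -- left-associative ($$): each $$ separates one argument of f
        print (f $$ 1 + 2 $$ 3 * 4)        -- (f (1 + 2)) (3 * 4) == 15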

On Sat, Feb 04, 2006 at 02:52:20PM -0000, Brian Hulley wrote:
Hi - In the Haskell 98 report, section 4.4.2, $ is specified as being right-associative. This means that f $ a0 a1 $ b0 b1 parses as f (a0 a1 (b0 b1)), which seems rather strange to me. Surely it would be much more useful if $ were defined as left-associative, so that it could be used to separate the arguments to f?
Does anyone know why this strange associativity was chosen?
Probably it was anticipated that the right-associative version would be more useful. You can use it to create a chain of transformations, similar to a chain of composed functions:

    (f . g . h) x = f $ g $ h $ x

Example:

    map f $ group $ sort $ filter g $ l

But of course, the left-associative version can also be useful. Some time ago I used a left-associative version of the strict application operator, which I named (!$).

Anyway, you can't always remove all parentheses. And why would you want to? Everybody is used to them.

Best regards, Tomasz
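(To make the chain concrete, here is a runnable version; the input list and the choices of length for f and even for g are my own filler:)

    import Data.List (group, sort)

    result :: [Int]
    result = map length $ group $ sort $ filter even $ [3,1,4,1,5,9,2,6,2]
    -- == map length (group (sort (filter even [3,1,4,1,5,9,2,6,2])))
    -- == map length (group [2,2,4,6])
    -- == [2,1,1]

    main :: IO ()
    main = print result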

Tomasz Zielonka wrote:
On Sat, Feb 04, 2006 at 02:52:20PM -0000, Brian Hulley wrote:
Hi - In the Haskell 98 report, section 4.4.2, $ is specified as being right-associative. This means that f $ a0 a1 $ b0 b1 parses as f (a0 a1 (b0 b1)), which seems rather strange to me. Surely it would be much more useful if $ were defined as left-associative, so that it could be used to separate the arguments to f?
Does anyone know why this strange associativity was chosen?
Probably it was anticipated that the right-associative version would be more useful. You can use it to create a chain of transformations, similar to a chain of composed functions:
(f . g . h) x = f $ g $ h $ x
Example:
map f $ group $ sort $ filter g $ l
But of course, the left-associative version can also be useful. Some time ago I used a left-associative version of the strict application operator, which I named (!$).
I wonder if anyone has done empirical studies to determine scientifically which associativity would be more useful in practice, e.g. by analysing source code involving $ and comparing the number of parentheses that would be needed in each case, and perhaps also some studies involving the number of confused readers in each case...

Even though both versions are useful, it seems to me that, faced with choosing an associativity for an operator that does function application, and given that prefix application is left-associative, there is one clear winner; but unfortunately the Haskell committee didn't see it this way, and perhaps it is too late to ever change this (just like :: and :, which were mixed up for reasons unknown) - especially since chains can already be composed using ".".
Anyway, you can't always remove all parentheses. And why would you want to? Everybody is used to them.
$'s advertised purpose is to remove parentheses, but I agree that parenthesized code is often more readable (especially when operators have unexpected fixities... :-)) Regards, Brian.

Brian Hulley wrote:
Tomasz Zielonka wrote:
On Sat, Feb 04, 2006 at 02:52:20PM -0000, Brian Hulley wrote:
Hi - In the Haskell 98 report, section 4.4.2, $ is specified as being right-associative. This means that f $ a0 a1 $ b0 b1 parses as f (a0 a1 (b0 b1)), which seems rather strange to me. Surely it would be much more useful if $ were defined as left-associative, so that it could be used to separate the arguments to f?
Does anyone know why this strange associativity was chosen?
Probably it was anticipated that the right-associative version would be more useful. You can use it to create a chain of transformations, similar to a chain of composed functions:
(f . g . h) x = f $ g $ h $ x
Actually I'm beginning to think this might be more useful after all.
Example:
map f $ group $ sort $ filter g $ l
But of course, the left-associative version can also be useful. Some time ago I used a left-associative version of the strict application operator, which I named (!$).
I suppose I could use $$ for left-associative application, and #$$ for layout application.
I wonder if anyone has done empirical studies to determine scientifically which associativity would be more useful in practice, e.g. by analysing source code involving $ and comparing the number of parentheses that would be needed in each case, and perhaps also some studies involving the number of confused readers in each case...
Even though both versions are useful, it seems to me that, faced with choosing an associativity for an operator that does function application, and given that prefix application is left-associative, there is one clear winner; but unfortunately the Haskell committee didn't see it this way, and perhaps it is too late to ever change this (just like :: and :, which were mixed up for reasons unknown) - especially since chains can already be composed using ".".
It would be very useful if the Haskell report explained *why* decisions were made, because there often seem to be good reasons that are not immediately obvious and are sometimes counter-intuitive.

I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.

Regards, Brian.

On Sat, Feb 04, 2006 at 07:15:47PM -0000, Brian Hulley wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I am not convinced. Even if you really want to write types for every top-level binding, it's only one :: per binding, which can have a definition spanning many lines and as complicated a type as you want. On the other hand, when you are doing complicated list processing, it is not uncommon to have four (or more) :'s per _line_.

Personally, I started my FP adventure with OCaml (which has the thing the other way around), and I felt that the meanings of :: and : should be reversed - before I even knew Haskell!

Best regards, Tomasz

Brian wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I don't think Haskell Prime should be about changing the look and feel of the language. Regards, Stefan

Stefan Holdermans wrote:
Brian wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I don't think Haskell Prime should be about changing the look and feel of the language.
Perhaps it is just a matter of aesthetics about :: and :, but I really feel these symbols have a de facto meaning that should have been respected, and that Haskell Prime would be a chance to correct this error. However, no doubt I'm alone in this view, so fair enough - it's just syntax after all, and I can run my own programs through a pre-processor if I want them the other way round... :-) Regards, Brian.

On 04/02/06, Brian Hulley
Stefan Holdermans wrote:
Brian wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I don't think Haskell Prime should be about changing the look and feel of the language.
Perhaps it is just a matter of aesthetics about :: and :, but I really feel these symbols have a de facto meaning that should have been respected, and that Haskell Prime would be a chance to correct this error. However, no doubt I'm alone in this view, so fair enough - it's just syntax after all, and I can run my own programs through a pre-processor if I want them the other way round... :-)
Regards, Brian.
In Haskell, they have a de facto meaning which is opposite to the one you're talking about :) Besides, lots of papers and various other programming languages use Haskell's convention (which was taken from Miranda). - Cale

On 2006-02-04 at 21:15GMT "Brian Hulley" wrote:
Stefan Holdermans wrote:
Brian wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I don't think Haskell Prime should be about changing the look and feel of the language.
Perhaps it is just a matter of aesthetics about :: and :, but I really feel these symbols have a de facto meaning that should have been respected, and that Haskell Prime would be a chance to correct this error. However, no doubt I'm alone in this view, so fair enough
Not exactly alone; I've felt it was wrong ever since we argued about it for the first version of Haskell. ":" for typing is closer to common mathematical notation. But it's far too late to change it now.
- it's just syntax after all
It is indeed.

Jón
-- Jón Fairbairn, Jon.Fairbairn at cl.cam.ac.uk

Jon Fairbairn wrote:
Brian Hulley wrote:
<snip>
Not exactly alone; I've felt it was wrong ever since we argued about it for the first version of Haskell. ":" for typing is closer to common mathematical notation.
But it's far too late to change it now.
- it's just syntax after all
Well, I'm reconsidering my position that it's "just" syntax. Syntax does after all carry a lot of semiotics for us humans, and if there are centuries of use of ":" in mathematics that are just to be discarded because someone in some other language decided to use it for list cons, then I think it makes sense to correct this.

It would be impossible to get everything right first time, and I think the Haskell committee did a very good job with Haskell, but just as there can be bugs in a program, so there can also be bugs in a language design, and an interesting question is how these can be addressed.

For example, in the Prolog newsgroup several years ago, there was also a discussion about changing the list cons operator, because Prolog currently uses ".", which is much more useful for forming composite names - something which I also think has become a de facto inter-language standard. Although there was much resistance from certain quarters, several implementations of Prolog had in fact changed their list cons operator (list cons is hardly ever needed in Prolog due to the [Head|Tail] sugar) to reclaim the dot for its "proper" use.

My final suggestion, if anyone is interested, is as follows:

1) Use ":" for types.
2) Use "," instead of ";" in the block syntax, so that all brace blocks can be replaced by layout if desired (including record blocks).
3) Use ";" for list cons. ";" is already used for forming lists in natural language, and has the added advantage that (on my keyboard at least) you don't even need to press the shift key! ;-)

Regards, Brian.

On 2006-02-05, Brian Hulley
Jon Fairbairn wrote:
Brian Hulley wrote:
<snip>
Not exactly alone; I've felt it was wrong ever since we argued about it for the first version of Haskell. ":" for typing is closer to common mathematical notation.
But it's far too late to change it now.
- it's just syntax after all
Well, I'm reconsidering my position that it's "just" syntax. Syntax does after all carry a lot of semiotics for us humans, and if there are centuries of use of ":" in mathematics that are just to be discarded because someone in some other language decided to use it for list cons, then I think it makes sense to correct this.
It would be impossible to get everything right first time, and I think the Haskell committee did a very good job with Haskell, but just as there can be bugs in a program, so there can also be bugs in a language design, and an interesting question is how these can be addressed.
For example, in the Prolog newsgroup several years ago, there was also a discussion about changing the list cons operator, because Prolog currently uses ".", which is much more useful for forming composite names - something which I also think has become a de facto inter-language standard. Although there was much resistance from certain quarters, several implementations of Prolog had in fact changed their list cons operator (list cons is hardly ever needed in Prolog due to the [Head|Tail] sugar) to reclaim the dot for its "proper" use.
My final suggestion, if anyone is interested, is as follows:
1) Use ":" for types.
2) Use "," instead of ";" in the block syntax, so that all brace blocks can be replaced by layout if desired (including record blocks).
3) Use ";" for list cons. ";" is already used for forming lists in natural language, and has the added advantage that (on my keyboard at least) you don't even need to press the shift key! ;-)
Regards, Brian.
If anything, using ',' for block syntax and ';' for lists is backwards. ',' is used for generic lists in English, whereas ';' is used for separating statements or lists. But I like the current syntax just fine. -- Aaron Denney

On Sun, Feb 05, 2006 at 01:10:24PM -0000, Brian Hulley wrote:
2) Use "," instead of ";" in the block syntax so that all brace blocks can be replaced by layout if desired (including record blocks)
Wouldn't it be better to use ; instead of , also for record syntax? Best regards, Tomasz

Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 01:10:24PM -0000, Brian Hulley wrote:
2) Use "," instead of ";" in the block syntax so that all brace blocks can be replaced by layout if desired (including record blocks)
Wouldn't it be better to use ; instead of , also for record syntax?
I thought of this also, but the nice thing about using commas everywhere is that it is consistent with tuples and lists:

    [a,b,c]   (a,b,c)   {a,b,c}

I admit it takes some getting used to to write

    map f (h;t) = f h ; map f t

but you couldn't use commas in tuple syntax if they were also used as list cons. Also, I'm using

    test :{Eq a, Show a} a -> ()

instead of

    test :: (Eq a, Show a) => a -> ()

and the comma here is particularly nice because it suggests a set, which is exactly what the context is. Regards, Brian.

"Brian Hulley"
My final suggestion, if anyone is interested, is as follows:
1) Use ":" for types.
2) Use "," instead of ";" in the block syntax, so that all brace blocks can be replaced by layout if desired (including record blocks).
3) Use ";" for list cons. ";" is already used for forming lists in natural language, and has the added advantage that (on my keyboard at least) you don't even need to press the shift key! ;-)
My language uses \ for cons, and ? for lambda. -- Marcin 'Qrczak' Kowalczyk <qrczak@knm.org.pl> http://qrnik.knm.org.pl/~qrczak/

Actually, one of the main reasons that we chose (:) is that that's what Miranda used. So, at the time at least, it was not entirely clear what the "de facto universal inter-language standard" was. In any case, I agree with Stefan regarding Haskell Prime! -Paul

Stefan Holdermans wrote:
Brian wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I don't think Haskell Prime should be about changing the look and feel of the language.
Regards,
Stefan

G'day all.
Quoting Paul Hudak
Actually, one of the main reasons that we chose (:) is that that's what Miranda used. So, at the time at least, it was not entirely clear what the "de facto universal inter-language standard" was.
Exactly. One point that's often not appreciated is that Haskell is not a descendant of ML. The ML lineage is, roughly:

    Lisp -> ISWIM -> ML -> SML, LML, O'Caml etc.

And the Haskell lineage is:

    Lisp -> ISWIM -> SASL -> KRC -> Miranda -> Haskell

ML is much more like an older cousin than an ancestor. This point is important because the Turner languages already had a list syntax at the time that they adopted an ML-like type system. Cheers, Andrew Bromage

These lineages are more or less right, except that there is a bit of incest: LML is certainly one of the progenitors of Haskell (more semantically than syntactically, though). Cheers, --Joe

ajb@spamcop.net said:
G'day all.
Quoting Paul Hudak
Actually, one of the main reasons that we chose (:) is that that's what Miranda used. So, at the time at least, it was not entirely clear what the "de facto universal inter-language standard" was.
Exactly. One point that's often not appreciated is that Haskell is not a descendant of ML. The ML lineage is, roughly:
Lisp -> ISWIM -> ML -> SML, LML, O'Caml etc
And the Haskell lineage is:
Lisp -> ISWIM -> SASL -> KRC -> Miranda -> Haskell
ML is much more like an older cousin than an ancestor.
This point is important because Turner languages already had a list syntax at the time that they adopted an ML-like type system.
Cheers, Andrew Bromage

Tomasz Zielonka wrote:
On Sat, Feb 04, 2006 at 07:15:47PM -0000, Brian Hulley wrote:
I think the mystery surrounding :: and : might have been that originally people thought type annotations would hardly ever be needed, whereas list cons is often needed. But now that it is regarded as good practice to put a type annotation before every top-level value binding, and as the type system becomes more and more complex (e.g. with GADTs etc.), type annotations are presumably far more common than list cons, so it would be good if Haskell Prime swapped these operators back to their de facto universal inter-language standard of list cons and type annotation respectively.
I am not convinced. Even if you really want to write types for every top-level binding, it's only one :: per binding, which can have a definition spanning many lines and as complicated a type as you want. On the other hand, when you are doing complicated list processing, it is not uncommon to have four (or more) :'s per _line_.
I wonder if extending the sugared list syntax would help here. The | symbol is used for list comprehensions, but something along the lines of

    [a,b,c ; tail] === a :: b :: c :: tail    -- where :: means list cons

would mean there would seldom be any need to use the list cons symbol anywhere except for sections.

I would use "," instead of ";" in the block syntax so that ";" could be freed for the above use, and so that there would be a generic block construct {,,,} that could be used for records also (and could always be replaced by layout), e.g.

    P {x=5, y=6}

could also be written as

    P #    -- # allows a layout block to be started
        x = 5
        y = 6
Personally, I started my FP adventure with OCaml (which has the thing the other way around), and I felt that the meanings of :: and : should be reversed - before I even knew Haskell!
I see what you mean ;-). However, the swapping of :: and : really is very confusing when one is used to things being the other way round. Also, in natural language, ":" seems to have a much closer resonance with the type/kind annotation meaning than with constructing a list.

I also wonder if it is such a good idea to make lists so special. Does this influence our thinking subconsciously to use list-based solutions when some other data structure may be better? Regards, Brian.

[a,b,c ; tail] === a :: b :: c :: tail -- where :: means list cons
How is [a,b,c ; tail] simpler, clearer or less typing than a:b:c:tail? I think the commas and semicolons are easy to confuse.

While we're talking about the aesthetics of "::" and ":", I like how a line with a type annotation stands out strongly with "::", e.g.

    map :: (a -> b) -> [a] -> [b]

Compare this to

    map : (a -> b) -> [a] -> [b]

where the identifier looks more connected to the type. You will notice this is different from ML anyway, because in Haskell you can separate the type annotation and the declaration.

If you are designing your own language, you will of course have your own aesthetics and reasons for doing it your way. As for me, I started to design (in my head) the "perfect" language (for me), but the more I learned and used Haskell, the more I realized how carefully designed it was, and how it was better for me to use my efforts to learn from Haskell (especially conceptually, since the syntax is so transparent and the ideas are so amazing) than to try to insert clever ideas to satisfy my own whims. Sure, there are always little things to nitpick, but on the whole, can you think of a more succinct language with more power? (And with fewer parentheses!) Plus, what other languages let you so easily add (infix) operators and change things to fit your whim, anyway (and still be strongly typed!)?

Cheers, Jared.

Jared Updike wrote:
[a,b,c ; tail] === a :: b :: c :: tail -- where :: means list cons
How is [a,b,c ; tail] simpler, clearer or less typing than a:b:c:tail? I think that the commas and semicolons are easy to confuse.
It seems strange that you can write [a,b,c] with a nice list sugar but if you want to include a tail you have to switch to the infix notation using list cons. Prolog for example allows you to write [a,b,c|Tail] but neither Haskell nor ML allows this. In Haskell, | is used to introduce a list comprehension so I was just trying to find a replacement symbol for when you want the equivalent of the Prolog list sugar so that you wouldn't be forced to use infix notation. All this was not to replace a:b:c:tail but was to replace a::b::c::tail so that : could be used for type annotations instead.
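(For comparison, this is what the Prolog-style [a,b,c|Tail] has to look like in today's Haskell, using the infix cons pattern; the function name is mine:)

    firstThreeAndRest :: [a] -> Maybe (a, a, a, [a])
    firstThreeAndRest (a:b:c:rest) = Just (a, b, c, rest)
    firstThreeAndRest _            = Nothing

    main :: IO ()
    main = print (firstThreeAndRest [1,2,3,4,5 :: Int])
    -- prints: Just (1,2,3,[4,5])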
While we're talking about the aesthetics of "::" and ":", I like how a line with a type annotation stands out strongly with "::", e.g.

    map :: (a -> b) -> [a] -> [b]

Compare this to

    map : (a -> b) -> [a] -> [b]

where the identifier looks more connected to the type.
This is no doubt because you are used to thinking of : as meaning list cons whereas I find the :: notation confusing for precisely the same reason.
If you are designing your own language, you will of course have your own aesthetics and reasons for doing it your way. As for me, I started to design (in my head) the "perfect" language (for me), but the more I learned and used Haskell, the more I realized how carefully designed it was, and how it was better for me to use my efforts to learn from Haskell (especially conceptually, since the syntax is so transparent and the ideas are so amazing) than to try to insert clever ideas to satisfy my own whims. Sure, there are always little things to nitpick, but on the whole, can you think of a more succinct language with more power? (And with fewer parentheses!) Plus, what other languages let you so easily add (infix) operators and change things to fit your whim, anyway (and still be strongly typed!)?
I've spent at least 3 years looking into various languages (Prolog, ML, Scheme, Lisp, Logo, Smalltalk, Mozart) before finally arriving at the pleasant shores of Haskell, which has an incredibly well thought out and neat syntax, not to mention a powerful type system.

However, there are some things that are just plain crazy, such as the existing layout rule, which forces you to use a monospaced font when it would have been so simple to make a simpler layout rule using tabs - one that would result in grammatically robust code in the face of identifier renamings and editor font choices (as I've indicated on other threads) - and the way constructors and field labels for different types are allowed to collide with each other in a module's namespace. Feedback from this forum has been invaluable in helping me find a solution to this second problem and arrive at a good field selection syntax, with semantics based on a compiler-defined global typeclass for each field label, although it has taken at least 3 months to discover this...

There are some funny things with the semantics I'm puzzled about, such as why Haskell is still based on Hindley-Milner type inference, with its troublesome monomorphism restrictions, instead of intersection types, which would allow everything to be mutually recursive with separate compilation of modules etc.; but AFAIK no other language uses intersection types yet either, and Haskell is at least heading in the right direction with arbitrary-rank forall polymorphism.

So I agree that Haskell is one of the best languages in existence at the moment, all things considered, especially because it is a very good place to start when trying to develop the "perfect" language. Regards, Brian.

Brian Hulley wrote:
Jared Updike wrote:
[a,b,c ; tail] === a :: b :: c :: tail -- where :: means list cons
How is [a,b,c ; tail] simpler, clearer or less typing than a:b:c:tail? I think that the commas and semicolons are easy to confuse.
It seems strange that you can write [a,b,c] with a nice list sugar but if you want to include a tail you have to switch to the infix notation using list cons. Prolog for example allows you to write [a,b,c|Tail] but neither Haskell nor ML allows this. In Haskell, | is used to introduce a list comprehension so I was just trying to find a replacement symbol for when you want the equivalent of the Prolog list sugar so that you wouldn't be forced to use infix notation.
All this was not to replace a:b:c:tail but was to replace a::b::c::tail so that : could be used for type annotations instead.
There is the .. operator, which is unused in pattern-matching contexts. So maybe:

    case [1,3..] of
        [a,b,c,tail..]   -> tail     -- I like this one, the ..] catches the eye better
        [a,b,c,..tail]   -> tail     -- I think this is less clear at a glance
        [a,b,c,..tail..] -> tail     -- I expect no one to like this
        [a,b,c,_..]      -> [a,b,c]  -- Not the best looking thing I've ever seen
        [a,b,c,.._]      -> [a,b,c]  -- ditto
        [a,b,c,.._..]    -> [a,b,c]  -- ick

But this implies [a,b,c,[]..] is the same as [a,b,c], and [a,b,c,[d,e,f]..] is the same as [a,b,c,d,e,f], and [a,b,c,[d,e,f..]..] is [a,b,c,d,e,f..]. Wacky.

On Sat, 2006-02-04 at 23:34 +0000, Chris Kuklewicz wrote: . . .
But this implies [a,b,c,[]..] is the same as [a,b,c] and [a,b,c,[d,e,f]..] is the same as [a,b,c,d,e,f] and [a,b,c,[d,e,f..]..] is [a,b,c,d,e,f..]
Hmmm, does this get us to difference lists à la Prolog? -- Bill Wood

G'day all.
Quoting Tomasz Zielonka
Probably it was anticipated that right associative version will be more useful. You can use it to create a chain of transformations, similar to a chain of composed functions:
(f . g . h) x = f $ g $ h $ x
Of course, if $ were left-associative, it would be no less useful here, because you could express this chain thusly:

    f . g . h $ x

This is the way that I normally express it. Partly because I find function application FAR more natural than right-associative application, and partly because I'm hedging my bets for Haskell 2, just in case the standards committee wakes up and notices that the associativity of $ is just plain wrong and decides to fix it. :-)

In fact, I'll go out on a limb and claim that ALL such uses of $ are better expressed with composition. Anyone care to come up with a counter-example?
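(A quick check of the claim, with filler functions of my own choosing; both styles denote the same pipeline:)

    import Data.List (group, sort)

    viaApply, viaCompose :: [Int] -> [[Int]]
    viaApply   xs = group $ sort $ filter even $ xs
    viaCompose xs = group . sort . filter even $ xs

    main :: IO ()
    main = print (viaApply [3,1,2,2] == viaCompose [3,1,2,2])   -- True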
But of course, the left-associative version can also be useful. Some time ago I used a left-associative version of the strict application operator, which I named (!$).
In fact, I think it's much MORE useful, and for precisely the reason that you state: it makes strict application much more natural. Strict application also has the wrong associativity. As it is, $! is only useful if the _last_ argument of a function needs to be strict. I find that ordering my arguments in a de Bruijn-like order (which many experienced functional programmers do unconsciously) results in this being the least common case. The last argument of a function is usually the induction argument: it's almost invariably the subject of a top-level test, and the strictness analyser invariably picks up that the argument is strict. It's the OTHER arguments you may need to evaluate early.

Suppose you have a function with three arguments, the second of which needs to be strict. I want to write something like this:

    f (g x) $! (h y) $ (j z)

What I have to write is this:

    (f (g x) $! (h y)) (j z)

or this:

    let y' = h y in y' `seq` f (g x) y' (j z)
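(A sketch of what the left-associative operators would make possible; the names (!$) and ($$) are hypothetical, both placed at precedence 0 so they can be chained, unlike the Prelude's right-associative ($) and ($!); the functions f, g, h and j are filler definitions of mine:)

    infixl 0 !$, $$

    (!$) :: (a -> b) -> a -> b      -- strict application
    (!$) g x = x `seq` g x

    ($$) :: (a -> b) -> a -> b      -- lazy application
    ($$) g x = g x

    f :: Int -> Int -> Int -> Int   -- filler: a function whose second
    f _ y _ = y                     -- argument we want to force early

    g, h, j :: Int -> Int
    g = (+ 1); h = (* 2); j = subtract 3

    main :: IO ()
    main = print (f (g 1) !$ h 2 $$ j 3)
    -- parses left-associatively as ((f (g 1) !$ h 2) $$ j 3),
    -- forcing only the second argument; prints 4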
Anyway, you can't always remove all parentheses. And why would you want to? Everybody is used to them.
I agree. However, sometimes parentheses make things more confusing. Almost always the best solution is to give the offending subexpression a name, using "let" or "where"; the specific case above is the only one that I've found where this, too, makes things worse.

In summary: there is no good reason to make $ right-associative, and at least one good reason to make it left-associative. Cheers, Andrew Bromage

G'day all. Quoting ajb@spamcop.net:
This is the way that I normally express it. Partly because I find function application FAR more natural than right-associative application,
I meant to say that I find function COMPOSITION more natural than right-associative application. It certainly fits better with my personal biases about good functional programming style. Cheers, Andrew Bromage

ajb@spamcop.net wrote:
G'day all.
Quoting ajb@spamcop.net:
This is the way that I normally express it. Partly because I find function application FAR more natural than right-associative application,
I meant to say that I find function COMPOSITION more natural than right-associative application. It certainly fits better with my personal biases about good functional programming style.
Yes, the case you've made for $ being left-associative is very compelling - namely, that the existing associativity actively encourages a *bad* programming style, in which the right-associative $ hides the composition in a chain of function applications, instead of allowing the composition to be explicit and neatly separate from its argument.

Moreover, the existing associativity of $ implies that whoever thought it up was confusing two concepts - application and composition - instead of allowing "$" to take its proper place as an equal citizen to ".", with the associativity proper to its role as application alone. Thus if $ were made left-associative in Haskell Prime, this would add clarity to the thought forms associated with the language, which would (presumably) in turn lead to better programs being written in it. Regards, Brian.

On Sat, Feb 04, 2006 at 07:02:52PM -0500, ajb@spamcop.net wrote:
G'day all.
Hello!
Quoting Tomasz Zielonka
Probably it was anticipated that the right-associative version would be more useful. You can use it to create a chain of transformations, similar to a chain of composed functions:
(f . g . h) x = f $ g $ h $ x
Of course, if $ were left-associative, it would be no less useful here, because you could express this chain thusly:
f . g . h $ x
OK, I can be persuaded to use this style. I like function composition much more than $ :-)
This is the way that I normally express it. Partly because I find function application FAR more natural than right-associative application, and partly because I'm hedging my bets for Haskell 2 just in case the standards committee wakes up and notices that the associativity of $ is just plain wrong and decides to fix it. :-)
Is there any chance that Haskell' will change the definition of $? Well, if there is any moment where we can afford introducing backward-incompatible changes to Haskell', I think it's now or never!
In fact, I'll go out on a limb and claim that ALL such uses of $ are better expressed with composition. Anyone care to come up with a counter-example?
The only problem I see right now is related to change locality. If I have a chain like this:

    f x y . g x $ z

and I want to add some transformation between g and z, I have to change one line and insert another:

    f x y . g x . h x y $ z

With right-associative $ it would be only one line-add. Probably not a very strong argument.
But of course, the left-associative version can also be useful. Some time ago I used a left-associative version of the strict application operator, which I named (!$).
In fact, I think it's much MORE useful, and for precisely the reason that you state: it makes strict application much more natural.
Agreed. Best regards, Tomasz

Tomasz Zielonka wrote:
The only problem I see right now is related to change locality. If I have a chain like this:
f x y . g x $ z
and I want to add some transformation between g and z, I have to change one line and insert another
f x y . g x . h x y $ z
With right-associative $ it would be only one line-add. Probably not a very strong argument.
How about:

    f x y
        . g x
        $ z

then you only need to add the line

        . h x y

This is similar to how people often format lists:

    a = [ first
        , second
        , third
        ]

Regards, Brian.

On Sun, Feb 05, 2006 at 01:14:42PM -0000, Brian Hulley wrote:
How about:
    f x y
        . g x
        $ z
then you only need to add the line
. h x y
But then you have a problem when you want to add something at the beginning ;-) With right-assoc $, adding at both ends is OK.
This is similar to how people often format lists:
    a = [ first
        , second
        , third
        ]
I am one of those people, and I am slightly annoyed when I have to add something at the beginning of the list. I even went so far that, when I had a list of lists which were concatenated, I put an empty list at the front:

    concat $ [ []
             , [...]
             , [...]
             ...
             ]

Best regards, Tomasz

Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 01:14:42PM -0000, Brian Hulley wrote:
How about:
    f x y
        . g x
        $ z
then you only need to add the line
. h x y
But then you have a problem when you want to add something at the beginning ;-) With right-assoc $, adding at both ends is OK.
This is similar to how people often format lists:
    a = [ first
        , second
        , third
        ]
I am one of those people, and I am slightly annoyed when I have to add something at the beginning of the list. I even went so far that, when I had a list of lists which were concatenated, I put an empty list at the front:
    concat $ [ []
             , [...]
             , [...]
             ...
             ]
Just in case you are interested, in the "preprocessor" I'm writing, I would write these examples as:

    (.) #>
        f x y
        g x
        h x y
      $ z

and

    a = #[
        first
        second
        third

where exp #> {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...]. (exp #> block and exp #< block are the right- and left-associative versions respectively, and the special # sugar allows a layout block to be started if it occurs at the end of a line.)

This allows me to avoid having to type lots of syntax, e.g. repeating the "." all the time, and focus on the semantics... Regards, Brian.
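(A hand-desugaring of the first sketch into plain Haskell, under my reading of the proposed #> sugar; all the filler definitions are my own:)

    -- (.) #> { f x y, g x, h x y } $ z  should desugar to:
    example :: Int
    example = (let a = (.) in a (f x y) (a (g x) (h x y))) $ z
    --        i.e.  f x y . g x . h x y $ z
      where
        f _ _ = (+ 1)               -- filler definitions
        g _   = (* 2)
        h _ _ = subtract 3
        x = 1 :: Int
        y = 2 :: Int
        z = 10 :: Int

    main :: IO ()
    main = print example            -- ((10 - 3) * 2) + 1 == 15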

On Sun, Feb 05, 2006 at 04:36:44PM -0000, Brian Hulley wrote:
Just in case you are interested, in the "preprocessor" I'm writing, I would write these examples as:
    (.) #>
        f x y
        g x
        h x y
      $ z
and

    a = #[
        first
        second
        third
where exp #> {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...] (exp #> block and exp #< block are the right and left associative versions respectively and the special # sugar allows a layout block to be started if it occurs at the end of a line)
Well... I care about change locality and the like, but I'm not sure I would use such syntax (as a means of communication between programmers). Perhaps that's because I am not used to it and it looks alien. Or rather, it's because I still put readability first.
This allows me to avoid having to type lots of syntax, e.g. repeating the "." all the time, and focus on the semantics...
At some point you (the programmer) are going to be doing the work of a compression program ;-) There is some limit to terseness. Haskell's syntax is quite concise, but it could be even more so. Why isn't it? Because it would cease to resemble the mathematical notation; it would cease to be readable. Well, even Haskell could be more readable, but there's also some point where further investment in concise lexical syntax doesn't pay off. I am not sure that's the situation here, but... think about it.

PS. One wonders why you don't take the Lisp way, with a good Lisp editor? Aren't you designing Lisp without parentheses? ;-)

Best regards, Tomasz

Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 04:36:44PM -0000, Brian Hulley wrote:
Just in case you are interested, in the "preprocessor" I'm writing, I would write these examples as:
    (.) #>
        f x y
        g x
        h x y
      $ z
and

    a = #[
        first
        second
        third
where exp #> {e0,e1,...} is sugar for let a = exp in a e0 (a e1 (a ... ) ...)) and #[ {e0, e1, ... } is sugar for [e0, e1, ...] (exp #> block and exp #< block are the right and left associative versions respectively and the special # sugar allows a layout block to be started if it occurs at the end of a line)
Well... I care about change locality and the like, but I'm not sure I would use such syntax (as a means of communication between programmers). Perhaps that's because I am not used to it and it looks alien. Or rather, it's because I still put readability first.
It is true that it looks quite alien at first, but consider that it allows you to use longer identifiers for function names (because they now only need to be written once), which could actually enhance readability, e.g.

    Prelude.compose #>
        f x y
        g x
        h x y
      $ z

so perhaps people would start using more real words instead of obscure symbols like >=+=< etc.

Also, the less use of infix notation the better, because every infix symbol requires the reader to search for the fixity declaration and then try to simulate a precedence parser, at the same time as grappling with the semantics of the code itself. The #>, #< notation solves this problem by making the sugared associativity immediately visible, and the use of layout further enhances the direct visual picture of what's happening.

Anyway, it's just an idea I thought I'd share - I'm sure there's no danger of it ever ending up in a future Haskell... ;-) Regards, Brian.

Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 01:14:42PM -0000, Brian Hulley wrote:
How about:
    f x y
        . g x
        $ z
But then you have a problem when you want to add something at the beginning ;-)
How about:

    id
        . f x y
        . g x
        $ z

-- Ben

On Sun, Feb 05, 2006 at 06:58:15PM +0000, Ben Rudiak-Gould wrote:
Tomasz Zielonka wrote:
But then you have a problem when you want to add something at the beginning ;-)
How about:
id . f x y . g x $ z
Yes, I've thought about it. You are using a neutral element of (.), just like I used [] as a neutral element of (++) (or concat). Best regards, Tomasz

On Sun, 5 Feb 2006, Tomasz Zielonka wrote:
On Sun, Feb 05, 2006 at 01:14:42PM -0000, Brian Hulley wrote:
This is similar to how people often format lists:
    a = [ first
        , second
        , third
        ]
I am one of those people, and I am slightly annoyed when I have to add something at the beginning of the list.
In this case I prefer the non-sugared variant:

    a = first : second : third : []

On Sun, 2006-02-05 at 13:49 +0100, Tomasz Zielonka wrote: . . .
and I want to add some transformation between g and z, I have to change one line and insert another
f x y . g x . h x y $ z
With right-associative $ it would be only one line-add. Probably not a very strong argument.
Maybe stronger than you think. I know that one of the arguments for making ";" a C-style delimiter rather than a Pascal-style separator is that adding a new statement at the end of a series is error-prone -- one tends to forget to add the ";" in front of the new statement (and one reason Pascal syntax included the "null" statement was so that "s1;" would parse as "s1; null", making ";" a de facto delimiter). Editing ease matters more than a little. -- Bill Wood

G'day all.
Quoting Tomasz Zielonka
Is there any chance that Haskell' will change the definition of $ ?
Well, if there is any moment where we can afford introducing backward incompatible changes to Haskell', I think it's now or never!
I'm not convinced about this. The purpose of Haskell', as I understand it, is to fix the problem that no large Haskell programs (to a first approximation) are valid H98 because they require some quite reasonable language extensions. Partly this is because research is an ongoing area. Partly this is because the purpose of H98 was to make a simple language suitable for teaching. There _is_ a time coming when H98's true successor will need to be made. I'm not convinced that that time is now or never. Cheers, Andrew Bromage

On 2/4/06, Brian Hulley
Does anyone know why this strange associativity was chosen?
I think it's very natural. Everything after the $, including other $ expressions, is applied to the stuff before the $. This saves me from a lot of nested parentheses.

It seems to me that the left-associative version of $ does not decrease the nesting level so effectively.

-- Taral

Taral wrote:
I think it's very natural. Everything after the $, including other $ expressions, is applied to the stuff before the $. This saves me from a lot of nested parentheses.
To me, ($) helping me to avoid writing lots of parentheses makes it extremely useful. Actually, except for passing function application to higher-order functions, this is the only way I use it. So I always thought parentheses were *the* reason for the right-associativity of ($). Not sure if that really was the reason originally but, even so, I think it is the best reason. Regards, Stefan

On Sat, Feb 04, 2006 at 08:37:51PM +0100, Stefan Holdermans wrote:
Taral wrote:
I think it's very natural. Everything after the $, including other $ expressions, is applied to the stuff before the $. This saves me from a lot of nested parentheses.
To me, ($) helping me to avoid writing lots of parentheses makes it extremely useful. Actually, except for passing function application to higher-order functions, this is the only way I use it. So I always thought parentheses were *the* reason for the right-associativity of ($). Not sure if that really was the reason originally but, even so, I think it is the best reason.
A left-associative low-precedence application operator can also help avoid writing parentheses, only in different cases, e.g.

    f $$ x + 1 $$ x * x + 2 * x + 1

equals

    f (x + 1) (x * x + 2 * x + 1)

But in this case the parentheses don't nest, which may be a reason why a right-associative version was chosen: ($) helps to avoid the case of nesting parentheses. Such nesting is unbounded; for example, you can have chains like this of arbitrary length:

    a (b (c (d (e (f x)))))

even if you only have unary functions. Also, adding or removing a function in such a chain can require non-local changes; that is, you are forced to add or remove a closing parenthesis at the end of the expression. If you use ($):

    a $ b $ c $ d $ e $ f x

you can easily add or remove a function in the chain. On the other hand, adding new parameters to calls like

    f (x + 1) (y - 1) ...

is very localised.

Best regards, Tomasz
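(A runnable version of the sketch above; ($$) is the hypothetical left-associative application operator under discussion, and f is a filler definition of mine:)

    infixl 0 $$
    ($$) :: (a -> b) -> a -> b
    ($$) g x = g x

    f :: Int -> Int -> Int
    f a b = a * b                   -- filler definition

    main :: IO ()
    main = print (f $$ 3 + 1 $$ 3 * 3 + 2 * 3 + 1)
    -- parses as f (3 + 1) (3*3 + 2*3 + 1) == f 4 16 == 64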

No one has mentioned yet that it's easy to change the associativity of $ within a module in Haskell 98:

    import Prelude hiding (($))

    infixl 0 $
    f $ x = f x

or, for the purists:

    import Prelude hiding (($))
    import qualified Prelude (($))

    infixl 0 $
    ($) = (Prelude.$)

-- Ben
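(A quick sanity check, assuming the redefinition above is in scope in the current module:)

    main :: IO ()
    main = print ((+) $ 1 $ 2)
    -- with infixl 0 $, this parses as ((+) $ 1) $ 2 == 3;
    -- with the standard infixr ($) it would not even type-check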

On Sun, Feb 05, 2006 at 02:27:45AM +0000, Ben Rudiak-Gould wrote:
No one has mentioned yet that it's easy to change the associativity of $ within a module in Haskell 98:
import Prelude hiding (($))
infixl 0 $
f $ x = f x
or, for the purists,
import Prelude hiding (($)) import qualified Prelude (($))
infixl 0 $
($) = (Prelude.$)
But that would break Copy & Paste between modules! ;-) Best regards, Tomasz
participants (16)
- Aaron Denney
- ajb@spamcop.net
- Ben Rudiak-Gould
- Bill Wood
- Brian Hulley
- Cale Gibbard
- Chris Kuklewicz
- Henning Thielemann
- Jared Updike
- Jon Fairbairn
- Joseph H. Fasel III
- Marcin 'Qrczak' Kowalczyk
- Paul Hudak
- Stefan Holdermans
- Taral
- Tomasz Zielonka