
Can we agree that it's possible to go too far? Even in the global scope: https://github.com/Quotation/LongestCocoa.

I think another factor that comes into play is how frequently you use the function or value. If it comes up only once in a while, then every time someone encounters it they are likely to have to remind themselves what all the abbreviations mean. If it comes up more frequently, then they likely already saw it ten lines ago and remember. If it comes up very frequently, then whatever name you assign it takes on its own meaning. Think about `map`, or, in bash land, `ls` and `cd`. Unless you just started using bash, you're not going to think "Oh, cd stands for 'change directory'". It's just… cd.

At the same time, using a long name for something that's going to come up every couple of lines is, IMO, distracting. Having all of the information be within what your eye can easily take in at once has huge advantages. Not to mention, can you imagine writing `changeDirectory` and `printDirectoryContents` every time? It's a balancing act; obviously, the opposite extreme carries its own problems.
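To put the same point in Haskell terms (a throwaway sketch, with made-up names):

```haskell
-- A one-off operation: spell it out, because the next reader will not
-- remember what an abbreviation stood for.
normalizeLineEndings :: String -> String
normalizeLineEndings = filter (/= '\r')

-- A combinator used every few lines: a short, conventional name reads
-- like punctuation, the way `map`, `ls` and `cd` do.
(|>) :: a -> (a -> b) -> b
x |> f = f x

cleanup :: [String] -> [String]
cleanup files = files |> map normalizeLineEndings |> filter (not . null)
```

--Taeer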

Command shell is a very old and peculiar human interface — if human at all. They had 80 character wide screens and visibly lagging connexion. We have retina screens, GPU accelerated rendering and magical auto-completion. Imagine those commands were buttons that you could press. Would you prefer a button to say _«`ls`»_ or _«list files»_, _«`cd`»_ or _«change directory»_? For a vivid example, imagine a web site where you have `lgn` and `plrq` instead of _«log in»_ and _«pull request»_. Would you like that?

Getting back to Mathematics — this is where abstraction and notation come in. We can give names to things and we can use scoping. But neither mathematicians nor system administrators invent new terminology for every new paper or script — maybe a few key words. Industrial programming is yet another thing. I imagine when you have a record with a hundred fields it pays off to have longish field labels. And you cannot expect the next person to read your code from top to bottom so that they get used to its peculiar vocabulary. For example, today I am merging two branches of a 10-thousand-line code base that diverged last spring. Guessing the difference between `rslt` and `res` is the last thing I need. You do not need to call your context _«`ctx`»_ and your result _«`rslt`»_ — there are already words for them.

There was a time when every character needed to be typed and there was a rectangle of only some 80×25 characters visible at once. Things have changed. I can have two terminals of 119×61 characters each with a fixed-width font, and perhaps twice that with a proportional one. The number of characters it takes to type an identifier in a smart code editor is proportional to the logarithm of the number of identifiers in scope, not to their length.
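To make the point about field labels concrete, a small sketch with invented names:

```haskell
-- Invented field names: full words cost a few keystrokes once and save
-- a guess every time someone reads the record.
data RenderContext = RenderContext
  { screenWidth      :: Int
  , screenHeight     :: Int
  , fontFamily       :: String
  , backgroundColour :: String
  }

-- Compare guessing what `rCtx { scrW = 80 }` was supposed to mean.
defaultContext :: RenderContext
defaultContext = RenderContext
  { screenWidth      = 80
  , screenHeight     = 25
  , fontFamily       = "monospace"
  , backgroundColour = "black"
  }
```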

On 2020-09-21 00:30, Ignat Insarov wrote:
We have retina screens, GPU accelerated rendering and magical auto-completion.
Who's "we"? 1 and 2 are constrained by money. 1 and 3 are also constrained by personal preference. I need a _big_ screen because I also do photo processing besides coding, and I can't afford two screens, one big and another high-density. And I use completion only by explicit request (i.e. a specific keybinding), because it isn't (and can't be) good enough to always be right, and for me the stress of correcting it when it's wrong outweighs the benefit of the common case, when it's right. Veering off-topic, hence redirecting replies to myself. -- Ian

My guess is that most Café members would agree that if given a choice between buttons with short labels and buttons with long labels they would ask to revert to the CLI. Maybe I'm wrong here, but I think the Unix-style CLI is not going anywhere any time soon, regardless of retina screens and any other bells and whistles of modern GUIs. Don't get me wrong, GUIs are great for a lot of specific tasks, but the CLI still outshines them in many areas, and I think short command names are part of the reason for that.

Not just command names, in fact: in a one-liner I would rather write `while read a; do cp $a...; done` than `while read filename; do cp $filename...; done`. They are exactly the same, but it's simply easier to deal with a short line — read it, edit it, etc.

And, as Haskell makes it easier to use and control local variables, I think it's even more forgiving about short names. In C you'd need a descriptive name simply to make sure you don't use your loop counter somewhere inside the loop; but in Haskell, due to immutability, you usually don't have to worry.
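For instance (just a sketch, names made up), a short name whose whole life is one tiny scope costs the reader nothing:

```haskell
-- `s` and `acc` live only inside these definitions, cannot be
-- reassigned, and cannot leak into an enclosing scope, so the short
-- names carry no risk.
totalSize :: [(FilePath, Integer)] -> Integer
totalSize entries = sum [ s | (_, s) <- entries ]

-- The same with a fold; compare a mutable C loop counter that has to be
-- named defensively because anything in the function body may touch it.
totalSize' :: [(FilePath, Integer)] -> Integer
totalSize' = foldr (\(_, s) acc -> s + acc) 0
```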

One's capacity to write (and then be able to read) lots of little functions is greatly helped by a tool that lets you jump to the definition of a term. Emacs + hasktags, for instance.

In addition to names, moving as much logic as possible into the types is helpful. You're liable to update a function and leave a stale comment; not so its type signature. With good types and names, I find that almost the only things I need to comment are surprises, pitfalls, gotchas. (And with dependent types I'm sure I'd need fewer of those.)
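A small made-up illustration: the unit lives in the type rather than in a comment, so the compiler keeps it honest when the code changes:

```haskell
-- Hypothetical units: a comment saying "seconds, not milliseconds"
-- could go stale; these newtypes cannot.
newtype Seconds = Seconds Double
newtype Millis  = Millis Double

toMillis :: Seconds -> Millis
toMillis (Seconds s) = Millis (s * 1000)
```

Callers can no longer confuse the two units, and the signature keeps telling the truth after every refactor.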
Time spent rewriting with no other goal in mind is, if you're going to keep coming back to the code, generally time well spent. Anything that confuses you when you revisit code deserves at the least a comment, if not rewriting the code itself.

Naming *should* be hard. A good name defines for a human in a few words what takes a paragraph to define for the computer. It almost requires a different mindset, a second level of awareness -- less how, more what and why.
-- Jeff Brown | Jeffrey Benjamin Brown LinkedIn https://www.linkedin.com/in/jeffreybenjaminbrown | Github https://github.com/jeffreybenjaminbrown | Twitter https://twitter.com/carelogic | Facebook https://www.facebook.com/mejeff.younotjeff | very old Website https://msu.edu/~brown202/

Ignat Insarov wrote:
Command shell is a very old and peculiar human interface — if human at all. They had 80 character wide screens and visibly lagging connexion. We have retina screens, GPU accelerated rendering and magical auto-completion.
You might have that. I, as a blind braille display user, have one line of at most 80 characters. When I am traveling, I typically only have 40 characters. Given these constraints, I still consider a command line interface superior to any sort of buttonized nonsense. But that's just me, apparently. -- CYa, ⡍⠁⠗⠊⠕
participants (6)
- Ian Zimmerman
- Ignat Insarov
- Jeffrey Brown
- Mario Lang
- MigMit
- Taeer Bar-Yam