
The command shell is a very old and peculiar human interface — if human at all. Its users had screens 80 characters wide and visibly lagging connexions. We have retina screens, GPU-accelerated rendering, and magical auto-completion. Imagine those commands were buttons you could press. Would you prefer a button that says _«`ls`»_ or _«list files»_, _«`cd`»_ or _«change directory»_? For a vivid example, imagine a web site with `lgn` and `plrq` instead of _«log in»_ and _«pull request»_. Would you like that?

Getting back to Mathematics — this is where abstraction and notation come in. We can give names to things, and we can use scoping. But neither mathematicians nor system administrators invent new terminology for every new paper or script — at most a few key words.

Industrial programming is yet another matter. I imagine that when you have a record with a hundred fields, it pays off to have longish field labels. And you cannot expect the next person to read your code from top to bottom until they get used to its peculiar vocabulary. For example, today I am merging two branches of a 10 thousand line code base that diverged last spring. Guessing the difference between `rslt` and `res` is the last thing I need. You do not need to call your context _«`ctx`»_ and your result _«`rslt`»_ — there are already words for them.

There was a time when every character had to be typed, and only a rectangle of some 80×25 characters was visible at once. Things have changed. I can have two terminals of 119×61 characters each with a fixed-width font, and perhaps twice that with a proportional one. And the number of characters it takes to type an identifier in a smart code editor is proportional to the logarithm of the number of identifiers in scope, not to their length.
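To make the point concrete, here is a minimal sketch in Python. The record, its fields, and the function are all hypothetical — invented for illustration, not taken from any particular code base. It contrasts the terse style with descriptive names that an editor will complete for you anyway:

```python
from dataclasses import dataclass

# Terse style: the reader must guess what `ctx`, `to`, and `att` mean,
# and whether `to` is a timeout, a target, or something else entirely.
@dataclass
class Ctx:
    to: float
    att: int

# Descriptive style: the same record, self-explanatory at the call site.
# With completion, typing `Retry⇥` costs no more keystrokes than `Ctx`.
@dataclass
class RetryContext:
    timeout_seconds: float
    attempts_made: int

def remaining_attempts(context: RetryContext, max_attempts: int) -> int:
    """How many retries are still allowed under the given limit."""
    return max(0, max_attempts - context.attempts_made)
```

A reader merging a diff that touches `context.attempts_made` needs no surrounding context to understand it; a diff touching `ctx.att` forces a trip back to the definition.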