On Sun, Apr 27, 2014 at 9:02 PM, Brandon Allbery <allbery.b@gmail.com> wrote:
On Sun, Apr 27, 2014 at 7:45 AM, Rustom Mody <rustompmody@gmail.com> wrote:
1. I'd like to underscore the 'arbitrary'.  Why is ASCII any less arbitrary -- apart from an increasingly irrelevant historical accident -- than Arabic, Bengali, Cyrillic, Deseret? [Hint: What's the A in ASCII?]  By contrast, math may at least have some pretensions to universality?

Math notations are not as universal as many would like to think, sadly.

And I am not sure the historical accident is really irrelevant; as the same "accident" was involved in most of the computer languages and protocols we use daily, I would not be at all surprised to find that there are subtle dependencies buried in the whole mess --- similar to how (most... sigh) humans pick up language and culture signals as children too young to apply any kind of critical analysis to them, and can have real problems trying to eradicate or modify them later. (Yes, languages can be fixed. But how many tools do you use when working with them? It's almost certainly more than the ones that immediately come to mind or are listed on e.g. Hackage. In particular, that ligature may be great in your editor and unfortunate when you pop a terminal and grep for it --- especially if you start extending this to other languages so you need a different set of ligatures [a different font!] for each language....)


Nice point!

And as I said above, that Pandora's box is already wide open for current Haskell
[and Python, and probably most modern languages]. Can we reverse it?


Witness:

----------------------
$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/  :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.
Prelude> let (ﬁne,fine) = (1,2)
Prelude> (ﬁne,fine)
(1,2)
Prelude>

---------------------

If you had the choice, would you allow that ﬁ ligature to be thus confusable with the more normal fi?  I probably wouldn't, but nobody is asking us, and the water that's flowed under the bridge cannot be 'flowed' backwards (to the best of my knowledge!)
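
For the record, what GHC sees in that binding is the single code point U+FB01 for the ligature, not the letters f and i, so the two names really are distinct identifiers.  A quick check with Data.Char in GHCi shows it:

----------------------
Prelude> import Data.Char (ord)
Prelude Data.Char> map ord "ﬁne"   -- the ligature spelling
[64257,110,101]
Prelude Data.Char> map ord "fine"   -- the plain ASCII spelling
[102,105,110,101]
----------------------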

In case that seems far-fetched, consider the scenario:
1. Somebody (maybe innocently) loads code involving variables like 'fine'
into a 'ligature-happy' IDE/editor.
2. The editor quietly changes every fine to ﬁne.
3. Since all those variables are in local scope, nothing untoward is noticed.
4. Until someone loads it into an 'old-fashioned' editor... and then...

Would you like to be on the receiving end of such 'fun'?
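
One crude guard against this kind of silent substitution -- just a sketch I'm making up here, not an existing tool; the file name and the helpers nonAscii/report are invented -- is a few lines of Haskell that flag every non-ASCII character in a source file, with line, column and code point:

----------------------
-- asciilint.hs (hypothetical name): report every non-ASCII character
-- in a file.  Assumes a UTF-8 locale; usage: runghc asciilint.hs Foo.hs
import Data.Char (isAscii, ord)
import System.Environment (getArgs)

-- (line, column, character) for every non-ASCII character in the text
nonAscii :: String -> [(Int, Int, Char)]
nonAscii src =
  [ (ln, col, c)
  | (ln, lineTxt) <- zip [1 ..] (lines src)
  , (col, c)      <- zip [1 ..] lineTxt
  , not (isAscii c)
  ]

main :: IO ()
main = do
  [file] <- getArgs               -- sketch only: no argument checking
  src <- readFile file
  mapM_ report (nonAscii src)
  where
    report (ln, col, c) =
      putStrLn (show ln ++ ":" ++ show col ++ ": non-ASCII "
                ++ show c ++ " (code point " ++ show (ord c) ++ ")")
----------------------

Run over a module containing those 'fine' variables, this would at least make the ﬁ show up in a build log, even when your editor and your grep disagree about what they are looking at.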

IOW the choice
"ASCII is the universal bedrock of computers -- best to stick with it"
vs
"ASCII is arbitrary and parochial and we SHOULD move on"

is not a choice at all. We (i.e. OSes, editors, languages) have all already moved on. And moved on in a particularly ill-considered way.

For example, there used to be the minor nuisance that Linux filesystems were typically case-sensitive and Windows ones case-insensitive.

Now, with zillions of new confusables like Latin a vs Cyrillic а -- well, we have quite a mess!
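
The two letters are pixel-identical in most fonts, but the same kind of Data.Char check as before tells them apart:

----------------------
Prelude> import Data.Char (ord)
Prelude Data.Char> ord 'a'   -- Latin small letter a (U+0061)
97
Prelude Data.Char> ord 'а'   -- Cyrillic small letter a (U+0430)
1072
----------------------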

Embracing math in a well-considered and systematic way does not increase the mess; it can even reduce it.

My 2 (truly American) ¢
Rusi

PS Someone spoke of APL and someone else said Agda/Idris may be more relevant.  I wonder how many of the younger generation have heard of Squiggol (the Bird-Meertens formalism)?