
On 20/04/2017, at 8:21 PM, Joachim Durchholz wrote:
Truly strange. You really did not need "horsepower" to use Lisp in practice. There were several Lisp implementations for CP/M, and CP/M machines were hardly "raw horsepower" by any stretch of the imagination.
If that had been universally true, nobody would have bothered to think about, let alone buy those Lisp machines.
You seem to be confusing the horsepower required to run *LISP* with the horsepower required to run *an IDE*. I never used an MIT-family Lisp machine, but I spent a couple of years using Xerox Lisp machines. Those machines had 16-bit (yes, 16-bit) memory buses (so to fetch one tag+pointer "word" of memory took three memory cycles: check page table, load half word, load other half), and into 4 MB of memory fitted
 - memory management (Lisp)
 - device drivers (Lisp)
 - network stack (Lisp)
 - (networked) file system (Lisp)
 - bit mapped graphics (Lisp; later models added colour)
 - structure editor (Lisp)
 - WYSIWYG text editor with 16-bit character set (Lisp)
 - compiler (Lisp)
 - automatic error correction DWIM (Lisp)
 - interactive cross reference MasterScope (Lisp)
 - dynamic profiling better than anything I've used since (Lisp)
and -- this is what I was working on --
 - 1 MB dedicated to Prolog, including Prolog compiler (Prolog)

In terms of memory capacity and "horsepower", the D-machines were roughly comparable to an M68010-based Sun workstation. But the memory and "horsepower" weren't needed to run *Lisp*, they were needed to run all the *other* stuff. Interlisp-D was a very nice system to use. It was forked from Interlisp, which ran on machines with less than a megabyte of memory, but lacked the IDE, network stack, &c.

I never had the pleasure of using an MIT-family Lisp machine, although I worked with two people who had, so I can say it was the same issue there: they did *everything* in an *integrated* way in Lisp. Complaining that those machines needed "horsepower" is like complaining that Eclipse needs serious horsepower (my word, doesn't it just) and blaming the C code you are editing in it.

Heck, there was even a Scheme system for the Apple ][.
OK, a University lab of cheap 8086 PCs does technically count as state-sponsored, but hardly "raw horsepower". I suppose the price of TI PC-Scheme (USD95) would have put it out of reach of hobbyists /sarc.
Nah, it simply wasn't available unless you knew it existed. (I certainly didn't know.)
No, that's not what "available" means. It was offered for sale. It was advertised. There was a published book about how to use it. It really was not at all hard to find if you looked.
I'm very sad to see FUD about Lisp surviving this long.
Enough of this kind of slur.
When you say that Lisp was not used because it required serious "horsepower", you say what is not true. Lisp did not require serious "horsepower". Some of the *applications* that people wanted to write in Lisp required serious "horsepower", but they would have required such no matter what the programming language; the relevance of Lisp was simply that it made such applications *thinkable* in a way that most other languages did not.

As for Python: if we define "PC days" to include 1988, there were already about a dozen Lisps running on 286s by then, whereas Python only began in late 1989, and 1.0 was not released until 1994. So yes, you're right. But I never said Python was eagerly adopted in the "PC days".
I have been refraining from responding in kind, because some unfriendly things could be said about your mindset.
My mindset is that SML is better than Lisp and Haskell is better than SML and there are languages pushing even further than Haskell in interesting directions. I stopped counting how many programming languages I had tried when I reached 200.

Things have changed a lot. We now have compilers like SMLtoJs http://www.smlserver.org/smltojs/ so that we can compile statically typed mostly-functional code into dynamically typed strange code so that we can run it in a browser. All the old complaints about Lisp, and we end up with JavaScript. But that's OK because it has curly braces. (:-(

We have massive amounts of software in the world these days, and as far as I can tell, all of it is more or less broken. Read about the formal verification of seL4. It is downright scary how many bugs there can be in such a small chunk of code. And it's interesting that the people who did it now believe that formal verification can be *cheaper* than testing, for a given target error rate.

The great thing about Lisp in the old days was that it could massively reduce the amount of code you had to write, thus reducing the number of errors you made. The great thing about Haskell is that it pushes that even further, and the type system helps to catch errors early. And QuickCheck! What a feature!

There are other things, like PVS and Coq, which push verification at compile time further than Haskell (I never did manage to use the PVS verifier, but there's a great book about Coq), and if it turns out that bog standard programmers can't cope with that, then hiring superstandard programmers able to produce closer-to-verified code may be well worth the price. I often struggle to follow the things the brilliant people on this list do with Haskell's type system, but I very much appreciate their aim of producing correct code.
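For readers who haven't met QuickCheck: a property is just an ordinary Haskell function returning Bool, and QuickCheck generates random inputs trying to falsify it. A minimal sketch (this assumes the QuickCheck package is installed; the property name is my own, not from anything above):

```haskell
import Test.QuickCheck

-- Property: reversing a list twice yields the original list.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice  -- runs the property on 100 random lists
```

When a property fails, QuickCheck also shrinks the failing input towards a minimal counterexample, which is a large part of its practical value.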