
> either be slower than mainstream hardware or be overtaken by it in a very short space of time.
I'd like to underline the latter of those two points, and I'm impressed that you came to that conclusion as early as the eighties. I'm not into hardware research myself, but while I was working on my MSc, a couple of PhD students in the same group were developing an abstract machine and a hardware realisation for a reduction language (purely functional); that must have been the very early nineties. Unlike earlier designs, the hardware only leaned toward functional languages rather than being specific to them: mostly RISC, with large register files organised as very fast stack windows for a small number of stacks (more on why stacks matter in the postscript). Numbers from the hand-configured prototype suggested that it would be about twice as fast as contemporary standard hardware.

Which was great, until it became clear that, in the time it would have taken to go from that prototype to production, the next generation of that standard hardware would have been on the market, also twice as fast (with the generation after that already on its way).

For a non-FP example, see the One Laptop Per Child project: now that they're actually looking for firm orders, they have their first mainstream competitors, and anything those competitors do, they tend to do with a lot of backing behind them, and twice as well a short time later.

The suggestion that the mainstream might be running out of steam along one particular dimension is interesting, but in my naive view there is still a difference between a one-shot research project and a snapshot of a development pipeline with great momentum (are they still running several overlapping teams to double the flow through that pipeline?). I wouldn't want to dishearten anyone, though. Just try not to aim for a single piece of hardware, but rather for a suitable change to one of those mainstream development pipelines, perhaps?

Claus
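
P.S. The promised sketch of why stacks are the hot spot in a reduction machine. This is generic SKI-style spine reduction in Haskell, my own illustration rather than anything from the design I mentioned (which I only remember in outline): evaluation is dominated by pushing and popping application nodes, and that is exactly the traffic that register files organised as stack windows would keep out of memory.

  data Expr = S | K | I | App Expr Expr
    deriving Show

  -- Reduce to weak head normal form with an explicit spine stack:
  -- walk down the left branches of applications, pushing each
  -- argument, then rewrite at the head and continue.
  whnf :: Expr -> Expr
  whnf e = go e []
    where
      go (App f a) stack    = go f (a : stack)            -- unwind: push argument
      go I (x : stack)      = go x stack                  -- I x     -> x
      go K (x : _ : stack)  = go x stack                  -- K x y   -> x
      go S (f : g : x : st) = go (App (App f x) (App g x)) st
                                                          -- S f g x -> f x (g x)
      go hd stack           = foldl App hd stack          -- under-applied: rebuild spine

  main :: IO ()
  main = print (whnf (App (App (App S K) K) I))           -- S K K I reduces to I

In software, that stack churns through memory on every reduction step; the attraction of fast stack windows is keeping its top few entries in registers.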