
Andrew Coppin
Achim Schneider wrote:
Andrew Coppin wrote:
I wonder what would happen if you instead had a vast number of very simple proto-processors connected in a vast network. [But I'm guessing the first thing that'll happen is that the data is never where you want it to be...]
You're not thinking of neural networks, are you? The interesting thing there is that they unite code and data.
Damn; you've seen through my cunning disguise. ;-)
In all seriousness, it's easy enough to build an artificial neural network that computes a continuous function of several continuous inputs. But it's much harder to see how you'd build, say, a text parser. It's really not clear how you implement flow control with this kind of thing. It's so different from a Turing machine that it appears to render most of current computer science irrelevant. And that's *a lot* of work to redo.
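(For concreteness: the "continuous function" part really is just weighted sums fed through a squashing function. A minimal sketch in Haskell, my own toy formulation rather than any particular library:)

-- One artificial neuron: a weighted sum of its inputs, plus a bias,
-- pushed through a sigmoid to keep the output in (0, 1).
neuron :: [Double] -> Double -> [Double] -> Double
neuron weights bias inputs =
    sigmoid (bias + sum (zipWith (*) weights inputs))
  where
    sigmoid x = 1 / (1 + exp (negate x))

-- A layer is just many neurons over the same inputs. Note there is
-- no branching anywhere, only arithmetic -- which is exactly why
-- parsers and other control-heavy code are the awkward case.
layer :: [([Double], Double)] -> [Double] -> [Double]
layer ns inputs = [ neuron w b inputs | (w, b) <- ns ]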
Hmmm... fuzzy logic, plus a lot of serialisation of parallel data (that is, data currently stored in linear RAM). Don't make me think about it, or I'll be forced to confuse you with ramblings about how the brain works. (Is that control flow? ;)
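(If "fuzzy logic" sounds hand-wavy, the core of it is tiny: truth values in [0, 1], with min/max as and/or. A throwaway Haskell sketch, my own toy version, nothing from the thread:)

type Fuzzy = Double  -- a truth value in [0, 1]

fAnd, fOr :: Fuzzy -> Fuzzy -> Fuzzy
fAnd = min  -- the usual Zadeh interpretation of "and"
fOr  = max  -- and its dual for "or"

fNot :: Fuzzy -> Fuzzy
fNot x = 1 - x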
Now, if you had a network of something a bit more complicated than artificial neurons, but less complicated than an actual CPU... you'd have... I don't know, maybe something useful? It's hard to say.
You'd have something like a Cell processor, if you go for (more or less) normal control flow. Maybe we will soon see dedicated pointer RAMs, because hardware manufacturers despair while trying to design a cache manager for 1024 cores: that would make it easy to spot which core holds which pointer, and thus also easy to move the data to it. What would a Haskell compiler look like if it targeted an FPGA? That is, compiling down to configware, not to an RTS built on top of it.
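(One way to picture that last question: in EDSLs like Lava, a circuit is an ordinary Haskell value -- a netlist, not a sequence of instructions -- so "compiling to configware" means walking a data structure. A toy sketch along those lines; the types here are made up for illustration, not Lava's actual API:)

data Bit = Low | High deriving (Eq, Show)

-- A structural circuit description: no instructions, no program
-- counter, just wiring between gates.
data Circuit
  = Input String
  | And2 Circuit Circuit
  | Xor2 Circuit Circuit

-- A half adder, built as a pure data structure. A configware
-- backend would lay this out as gates; here we just simulate it.
halfAdder :: Circuit -> Circuit -> (Circuit, Circuit)
halfAdder a b = (Xor2 a b, And2 a b)  -- (sum, carry)

simulate :: [(String, Bit)] -> Circuit -> Bit
simulate env (Input n)  = maybe Low id (lookup n env)
simulate env (And2 x y) =
    if simulate env x == High && simulate env y == High then High else Low
simulate env (Xor2 x y) =
    if simulate env x /= simulate env y then High else Low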