
"Neil Davies"
Bulat
That was done to death as well in the '80s - dataflow architectures, where execution was data-availability driven. The issue becomes one of getting the most out of the available silicon area. Unfortunately, with very small amounts of computation per work unit: a) you spend a lot of time/area making the matching decision - working out what to do next; b) the basic sequential blocks of code are too small - they can't be pipelined efficiently.
Locality is essential for performance. It is needed to hide all the (relatively large) latencies in fetching things.
If anyone wants to build the new style of functional programming execution hardware, it is those issues that need to be sorted.
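The firing rule and the matching overhead Neil describes can be made concrete with a toy software sketch: a node fires as soon as all of its input tokens are present, and after every firing the scheduler must scan for newly enabled consumers. This is a hypothetical illustration (the node names and graph are invented), not any real dataflow machine's design; note how the scan after each firing is exactly the "what to do next" matching cost, paid per tiny work unit.

```python
import operator
from collections import deque

def run_dataflow(nodes, initial_tokens):
    """Data-availability-driven execution over a static graph.

    nodes: {name: (op, (input_a, input_b))} - each node fires once,
    when both of its inputs carry a value.
    initial_tokens: {name: value} - values injected at the graph's inputs.
    """
    values = dict(initial_tokens)            # name -> produced value
    ready = deque(n for n, (_, ins) in nodes.items()
                  if all(i in values for i in ins))
    scheduled = set(ready)
    while ready:
        name = ready.popleft()
        op, ins = nodes[name]
        values[name] = op(*(values[i] for i in ins))   # the node "fires"
        # Matching step: scan for consumers whose inputs are now all
        # available. This per-firing search is the time/area overhead
        # when each node does only a trivial amount of work.
        for m, (_, m_ins) in nodes.items():
            if m not in values and m not in scheduled \
                    and all(i in values for i in m_ins):
                ready.append(m)
                scheduled.add(m)
    return values

# Hypothetical example graph computing (a + b) * (a - b).
demo = {
    "sum":  (operator.add, ("a", "b")),
    "diff": (operator.sub, ("a", "b")),
    "prod": (operator.mul, ("sum", "diff")),
}
```

Running `run_dataflow(demo, {"a": 5, "b": 3})` fires `sum` and `diff` as soon as `a` and `b` arrive, which in turn enables `prod`, yielding 16 - each firing is a few machine operations, while the matching scan touches the whole graph.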
Yes indeed, though a lot has changed (and a lot has stayed the same) in hardware since then. There may be greater possibilities for integrating garbage collection into the memory, for example, and there's always the possibility that someone will come up with a new and radically different way of spreading a functional programme across multiple CPU cores. -- Jón Fairbairn Jon.Fairbairn@cl.cam.ac.uk