As usual, you give me much to ponder. For some reason it pleases me that the world is not too concerned with what we happen to like.
But it's not true to what is *there*, and if you program for that model, you're going to get terrible performance.
I heard recently of a type system that captures the complexity of functions in their signatures. With that information available to the machine, perhaps it could plan an execution schedule that optimises for performance.
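To make that a little more concrete, here's a minimal sketch (in Rust, with names I've invented for illustration, not anything from the system I heard about) of a complexity class recorded in a signature: a zero-sized marker type carries the cost class, so a planner could in principle read it off before deciding how to schedule the call.

use std::marker::PhantomData;

// Hypothetical marker for an asymptotic cost class.
struct Quadratic;

// The result is tagged with the cost of producing it.
struct Costed<C, T> {
    value: T,
    _cost: PhantomData<C>,
}

// The signature now advertises that this sort is O(n^2).
fn insertion_sort(mut xs: Vec<i32>) -> Costed<Quadratic, Vec<i32>> {
    for i in 1..xs.len() {
        let mut j = i;
        while j > 0 && xs[j - 1] > xs[j] {
            xs.swap(j - 1, j);
            j -= 1;
        }
    }
    Costed { value: xs, _cost: PhantomData }
}

fn main() {
    let sorted = insertion_sort(vec![3, 1, 2]);
    println!("{:?}", sorted.value);
}

A real system would need the compiler to check the annotation rather than trust the programmer, but even a trusted tag gives a scheduler something to plan against.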
Your day with the HPC system sounds fascinating. Do you think that an Ada/Occam-like approach to partitioned distribution could tame the sort of address space that you encountered on the day?
Any programming model that relies on large flat shared address spaces is out; message passing that copies stuff is going to be much easier to manage than passing a pointer to memory that might be powered off when you need it.
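As a toy illustration of why I prefer the copying style, here's a minimal sketch (Rust threads and channels standing in for whatever a real HPC runtime would use): the worker receives its own copy of the data over a channel, so nothing later depends on the sender's memory still being live, or powered on.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (work_tx, work_rx) = mpsc::channel::<Vec<f64>>();
    let (result_tx, result_rx) = mpsc::channel::<f64>();

    // The worker owns whatever arrives on the channel; it never holds
    // a raw pointer into the sender's memory.
    let worker = thread::spawn(move || {
        let chunk = work_rx.recv().unwrap();
        result_tx.send(chunk.iter().sum()).unwrap();
    });

    let data: Vec<f64> = (0..1_000).map(f64::from).collect();
    work_tx.send(data.clone()).unwrap(); // send by value, not by pointer
    println!("sum = {}", result_rx.recv().unwrap());
    worker.join().unwrap();
}

In a distributed setting the channel would be replaced by MPI or some other serialised message transport, but the property I care about is the same: the receiver never depends on memory owned by another node.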
But there'll still be some call for shared memory? Or maybe only for persistence stores?
One of the presenters was working with a million lines of Fortran, almost all of it written by other people. How do we make that safe?
Ultimately only proof can verify safety. (I'm trying to address something like that in my rewrite, which, given the high quality of feedback from this list, I hope to post soon.)