Re: controlling resource usage

On Fri, 7 Sep 2001, Richard Uhtenwoldt wrote:
Finally, are we all in agreement that it would be a great thing if someone invented a Haskell-like language as good as Haskell at hiding resource usage when resource usage is unimportant, but that made it easier to control resource usage than Haskell does when resource usage is important?
Of course it's not just in functional languages that performance demands
can require rewriting shorter, more comprehensible code in less obvious
ways. (My favourite example of something so completely non-intuitive is
the Intel example from within a Sieve of Eratosthenes program that
for(i=0;i

due to cache coherency issues.) In that context, there was an old suggestion of Don Knuth's that, in the idealized future, program source code should contain not just the final, optimized code (with perhaps the original code in comments) but rather a set of layers, each representing the same semantic program at an increasing degree of optimisation, with links between corresponding `things' annotated by the programmer to explain why the transformation from the unoptimised `thing' to the optimised `thing' is valid.
[snip]
I suspect that such a system is still at least 25 years away from feasibility though :-(
That seems to me to be an overestimate. One of the things I've had at the back of my mind since just after I faded from view (as it were) is a kind of formalised hand compilation.

The background is that an automatic programme transformation system never knows as much about the problem as the programmer, so it has little chance of making the right decisions. In addition, of course, the prerequisites for certain transformations are undecidable. As a result, although research has been done on automatic programme transformation for a long time, people still rewrite programmes by hand. The same applies to ordinary compilation (after all, compilation _is_ automatic programme transformation): strictness analysis is fine, but if your programme needs strictness and the compiler can't spot it, you have to annotate by hand anyway.

So, roughly speaking, the idea is that a programme would consist of two objects: the unoptimised source code, and an optimising compiler written specifically for that programme. Obviously we'd provide default_compile, so that to begin with one would need do nothing special. Then we'd want a way of breaking down the compilation into individual functions, for which we could specify different compilation strategies. For example, if

    f [] = ...
    f x  = h x ++ f (g x)

we might specify

    compile f = default_compile (eliminate_append f)

(Not sure what the syntax should be, or how to specify to which function a new rule applies.)

At the back of my mind is something like the way proofs are constructed in HOL, though I'm thinking of a system where the ordinary programmer doesn't write new optimisations, just gets to select which ones are used on which bits of the programme. The optimisations can be fairly dull and complain if the function to which they are applied doesn't match their pattern. Producing new optimisations would be a task for a guru, probably armed with a proof assistant. Not that there's any law that some programmers can't also be such gurus, of course.

I'm certainly not in a fit state to have a go at this myself, but I'm sure there's lots of stuff from the programme transformation and proof assistant worlds that could be applied here, with the advantage that it doesn't have to work automatically. There's also scope for permitting the application of transformations that may be only partially correct -- we have that with $! already.

I'd want the compilation and optimisation stuff to be kept in a separate file from the programme itself, but would ultimately hope for tools that allowed the two to be manipulated together.

I tried mentioning this to a beginning PhD student a number of years ago, but somehow it didn't excite him. Maybe someone out there has a vision that more closely matches mine?

Jón

--
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                                         jf@cl.cam.ac.uk
Cambridge CB1 3SZ            +44 1223 570179 (after 14:00 only, please!)
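As a concrete sketch of what an eliminate_append rule might produce for a definition of the quoted shape (this is only an illustration, not part of the message above; the helpers g and h and the [] base case are hypothetical, chosen just so the example runs):

    -- Hypothetical helpers, chosen only so the example typechecks
    -- and terminates.
    g :: [Int] -> [Int]
    g = drop 1

    h :: [Int] -> [Int]
    h = map (* 2)

    -- The shape quoted above.
    f :: [Int] -> [Int]
    f [] = []
    f x  = h x ++ f (g x)

    -- One plausible result of an eliminate_append rule: introduce an
    -- accumulating "rest of the result" parameter, justified by the
    -- associativity of (++).  Invariant: fAcc x rest == f x ++ rest.
    fAcc :: [Int] -> [Int] -> [Int]
    fAcc [] rest = rest
    fAcc x  rest = h x ++ fAcc (g x) rest

    fOpt :: [Int] -> [Int]
    fOpt x = fAcc x []

Whether such a rewrite actually pays off depends on the real definitions of g, h and the base case; the point is only that the rule is a mechanical, equivalence-preserving step that the programmer selects rather than writes.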

I wrote:
due to cache coherency issues.) . . .
Whoops. I should have credited the quoted part to D Tweed. Sorry. I'm a bit woozy right now.

--
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                                         jf@cl.cam.ac.uk
Cambridge CB1 3SZ            +44 1223 570179 (after 14:00 only, please!)

On Fri, 7 Sep 2001, Jon Fairbairn wrote:
due to cache coherency issues.) In that context, there was an old suggestion of Don Knuth's that, in the idealized future, program source code should contain not just the final, optimized code (with perhaps the original code in comments) but rather a set of layers, each representing the same semantic program at an increasing degree of optimisation, with links between corresponding `things' annotated by the programmer to explain why the transformation from the unoptimised `thing' to the optimised `thing' is valid.
[snip]
I suspect that such a system is still at least 25 years away from feasibility though :-(
That seems to me to be an overestimate.
You might be right; I just have nasty 5 year old memories (that my subconscious hasn't succeeded in suppressing :-) ) of trying to coax OBJ3 into performing proofs about very simple things, and finding it so difficult that I can't imagine it being usable by a programmer with no intrinsic interest in the logic process without some extreme advances (which may just be increases in processor capacity, so that limited application of `brute force' approaches becomes more feasible).

In some ways I suspect that Knuth's idea may be simpler to implement than yours, in that there all the creativity and optimization is performed by the programmer; the computer just (very reliably) ensures equivalence between the two versions of the program, and (with vague reliability) tries to alter the optimised program as small changes are made to the original, preserving semantics but not necessarily efficiency.

But I'll certainly be keeping an eye out in the future for a system of either kind that I can use in my programming.

___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.
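To make the "machine ensures equivalence between the two versions" part concrete, here is a minimal Haskell sketch assuming QuickCheck: an unoptimised and an accumulator version of reverse, with a property stating that they agree. Random testing is of course a far weaker link than the proof Knuth's scheme would call for; the sketch only illustrates the shape of the check.

    import Test.QuickCheck (quickCheck)

    -- "Unoptimised" layer: quadratic, each step copies with (++).
    revSlow :: [Int] -> [Int]
    revSlow []     = []
    revSlow (x:xs) = revSlow xs ++ [x]

    -- "Optimised" layer: linear, via an accumulating parameter.
    revFast :: [Int] -> [Int]
    revFast = go []
      where
        go acc []     = acc
        go acc (x:xs) = go (x:acc) xs

    -- The link between the two layers, checked by the machine -- here
    -- only by random testing rather than by a proof.
    prop_sameProgram :: [Int] -> Bool
    prop_sameProgram xs = revSlow xs == revFast xs

    main :: IO ()
    main = quickCheck prop_sameProgram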

"D. Tweed" wrote:
On Fri, 7 Sep 2001, Jon Fairbairn wrote:
due to cache coherency issues.) In that context, there was an old suggestion of Don Knuth's that, in the idealized future, program source code should contain not just the final, optimized code (with perhaps the original code in comments) but rather a set of layers, each representing the same semantic program at an increasing degree of optimisation, with links between corresponding `things' annotated by the programmer to explain why the transformation from the unoptimised `thing' to the optimised `thing' is valid.
[snip]
I suspect that such a system is still at least 25 years away from feasibility though :-(
That seems to me to be an overestimate.
You might be right; I just have nasty 5 year old memories (that my subconscious hasn't succeeded in suppressing :-) ) of trying to coax OBJ3 into performing proofs about very simple things, and finding it so difficult that I can't imagine it being usable by a programmer with no intrinsic interest in the logic process without some extreme advances (which may just be increases in processor capacity, so that limited application of `brute force' approaches becomes more feasible).

In some ways I suspect that Knuth's idea may be simpler to implement than yours, in that there all the creativity and optimization is performed by the programmer; the computer just (very reliably) ensures equivalence between the two versions of the program, and (with vague reliability) tries to alter the optimised program as small changes are made to the original, preserving semantics but not necessarily efficiency.
But I'll certainly be keeping an eye out in the future for a system of either kind that I can use in my programming.
Kestrel's Specware system aims to be just that. One begins with abstract specifications in higher-order logic. You then apply semantics-preserving refinements to yield concrete code in an executable subset of the specification language (a pure functional language). Like OBJ3, Specware is coupled to a theorem prover to discharge proof obligations that arise along the way.

As you point out, expecting the machine to be creative is optimistic, so instead, program synthesis is driven by the user selecting and instantiating abstract algorithm schemes (eg divide and conquer) from a taxonomy.

A modest amount of category theory ties everything together. It provides the basis for refinement, parameterization and the structuring and composition of design knowledge.

You'll find more information at:

    http://www.kestrel.edu/

but note that we are in the process of updating our web pages.

You've mentioned an idea due to Knuth. I would be grateful if you could send a reference to it.

Best wishes,

Lindsay
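Specware's own language is not Haskell, but the flavour of "select and instantiate an abstract algorithm scheme" can be sketched in it. The following is a hedged illustration only, not Kestrel's code: a generic divide-and-conquer combinator whose parameters stand for the pieces a refinement would have to supply (and prove correct), instantiated here to mergesort.

    -- A generic divide-and-conquer scheme.
    divideAndConquer
      :: (p -> Bool)       -- is the problem trivially solvable?
      -> (p -> s)          -- solve a trivial problem directly
      -> (p -> (p, p))     -- split a non-trivial problem in two
      -> (s -> s -> s)     -- compose the sub-solutions
      -> p -> s
    divideAndConquer trivial solve split combine = go
      where
        go p | trivial p = solve p
             | otherwise = let (l, r) = split p
                           in combine (go l) (go r)

    -- Instantiating the scheme to lists, with merging as the
    -- composition step, yields mergesort.
    msort :: Ord a => [a] -> [a]
    msort = divideAndConquer trivial id split merge
      where
        trivial xs = length xs <= 1
        split xs   = splitAt (length xs `div` 2) xs
        merge [] ys = ys
        merge xs [] = xs
        merge (x:xs) (y:ys)
          | x <= y    = x : merge xs (y:ys)
          | otherwise = y : merge (x:xs) ys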

Sounds to me like you're describing Sittampalam's and de Moor's MAG system:

    http://web.comlab.ox.ac.uk/oucl/research/areas/progtools/mag.htm

John

--
Jon Fairbairn wrote:
The background is that an automatic programme transformation system never knows as much about the problem as the programmer, so it has little chance of making the right decisions. In addition, of course, the prerequisites for certain transformations are undecidable. As a result, although research has been done on automatic programme transformation for a long time, people still rewrite programmes by hand. The same applies to ordinary compilation (after all, compilation _is_ automatic programme transformation): strictness analysis is fine, but if your programme needs strictness and the compiler can't spot it, you have to annotate by hand anyway.
So, roughly speaking, the idea is that a programme would consist of two objects, the unoptimised source code, and an optimising compiler written specifically for that programme. Obviously we'd provide default_compile, so that to begin with one would need do nothing special. <more description clipped>

You've mentioned an idea due to Knuth. I would be grateful if you could send a reference to it.
I came across it in one of the papers in Knuth's book Literate Programming (can't be more precise from memory). I'm off to a conference, but I'll look it up and send you a more exact reference at the start of next week.

John Atwood's reference to the MAG system led me to its interesting homepage, which I hadn't known about before. I'll have to take a look at that when I get back. What I consider the key issue for such a system is that it be incremental, i.e. the work required upon changing a small part of the basic program should be proportional to the size of that change and not to the complexity of the entire program.

___cheers,_dave________________________________________________________
www.cs.bris.ac.uk/~tweed/pi.htm |tweed's law:  however many computers
email: tweed@cs.bris.ac.uk      | you have, half your time is spent
work tel: (0117) 954-5250       | waiting for compilations to finish.

Sounds to me like you're describing Sittampalam's and de Moor's MAG system. http://web.comlab.ox.ac.uk/oucl/research/areas/progtools/mag.htm
Yes! That's very close to what I was thinking of. The only difference is that they focus on source-to-source transformations, where I was thinking of making the whole compilation (after type-checking) available for modification; in particular, being able to play with store usage would be useful.

Jón

--
Jón Fairbairn                                 Jon.Fairbairn@cl.cam.ac.uk
31 Chalmers Road                                         jf@cl.cam.ac.uk
Cambridge CB1 3SZ            +44 1223 570179 (after 14:00 only, please!)
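As a small sketch of the kind of store usage that can already be played with by hand, via the ($!) mentioned earlier in the thread, rather than through a per-programme compilation strategy: the two hypothetical functions below compute the same sum, but only the annotated one runs in constant space.

    -- Lazy accumulator: go builds a chain of (+) thunks as long as the
    -- list before anything is evaluated, so the heap grows with the input.
    sumLazy :: [Int] -> Int
    sumLazy = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs

    -- Hand-annotated store usage: ($!) forces the accumulator at each
    -- step, keeping the intermediate heap constant.
    sumStrict :: [Int] -> Int
    sumStrict = go 0
      where
        go acc []     = acc
        go acc (x:xs) = (go $! acc + x) xs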
participants (4)
- D. Tweed
- John Atwood
- Jon Fairbairn
- Lindsay Errington