
Chris Kuklewicz wrote:
Aside on utterly useful proofs: When you write concurrent programs you want an API with strong and useful guarantees so you can avoid deadlocks and starvation and conflicting data reads/writes. Designing and using such an API is a reasoning exercise identical to creating proofs. Some systems make this possible and even easy (such as using STM).
I have to (partly) disagree with your last statement. It is possible that I missed something important, but as I see it, STM has by construction the disadvantage that absence of starvation is quite hard (if not impossible) to prove. A memory transaction must be restarted from the beginning if it finds (when trying to commit) that another task has modified one of the TVars it depends on. This means that a long transaction may in fact /never/ commit if it is interrupted often enough by short transactions which modify one of the TVars the longer transaction depends upon. IIRC this problem is mentioned in the original STM paper only in passing ("Starvation is surely a possibility", or some such comment).

I would be very interested to hear whether there has been any investigation of the consequences with regard to proving progress guarantees for programs that use STM.

My current take on this is that it might be possible to avoid this kind of starvation by appropriate scheduling. One idea is to assign time slices dynamically, e.g. to double a task's time slice whenever it must do a rollback instead of a commit (due to externally modified TVars), and to reset the time slice to the default on a (successful) commit or an exception. Another possibility is to introduce dynamic task priorities and to increase a task's priority on each unsuccessful commit. (Unfortunately I have no proof that such measures would remove the danger of starvation; they merely seem like plausible avenues for further research. I also don't know whether the current GHC runtime system already implements such heuristics.)

Cheers
Ben
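P.S. To make the failure mode concrete, here is a small sketch using Control.Concurrent.STM as shipped with GHC (the transaction sizes and the count of 50 writers are of course artificial). The long transaction touches the shared TVar many times, so any short increment that commits while it is running invalidates its read set and forces a restart from the beginning. It gets through here only because the short writers eventually stop arriving; under sustained contention there is no such guarantee.

```haskell
import Control.Concurrent
import Control.Concurrent.STM
import Control.Monad

-- A "long" transaction: reads the shared counter, performs many
-- transactional accesses to the same TVar, then writes back.  If a
-- concurrent short transaction commits a change to the TVar before
-- this one commits, the STM runtime discards all of this work and
-- restarts the transaction from the beginning.
longTransaction :: TVar Int -> STM ()
longTransaction tv = do
  n <- readTVar tv
  forM_ [1 .. 10000 :: Int] $ \_ -> readTVar tv >>= writeTVar tv
  writeTVar tv (n + 1)

-- A "short" transaction: a single increment that commits quickly
-- and can thereby invalidate a running long transaction.
shortTransaction :: TVar Int -> STM ()
shortTransaction tv = modifyTVar' tv (+ 1)

runDemo :: IO Int
runDemo = do
  tv <- newTVarIO 0
  -- Fire off 50 short writers concurrently ...
  dones <- replicateM 50 $ do
    done <- newEmptyMVar
    _ <- forkIO (atomically (shortTransaction tv) >> putMVar done ())
    return done
  -- ... while the long transaction runs.  It may be rolled back
  -- repeatedly, but commits once the short writers have drained.
  atomically (longTransaction tv)
  mapM_ takeMVar dones
  readTVarIO tv

main :: IO ()
main = runDemo >>= print
```

Each atomic transaction adds exactly one to the counter regardless of interleaving, so the program prints 51; the interesting part is how many times the long transaction had to be rolled back on the way there, which the STM API deliberately does not expose.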
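As an afterthought, the two scheduling heuristics I mentioned (doubling the time slice on rollback, raising the priority on each failed commit) can be written down as a tiny state transition for a purely hypothetical scheduler; none of the names below correspond to anything in GHC's actual runtime system:

```haskell
-- Hypothetical per-task scheduler state; purely illustrative.
data TaskState = TaskState
  { timeSlice :: Int  -- in arbitrary scheduler ticks
  , priority  :: Int
  } deriving (Eq, Show)

defaultState :: TaskState
defaultState = TaskState { timeSlice = 10, priority = 0 }

-- On a rollback caused by a conflicting commit: double the task's
-- time slice and raise its priority, so a repeatedly interrupted
-- long transaction gets ever better chances to finish.
onRollback :: TaskState -> TaskState
onRollback ts = ts { timeSlice = 2 * timeSlice ts
                   , priority  = priority ts + 1 }

-- On a successful commit (or an exception): reset to the defaults.
onCommitOrException :: TaskState -> TaskState
onCommitOrException _ = defaultState
```

After k consecutive rollbacks such a task would run with a 2^k-fold time slice at priority k, which at least makes it ever more likely to commit before the next conflicting short transaction; whether that actually rules out starvation is exactly the open question above.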