
there seem to be two issues here - can we agree on that at least?

1) scheduling non-sequential programs on sequential processors

I wasn't arguing that the scheduler should realise all possible interleavings at once. the issue I was referring to is known as "fairness" in the concurrent systems literature. as with referential transparency, various non-equivalent definitions are in use, but one informal description might be something like: if, for a given experiment, some event is possible according to the semantics, it should happen eventually if the experiment is repeated often enough. see, eg,

http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-fairness

otherwise - if the event cannot happen no matter how often the experiment is repeated - one could hardly call it "possible"?

scheduling is usually more difficult to implement with fairness guarantees than without, but reasoning about programs is more difficult without such guarantees. I wouldn't expect every Concurrent Haskell implementation to succeed in guaranteeing fairness, but I would expect the semantics to do so, and would consider any implementation failing in this to be incomplete (perhaps necessarily so, for well-documented pragmatic reasons). (there's a small sketch of the kind of starvation I have in mind below my signature.)

2) compiler optimisation validity in sequential and non-sequential environments

this was the original motivation of this thread - are sequential transformations still valid in a non-sequential setting? in general, the answer is no, to the surprise/annoyance of the many, many compiler implementors who want their valuable optimisations to keep working even when their sequential language acquires threads. (a second sketch below shows the kind of transformation that goes wrong.)

this is in fact so annoying that some languages, notably Java, have tried to specify the language semantics in such a way that some level of sequential optimisation would still be valid in spite of the language having threads - the search keyword here is "memory model". as I mentioned, they messed up the first time round, and have been through a lengthy re-design phase that involved *changes to the semantics*. I haven't followed that process - the original Java language spec of concurrency and the memory model was sufficient to drive me away from the language for good (fingers crossed..). see, eg:

http://www.cs.umd.edu/~pugh/java/memoryModel/
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html

[Concurrent Haskell doesn't seem to have this kind of memory hierarchy, but when you suggest transforming single threads based on local knowledge of shared variables, you implicitly assume a hierarchy, and a weak memory model rather than a strong one - a memory model is one way of specifying the conditions under which reordering and other transformations remain valid in a non-sequential setting]

I'd really, really prefer Concurrent Haskell not to go down a route in which demands of simpler implementation lead to subtle problems in reasoning about programs.

cheers,
claus
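
p.s. to make the fairness point in 1) concrete, here is a toy sketch (my own example, not from the thread; the name 'worker' is made up). two threads compete for the same MVar, so according to the semantics either one may take it on any given round - yet a scheduler that always hands the lock straight back to the same thread is still a legal unfair implementation, and the other thread starves. whether a particular implementation happens to schedule this nicely in practice is beside the point; the question is what the semantics promises.

    import Control.Concurrent
    import Control.Monad (forever)

    -- two workers repeatedly take and release the same lock.
    -- "worker B gets the lock" is possible according to the semantics,
    -- but without a fairness guarantee nothing forces it to ever happen.
    worker :: MVar () -> String -> IO ()
    worker lock name = forever $ do
      takeMVar lock
      putStrLn name
      putMVar lock ()

    main :: IO ()
    main = do
      lock <- newMVar ()
      _ <- forkIO (worker lock "A")
      _ <- forkIO (worker lock "B")
      threadDelay 1000000    -- observe the schedule for a second, then exit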
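
p.p.s. and for 2), a minimal sketch of a sequentially valid transformation going wrong (again my own toy example, using an IORef as the shared variable). looking at 'wait' alone, nothing in the loop writes 'flag', so a purely sequential analysis might read the flag once and hoist the read out of the loop; with a second thread setting the flag, the transformed program spins forever. whether any current Haskell compiler actually performs such a hoist is not the point - the point is that "local knowledge of shared variables" is exactly what a memory model has to constrain.

    import Control.Concurrent
    import Data.IORef

    -- busy-wait on a shared flag. sequentially, hoisting the readIORef
    -- out of the loop preserves the meaning; with the writer thread
    -- forked in main, it does not.
    wait :: IORef Bool -> IO ()
    wait flag = do
      done <- readIORef flag
      if done then return () else yield >> wait flag

    main :: IO ()
    main = do
      flag <- newIORef False
      _ <- forkIO (threadDelay 100000 >> writeIORef flag True)
      wait flag
      putStrLn "the other thread set the flag"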