RE: Optimizations for mutable structures?

On 07 December 2005 15:21, Claus Reinke wrote:
(assuming 'acts' doesn't modify 'r').
this remark is the problem.
No, Jan's transformation is correct even in a multithreaded setting. It might eliminate some possible outcomes from a non-deterministic program, but that's ok. There's no requirement that all interleavings according to the semantics have to be implemented. ..
not implementing traces promised as possible by the semantics is not a good idea, imho, as programmers have some control over the set of traces themselves.
It's unreasonable to expect an implementation to provide *all* transitions specified by the semantics. How could you tell, for one thing? For example, what if the compiler doesn't allow a context switch between two adjacent operations:

  writeIORef r e
  x <- readIORef r

This is a good example, in fact, because GHC will quite possibly compile this code into a single basic block that doesn't allow a context switch between the two statements. However, the semantics certainly allows a context switch between these two operations. Is GHC wrong? No way!

So how do you specify which transitions an implementation must provide? Perhaps there's a good formulation, but I don't know what it is.
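[For concreteness, the transformation under discussion can be sketched as below; this is an illustration only, not GHC's actual optimiser, and the names `original`/`optimised` are made up for the example.]

```haskell
import Data.IORef

-- The pattern under discussion: a write, some intervening
-- actions, then a read of the same IORef.
original :: IORef Int -> IO () -> IO Int
original r acts = do
  writeIORef r 42
  acts              -- assumed not to modify r, and not to block
  readIORef r       -- candidate for elimination

-- After the transformation: the read is replaced by the value
-- that was just written.
optimised :: IORef Int -> IO () -> IO Int
optimised r acts = do
  writeIORef r 42
  acts
  return 42
```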
in this case, acts could implement the synchronisation with another thread working on r, i.e., even if acts does not modify r itself, it might reliably cause another thread to modify r before acts can end. if such synchronisations depend on traces you have eliminated, the code would just block (without apparent reason), whereas in this case, r will have a value after acts that shouldn't have been possible, due to the explicit synchronisation code (again, happy debugging) ..
I should have said that if 'acts' blocks, then the transformation is invalid. When I say "acts doesn't modify r", I mean to include all ways of modifying r, including synchronising with another thread, or calling an unknown function.
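[The synchronisation scenario Claus describes, and the blocking case Simon excludes, can be sketched as follows; an illustrative example assuming MVar-based signalling, with the hypothetical name `blockingActs`.]

```haskell
import Data.IORef
import Control.Concurrent
import Control.Concurrent.MVar

-- Here 'acts' blocks until another thread has modified r, so
-- forwarding the written value (42) to the final read would
-- change the observable result: the program must see 99.
blockingActs :: IO Int
blockingActs = do
  r    <- newIORef 0
  done <- newEmptyMVar
  writeIORef r 42
  _ <- forkIO $ do
         writeIORef r 99   -- the other thread modifies r ...
         putMVar done ()   -- ... and then signals completion
  takeMVar done            -- 'acts': blocks on the other thread
  readIORef r              -- must observe 99; eliminating this read yields 42
```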
of course, you could try to infer non-sequential interference with r resulting from acts, but that is what Malcolm pointed out - you have to take the full semantics into account when doing such transformations (btw, java originally made a mess of this - hope that has been fixed by now).
I don't think so. Malcolm asserted that the transformation was invalid in a multi-threaded implementation; I disagree - it's just as valid in a multi-threaded implementation as a single-threaded one. You don't have to preserve non-deterministic interactions with other threads, because they're non-deterministic! Cheers, Simon

"Simon Marlow" wrote:
I should have said that if 'acts' blocks, then the transformation is invalid.
Well that is exactly what I was assuming when I said that the transformation is invalid. In the general case, for some arbitrary actions between the write and the read (excluding another write of course), there is no guarantee that the IORef remains unmodified. If you want to restrict the intermediate actions to be non-blocking, such that another thread cannot run, then that is an extra (and significant) proof obligation. And AFAICS your non-blocking argument only applies to co-operative scheduling. If pre-emption is permitted (e.g. interrupt-handling), then all bets are off, because an arbitrary thread could write to the IORef at any moment.
I don't think so. Malcolm asserted that the transformation was invalid in a multi-threaded implementation; I disagree - it's just as valid in a multi-threaded implementation as a single-threaded one.
I think what I said was correct in general. You need quite a lot of side-conditions to assert the opposite. Regards, Malcolm

The following paper seems relevant to this thread. Although it's written
in the context of C and C++, it's relevant to any language that combines
pre-emptive threads and imperative features.
http://www.hpl.hp.com/techreports/2004/HPL-2004-209.pdf
Tony.
--
f.a.n.finch

On Wed, Dec 07, 2005 at 04:45:52PM +0000, Tony Finch wrote:
The following paper seems relevant to this thread. Although it's written in the context of C and C++, it's relevant to any language that combines pre-emptive threads and imperative features.
I haven't read the whole paper yet, but I am not sure it applies to Concurrent Haskell. The paper is about introducing concurrency as a library to an imperative language. It seems to assume that the threads will communicate through ordinary variables modified in critical sections. But in Concurrent Haskell we don't modify ordinary variables (mutable variables don't exist in Haskell 98), but those supplied by the library.

Citing the "Concurrent Haskell" paper:

  We give a semantics for Concurrent Haskell that is clearly stratified into a deterministic layer and a concurrency layer (Section 6). Existing reasoning techniques can be retained unmodified; for example, program transformations that preserve the correctness of a sequential Haskell program also preserve correctness of a Concurrent Haskell program. This is an unusual feature: more commonly, the non-determinism that arises from concurrency pervades the entire language.

But can Concurrent Haskell be seen as a library?

Best regards
Tomasz

--
I am searching for a programmer who is good at least in some of [Haskell, ML, C++, Linux, FreeBSD, math] for work in Warsaw, Poland

there seem to be two issues here - can we agree on that at least?

1) scheduling non-sequential programs on sequential processors

i wasn't arguing that the scheduler should realise all possible interleavings at once. the issue i was referring to is known as "fairness" in the concurrent systems literature. as with referential transparency, various non-equivalent definitions are in use, but one informal description might be something like: if, for a given experiment, some event is possible according to the semantics, it should happen eventually if the experiment is repeated often enough. see, e.g., http://research.microsoft.com/users/lamport/pubs/pubs.html#lamport-fairness

otherwise - if the event cannot happen no matter how often the experiment is repeated, one could hardly call it "possible"?

scheduling is usually more difficult to implement with fairness guarantees than without, but reasoning about programs is more difficult without such guarantees. i wouldn't expect every concurrent haskell implementation to succeed in guaranteeing fairness, but i would expect the semantics to do so, and would consider any implementation failing in this to be incomplete (perhaps necessarily so, for well-documented pragmatic reasons).

2) compiler optimisation validity in sequential and non-sequential environments

the original motivation of this thread - are sequential transformations still valid in a non-sequential setting? in general, the answer is no, to the surprise/annoyance of many compiler implementors who want their valuable optimisations to work even when their sequential language acquires threads. this is in fact so annoying that some languages, notably Java, have tried to specify the language semantics in such a way that some level of sequential optimisations would still be valid in spite of the language having threads - the search keyword here is "memory model".
as I mentioned, they messed up the first time round, and have been through a lengthy re-design phase that involved *changes to the semantics*. I haven't followed that process - the original Java language spec of concurrency and memory model was sufficient to drive me away from the language for good (fingers crossed..). see, e.g.:

http://www.cs.umd.edu/~pugh/java/memoryModel/
http://www.cs.umd.edu/~pugh/java/memoryModel/jsr-133-faq.html

[concurrent haskell doesn't seem to have this kind of memory hierarchy, but when you're suggesting to transform single threads based on local knowledge of shared variables, you implicitly assume a hierarchy, and a weak memory model rather than a strong one - a memory model is one way of specifying the conditions under which reordering and other transformations remain valid in a non-sequential setting]

i'd really, really prefer concurrent haskell not to go down a route in which demands of simpler implementation lead to subtle problems in reasoning about programs.

cheers, claus
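[The informal fairness description in point (1) above can be made concrete with a small sketch; illustrative only, and `possiblyOne` is a made-up name.]

```haskell
import Data.IORef
import Control.Concurrent

-- The semantics says "r becomes 1" is a possible event here, so
-- under a fair scheduler it should be observed eventually if the
-- experiment is repeated often enough; a scheduler that can never
-- produce it would make the event "possible" in name only.
possiblyOne :: IO Int
possiblyOne = do
  r <- newIORef (0 :: Int)
  _ <- forkIO (writeIORef r 1)  -- possible event: r is set to 1
  yield                         -- invite (but don't force) a context switch
  readIORef r                   -- the semantics allows both 0 and 1
```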
participants (5)
- Claus Reinke
- Malcolm Wallace
- Simon Marlow
- Tomasz Zielonka
- Tony Finch