
On Sun, Mar 15, 2009 at 1:56 PM, Jonathan Cast wrote:
But not if you switch the (x <- ...) and (y <- ...) parts:
    main = do
      r <- newIORef 0
      v <- unsafeInterleaveIO $ do
             writeIORef r 1
             return 1
      y <- readIORef r
      x <- case f v of
             0 -> return 0
             n -> return (n - 1)
      print y
Now the IORef is read before the case expression has a chance to trigger the write.
But if the compiler is free to do this itself, what guarantee do I have that it won't?
You don't really have any guarantee; the compiler is free to assume that v is a pure integer and that f is a pure function from integers to integers. Therefore, it can assume that the only observable effect of calling f v is non-termination. Note that unsafeInterleaveIO *breaks* this assumption; that is why it is unsafe.

I erred previously in saying that this was allowed only if f is total; it does still evaluate f v either way. But I can correct my argument as follows: the only observable effect of the (x <- ...) line is non-termination, and the compiler can prove that there is *no* observable effect of readIORef until the value is used or the reference is written elsewhere. So it is free to make this reordering anyway, since the only observable effect could have been non-termination, which will be observed immediately afterward.

When you use unsafeInterleaveIO or unsafePerformIO, you are required to prove that their use does not break these invariants; for example, that you don't read or write IORefs that could be accessed elsewhere in the program.

These are proofs that the compiler can and does make in some situations: it can reorder sequential readIORef calls if it thinks one or the other might be more efficient, and it can evaluate foldl as if it were foldl' if it proves the arguments strict enough that the non-termination behavior is identical (ghc -O2 does this, for example).

The language provides these functions as "escape hatches" that allow you to write code that would not otherwise be possible, by shifting more of the proof obligation onto the programmer.

-- ryan
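To make the race concrete: here is a self-contained version of the program above, with a hypothetical f filled in (any pure Int -> Int will do; the original leaves it unspecified). Because the write to the IORef is hidden behind unsafeInterleaveIO, it only happens when (f v) forces v, which is after the read:

```haskell
import Data.IORef
import System.IO.Unsafe (unsafeInterleaveIO)

-- Hypothetical pure function; the compiler may assume the only
-- observable effect of evaluating (f v) is non-termination.
f :: Int -> Int
f x = x + 1

main :: IO ()
main = do
  r <- newIORef (0 :: Int)
  -- The write to r is deferred: it runs only when v is demanded.
  v <- unsafeInterleaveIO $ do
         writeIORef r 1
         return 1
  -- r is read here, before anything has demanded v, so y is 0.
  y <- readIORef r
  -- Forcing (f v) now runs the deferred write, but too late for y.
  _ <- case f v of
         0 -> return 0
         n -> return (n - 1)
  print y   -- prints 0; with the reads swapped it would print 1
```

Swapping the (y <- ...) and (case f v ...) lines changes the output, which is exactly the observable-effect leak that makes unsafeInterleaveIO unsafe.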
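As a sketch of the foldl/foldl' point: for an accumulator function that is strict in both arguments, like (+) on Int, the lazy and strict folds agree on both result and termination behavior, which is what licenses GHC's strictness analysis to evaluate the foldl accumulator eagerly under -O2:

```haskell
import Data.List (foldl')

main :: IO ()
main = do
  -- (+) on Int is strict in both arguments, so the lazy fold's
  -- chain of thunks and the strict fold's eager accumulator
  -- produce the same value and the same termination behavior.
  print (foldl  (+) 0 [1 .. 1000 :: Int])  -- 500500
  print (foldl' (+) 0 [1 .. 1000 :: Int])  -- 500500
```

With a non-strict accumulator function (e.g. one that ignores its first argument), the two folds can differ on non-termination, and the transformation would not be valid.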