
On Wednesday 12 November 2008 7:05:02 pm Jonathan Cast wrote:
> I think the point is that randomIO is non-deterministic (technically, pseudo-random) but causal --- the result is completely determined by events that precede its completion. unsafeInterleaveIO, by contrast, is arguably (sometimes) deterministic --- but is, regardless, definitely not causal.
Sure, it's weird if you actually know how it's allegedly making its decisions (predicting the future, but not really). But all you can actually observe is that sometimes it gives (1,1) and sometimes it gives (2,1), and maybe it coincidentally always seems to give (1,1) to f, but to g, which is observationally equal to f (excepting bottoms), it always seems to give (2,1). That's a weird situation, but if you make things opaque enough, I think you can fall back on IO's nondeterminism.

Another weird situation (it seems to me) goes like:

    foo = unsafeInterleaveIO $ fail "foo"

    do x <- foo
       putStrLn "bar"
       return $! x

Here you can't simply fall back to saying "foo throws an exception," because that doesn't prevent "bar" from printing. What actually throws the exception is some later IO action that has no relation to foo beyond using a 'pure' result from it. However, I suppose you can push off such oddities in a similar way: IO actions can throw delayed, asynchronous exceptions, and what do you know, the exception always happens to end up in the action that evaluated x. :)

Of course, explanations like those might be good enough to conclude that unsafeInterleaveIO doesn't break referential transparency, but they make for a very poor operational semantics of IO and the like. "foo throws some delayed asynchronous exception" doesn't tell you when or how, and it doesn't explain how weirdTuple 'nondeterministically' chooses between (1,1) and (2,1). That would probably lead you to another, more operational semantics that brings up how weirdTuple reads the future, or even that mutation happens as a result of evaluating pure values. But the point, I think, is that such a semantics doesn't indicate that referential transparency is being violated (which would be a notion from another sort of semantics, one in which there's no way to notice that something weird is going on).

Anyhow, I'm done rambling. (Hope it didn't go on too long; I know we weren't fundamentally in any kind of disagreement.) :)

Cheers,
-- Dan
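
P.S. Since weirdTuple, f, and g aren't spelled out in this message, here's a sketch of the kind of definitions I have in mind (my own reconstruction for illustration, not necessarily the exact code from earlier in the thread), along with a runnable version of the foo example:

    import Data.IORef
    import System.IO.Unsafe (unsafeInterleaveIO)

    weirdTuple :: IO (Int, Int)
    weirdTuple = do
      r <- newIORef 1
      -- the read is deferred until the first component is demanded
      x <- unsafeInterleaveIO (readIORef r)
      -- the write is deferred until the second component is demanded
      y <- unsafeInterleaveIO (writeIORef r 2 >> return 1)
      return (x, y)

    -- f and g agree on all fully defined tuples; they differ only in
    -- which component they force first.
    f, g :: (Int, Int) -> (Int, Int)
    f (a, b) = a `seq` b `seq` (a, b)   -- forces fst first: sees (1,1)
    g (a, b) = b `seq` a `seq` (a, b)   -- forces snd first: sees (2,1)

    foo :: IO Int
    foo = unsafeInterleaveIO (fail "foo")

    main :: IO ()
    main = do
      weirdTuple >>= print . f   -- prints (1,1)
      weirdTuple >>= print . g   -- prints (2,1)
      x <- foo                   -- no exception here; the fail is deferred
      putStrLn "bar"             -- "bar" still prints
      _ <- return $! x           -- forcing x finally runs the deferred fail
      return ()

Running it prints (1,1), (2,1), and "bar", and only then dies with "user error (foo)" at the point where x is forced -- which is the weirdness I was going on about.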