
On Mar 12, 2008, at 11:23 PM, Luke Palmer wrote:
> The issue is that exception handling semantics do induce an order of evaluation for determinacy: if both functions in a composition throw an exception at some point (say in the 3rd element of a list they're generating), you need to decide which exception ends up being thrown, and to do that based on the lazy evaluation order would break referential transparency.
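
[A minimal sketch of the problem described above, assuming GHC and Control.Exception; the example and its strings are illustrative, not from the quoted message. A pure expression that can fail in two ways surfaces whichever exception the evaluation order happens to reach first:

    import Control.Exception

    -- Both operands of (+) throw.  Which message "try" catches is not
    -- fixed by the language; it depends on the evaluation order the
    -- compiler happens to choose for the sum.
    main :: IO ()
    main = do
        r <- try (evaluate (error "first" + error "second" :: Int))
        case r of
            Left (ErrorCall msg) -> putStrLn ("caught: " ++ msg)
            Right v              -> print v

If the observable exception depended on that order, two expressions that denote the same value could behave differently, which is the loss of referential transparency being described.]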
OK, that is what I was getting several years ago when I brought this up, but ... check my reasoning on this.

Maybe a contrived example of what you're talking about: if I write

    f _ [] = []
    f n (a:x) = (div 3 n, j a) : f (n - 1) x
        where j (Just v) = v

    main = print $ f 2 [Just 1, Just 2, Nothing]

Running it, I'll hear about the divide by zero on the 3rd element, and not about the Just match failure on that same element.

Suppose you model exceptions with a data type X that implicitly describes all expressions in Haskell:

    data X a = Good a | Bad String

such that a partial explicit expansion of the above is

    f _ [] = []
    f n (a:x) = (div' 3 n, j a) : f (n - 1) x
        where j (Just v) = Good v
              j Nothing  = Bad "X.hs:15:0-27: Non-exhaustive patterns in function j"

    div' a 0 = Bad "divide by zero"
    div' a b = Good (div a b)

The expanded result of (f 2 [Just 1, Just 2, Nothing]) will be

    [(Good 1, Good 1), (Good 3, Good 2),
     (Bad "divide by zero",
      Bad "X.hs:15:0-27: Non-exhaustive patterns in function j")]

My IO action (print) had to decide to throw "divide by zero", but arbitrary decisions like that are common and don't make the underlying principle unsound.

Am I missing something, or is that a suitable model for pure, non-strict exceptions that could in principle be caught outside IO?

(Of course, I do _not_ propose to write code per the expanded example above - that's only a partial expansion, and already too cumbersome to be useful. It's only a conceptual model.)

	Donn Cave, donn@avvanta.com
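
[For anyone who wants to run the conceptual model, here is a self-contained sketch; the deriving Show, the type signatures, and hoisting j to the top level are additions needed to make it compile and print, and div' stands in for the wrapped division so Prelude's div stays available for the Good case:

    -- A value of type X a is either a good result or a caught failure.
    data X a = Good a | Bad String
        deriving Show

    -- Division whose failure is an ordinary value, not a thrown exception.
    div' :: Int -> Int -> X Int
    div' _ 0 = Bad "divide by zero"
    div' a b = Good (div a b)

    -- The partial pattern match, expanded so the missing case is a value too.
    j :: Maybe Int -> X Int
    j (Just v) = Good v
    j Nothing  = Bad "X.hs:15:0-27: Non-exhaustive patterns in function j"

    f :: Int -> [Maybe Int] -> [(X Int, X Int)]
    f _ []    = []
    f n (a:x) = (div' 3 n, j a) : f (n - 1) x

    main :: IO ()
    main = print (f 2 [Just 1, Just 2, Nothing])

Unlike the original program, this one prints both Bad values for the 3rd element: because failures are ordinary values, no single exception has to be chosen until something actually throws one.]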