
Bjorn Lisper replies to my reply:
My current work includes [among other things] ways to eliminate this problem---that is, we may do a computation eagerly and defer or discard any errors.
What you basically have to do is to treat purely data-dependent errors (like division by zero, or indexing an array out of bounds) as values rather than events.
Indeed. We can have a class of deferred exception values similar to IEEE NaNs. [later]:
Beware that some decisions have to be taken regarding how error values should interact with bottom. (For instance, should we have error + bottom = error or error + bottom = bottom?) The choice affects which evaluation strategies will be possible.
Actually, as far as I can tell we have absolute freedom in this respect. What happens when you run the following little program?

\begin{code}
forever x = forever x

bottomInt :: Int
bottomInt = error "Evaluating bottom is naughty" + forever ()

main = print bottomInt
\end{code}

I don't know of anything in the Haskell language spec that forces us to choose whether to signal the error or diverge in this case (though it's clear we must do one or the other). Putting strong constraints on evaluation order would cripple a lot of the worker/wrapper-style optimizations that (e.g.) GHC users depend on for fast code. We want the freedom to demand strict arguments as early as possible; the consequence is that we treat all bottoms equally, even if they exhibit different behavior in practice. This simplification is part of the price of "clean equational semantics", and one I'm more than willing to pay.

-Jan-Willem Maessen
jmaessen@mit.edu

[PS - I've deliberately dodged the issue of program-level exception handling mechanisms and the like.]
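As a concrete illustration of the "deferred exception values similar to IEEE NaNs" idea quoted above, here is a minimal sketch, assuming a made-up Deferred type; none of these names come from the thread or from any existing library.

\begin{code}
-- Hypothetical sketch: purely data-dependent errors as values, not events.
data Deferred a
  = Ok a          -- an ordinary result
  | Err String    -- a deferred error, carrying a diagnostic
  deriving Show

-- Addition propagates deferred errors instead of raising them.
dAdd :: Deferred Int -> Deferred Int -> Deferred Int
dAdd (Err e) _       = Err e          -- never inspects its second argument
dAdd _       (Err e) = Err e
dAdd (Ok x)  (Ok y)  = Ok (x + y)

-- Division by zero becomes a deferred error value; it only matters if the
-- result is eventually demanded and inspected.
dDiv :: Deferred Int -> Deferred Int -> Deferred Int
dDiv (Err e) _       = Err e
dDiv _       (Err e) = Err e
dDiv _       (Ok 0)  = Err "division by zero"
dDiv (Ok x)  (Ok y)  = Ok (x `div` y)

-- Examples (e.g. in GHCi):
--   dAdd (Ok 1) (dDiv (Ok 1) (Ok 0))  ==>  Err "division by zero"
--   dAdd (Ok 1) (Ok 2)                ==>  Ok 3
\end{code}

Note that with this clause order dAdd (Err e) x never forces x, so dAdd (Err e) bottom is Err e rather than bottom; whether that is the behaviour one wants is exactly the error-versus-bottom question the rest of the thread turns on.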

From: Jan-Willem Maessen
Bjorn Lisper replies to my reply:
My current work includes [among other things] ways to eliminate this problem---that is, we may do a computation eagerly and defer or discard any errors.
What you basically have to do is to treat purely data-dependent errors (like division by zero, or indexing an array out of bounds) as values rather than events.
Indeed. We can have a class of deferred exception values similar to IEEE NaNs.
[later]:
Beware that some decisions have to be taken regarding how error values should interact with bottom. (For instance, should we have error + bottom = error or error + bottom = bottom?) The choice affects which evaluation strategies will be possible.
Actually, as far as I can tell we have absolute freedom in this respect. What happens when you run the following little program?
I don't think we have absolute freedom. Assuming we want

    \forall s : bottom \le s

including s = error, then we should also have error \not\le bottom. For all other values s \not\equiv bottom we would want error \le s:

     .    .    .    .    .
      \    \   |   /    /
            error
              |
            bottom

Now if f is a strict function returning a strict function, then

    (f bottom) error \equiv bottom error \equiv bottom

and due to f's strictness either f error \equiv bottom or f error \equiv error. The former is as above. For the latter (assuming monotonicity) we have

    error \le 1 \implies f error \le f 1

and thus

    (f error) bottom \le (f 1) bottom \equiv bottom

On the other hand, if error and the other data values are incomparable:

     .    .    .    .    error
      \    \   |   /    /
             bottom

and you want, say, error + bottom \equiv error, then + can no longer be strict in its second argument.... So I'd say error + bottom \equiv bottom and bottom + error \equiv bottom.
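The trade-off can be made concrete with a small executable model of the domain, with bottom represented as an explicit constructor; this models the semantic domain rather than real Haskell values, and all names below are made up for illustration.

\begin{code}
data D = Bottom | ErrorD | Val Int
  deriving (Eq, Show)

-- Ordering A: bottom <= error <= every proper value (proper values pairwise
-- incomparable).
leqA :: D -> D -> Bool
leqA Bottom  _       = True
leqA ErrorD  Bottom  = False
leqA ErrorD  _       = True
leqA (Val m) (Val n) = m == n
leqA (Val _) _       = False

-- A plus that is strict in both arguments: Bottom on either side gives Bottom.
-- It is monotone w.r.t. leqA, and necessarily plusStrict ErrorD Bottom = Bottom.
plusStrict :: D -> D -> D
plusStrict Bottom  _       = Bottom
plusStrict _       Bottom  = Bottom
plusStrict ErrorD  _       = ErrorD
plusStrict _       ErrorD  = ErrorD
plusStrict (Val m) (Val n) = Val (m + n)

-- A plus that instead yields error + bottom = error: it never inspects its
-- second argument when the first is an error, i.e. it is not strict in its
-- second argument.  It is not even monotone w.r.t. leqA, since
--   leqA ErrorD (Val 1) = True, but
--   plusErr ErrorD Bottom = ErrorD is not leqA-below plusErr (Val 1) Bottom = Bottom.
-- Only under the second ordering, with ErrorD incomparable to the proper
-- values, does it become monotone.
plusErr :: D -> D -> D
plusErr ErrorD  _       = ErrorD
plusErr Bottom  _       = Bottom
plusErr _       Bottom  = Bottom
plusErr _       ErrorD  = ErrorD
plusErr (Val m) (Val n) = Val (m + n)
\end{code}

Comparing plusStrict ErrorD Bottom with plusErr ErrorD Bottom in GHCi makes the choice between the two equations concrete.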
\begin{code}
forever x = forever x

bottomInt :: Int
bottomInt = error "Evaluating bottom is naughty" + forever ()

main = print bottomInt
\end{code}
I don't know of anything in the Haskell language spec that forces us to choose whether to signal the error or diverge in this case (though it's clear we must do one or the other). Putting strong constraints on evaluation order would cripple a lot of the worker/wrapper-style optimizations that (e.g.) GHC users depend on for fast code. We want the freedom to demand strict arguments as early as possible; the consequence is that we treat all bottoms equally, even if they exhibit different behavior in practice. This simplification is part of the price of "clean equational semantics", and one I'm more than willing to pay.
If error \equiv bottom and you extend, say, Int with NaNs, how do you implement arithmetic such that Infinity + Infinity \equiv Infinity and Infinity/Infinity \equiv Invalid Operation?

Marko
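One hedged way to make the question concrete: extend Int with IEEE-style special values as ordinary constructors. The type ExtInt and its deliberately simplified rules below are illustrative only (they are not Marko's, and they ignore signed zeros and the finer IEEE rules); the point is that the Invalid Operation result has to be an inspectable value distinct from bottom, otherwise nothing like NaN propagation (NaN + x \equiv NaN) could be implemented.

\begin{code}
data ExtInt = Finite Integer | PlusInf | MinusInf | NaN
  deriving (Eq, Show)

-- NaN propagates through addition; Infinity + Infinity = Infinity;
-- opposite infinities give NaN (invalid operation).
addE :: ExtInt -> ExtInt -> ExtInt
addE NaN        _          = NaN
addE _          NaN        = NaN
addE PlusInf    MinusInf   = NaN
addE MinusInf   PlusInf    = NaN
addE PlusInf    _          = PlusInf
addE _          PlusInf    = PlusInf
addE MinusInf   _          = MinusInf
addE _          MinusInf   = MinusInf
addE (Finite x) (Finite y) = Finite (x + y)

-- Infinity / Infinity = NaN (invalid operation); other cases simplified.
divE :: ExtInt -> ExtInt -> ExtInt
divE NaN        _          = NaN
divE _          NaN        = NaN
divE PlusInf    PlusInf    = NaN
divE PlusInf    MinusInf   = NaN
divE MinusInf   PlusInf    = NaN
divE MinusInf   MinusInf   = NaN
divE PlusInf    (Finite _) = PlusInf
divE MinusInf   (Finite _) = MinusInf
divE (Finite _) PlusInf    = Finite 0
divE (Finite _) MinusInf   = Finite 0
divE _          (Finite 0) = NaN        -- simplification: finite / 0 treated as invalid
divE (Finite x) (Finite y) = Finite (x `div` y)

-- Examples:
--   addE PlusInf PlusInf   ==>  PlusInf
--   divE PlusInf PlusInf   ==>  NaN
--   addE NaN (Finite 1)    ==>  NaN   (possible only because NaN is a value)
\end{code}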

On 26-Jan-2001, Marko Schuetz wrote:
I don't think we have absolute freedom. Assuming we want
\forall s : bottom \le s
including s = error, then we should also have error \not\le bottom.
You lost me here. Why should we have error \not\le bottom?
Why not just error \not\lt bottom?
--
Fergus Henderson

From: Fergus Henderson
On 26-Jan-2001, Marko Schuetz wrote:
I don't think we have absolute freedom. Assuming we want
\forall s : bottom \le s
including s = error, then we should also have error \not\le bottom.
You lost me here. Why should we have error \not\le bottom? Why not just error \not\lt bottom?
I assumed a semantic distinction between error and bottom was intended, in order to accurately model the way an implementation would distinguish or defer the erroneous computation. You are right that without this assumption error \not\lt bottom would suffice.

Marko
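Spelling out the step behind this exchange (a gloss, not part of the original messages): given the standing assumption that bottom lies below everything, the two conditions differ only in whether error and bottom may coincide.

    bottom \le error                                                         (instance of \forall s : bottom \le s)
    error \le bottom  and  bottom \le error  \implies  error \equiv bottom   (antisymmetry)
    hence, given bottom \le error:
        error \not\le bottom  \iff  error \not\equiv bottom,
    whereas error \not\lt bottom also holds when error \equiv bottom.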
participants (3)
- Fergus Henderson
- Jan-Willem Maessen
- Marko Schuetz