
There is also the problem of error behavior: if we unfold an "or" eagerly to exploit its data parallelism, we may uncover errors even if the list is finite:

    or [True, error "This should never be evaluated"]   -- lazily: True
My current work includes (among other things) ways to eliminate this problem: we can do a computation eagerly and then defer or discard any errors it raises.
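To make the idea concrete, here is a hedged sketch (the name eagerOr and the use of Control.Exception are my own, and it is written sequentially rather than with a real parallel strategy): every element of the list is evaluated eagerly, any errors are captured as values, and they are discarded whenever some other element already yields True.

    import Control.Exception (ErrorCall, evaluate, throwIO, try)

    -- Evaluate all elements eagerly, capturing calls to 'error' as values
    -- instead of letting them abort the whole computation.
    eagerOr :: [Bool] -> IO Bool
    eagerOr xs = do
      results <- mapM (try . evaluate) xs :: IO [Either ErrorCall Bool]
      let oks  = [b | Right b <- results]
          errs = [e | Left  e <- results]
      if or oks
        then return True       -- the uncovered errors were never needed: discard them
        else case errs of
          (e:_) -> throwIO e   -- an error value turned out to be needed: raise it
          []    -> return False

With this sketch, eagerOr [True, error "This should never be evaluated"] returns True, while eagerOr [False, error "boom"] re-raises the deferred error.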
What you basically have to do is to treat purely data-dependent errors (like division by zero, or indexing an array out of bounds) as values rather than events. Then you can decide whether to raise the error or discard it, depending on whether the error value turned out to be needed or not. You will have to extend the operators of the language to deal also with error values. Basically, the error values should have algebraic properties similar to bottom (so strict functions return error given an error as argument).

Beware that some decisions have to be taken regarding how error values should interact with bottom. (For instance, should we have error + bottom = error or error + bottom = bottom?) The choice affects which evaluation strategies will be possible.

Björn Lisper
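A minimal sketch of what Lisper describes, with hypothetical names of my own (Val, Ok, Err, plusV, divV, force): data-dependent errors become ordinary values, strict operators propagate them much as they propagate bottom, and the error is raised only if its value is actually demanded. The clause ordering in plusV also shows one possible answer to the error-plus-bottom question.

    -- Data-dependent errors as values rather than events.
    data Val a = Ok a | Err String
      deriving Show

    -- Strict addition, lifted to error values. With this clause order,
    -- Err + bottom = Err but bottom + Err = bottom: exactly the kind of
    -- design decision Lisper points out.
    plusV :: Val Int -> Val Int -> Val Int
    plusV (Err e) _       = Err e
    plusV _       (Err e) = Err e
    plusV (Ok x)  (Ok y)  = Ok (x + y)

    -- Division by zero produces an error value instead of raising an event.
    divV :: Val Int -> Val Int -> Val Int
    divV (Err e) _       = Err e
    divV _       (Err e) = Err e
    divV _       (Ok 0)  = Err "division by zero"
    divV (Ok x)  (Ok y)  = Ok (x `div` y)

    -- Only when the result is demanded do we decide to raise or discard.
    force :: Val a -> a
    force (Ok x)  = x
    force (Err e) = error e   -- raise only if the error value was needed

For example, force (plusV (Ok 1) (divV (Ok 1) (Ok 0))) raises the division error, but a computation that never forces that subresult simply discards it.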