
Peter Verswyvelen wrote:
> I thought it was impossible to detect a deadlock (and a black hole is something like a deadlock, no?) *before* it occurred, but in this case, we're talking about detecting it *when* it occurs, no? And then raising an error instead of just blocking?
Generally, it's not possible to prevent a deadlock before it occurs; deciding whether a program will deadlock is equivalent to the halting problem. When a deadlock is about to occur, it /is/ possible, in principle, to detect the condition and raise an error. What you would need to do is keep track, for each thread, of everything that thread may depend on. Whenever a thread tries to allocate a resource, you scan the dependency graph and determine whether or not the new allocation is safe. Sometimes you might raise a false alarm, though. Once a deadlock has /already/ occurred, it becomes easier to detect, but detection still depends on being able to determine everything a thread might depend on.

Now in GHC, apparently, when a thread needs a value that hasn't been computed yet, the thread blocks, waiting for that value to be computed. If the Haskell code creates an infinite loop, then a thread may block waiting on itself. The example given was

    f = f

Before we ask for self-blocking detection, let's try a bigger loop, such as

    f = g
    g = f

and suppose that the computations of f and g get assigned to different threads. Self-blocking detection won't help you here; you need some kind of loop-blocking detection. At this point, I would assume that as we take more things into account, the problem of detecting deadlock becomes increasingly complicated and the detection algorithms consume more and more resources. That's why many major operating systems don't attempt deadlock detection.

On the other hand, perhaps deadlock detection is the wrong way to look at this. I think we are casting a net that's too big for the problem we're really trying to solve. A deadlock happens whenever two (or more) threads are blocked on each other, and deadlocks can be extremely hard to detect, especially if they occur intermittently. We as programmers, on the other hand, can detect at design time that f = f is an algebraic loop.
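For what it's worth, GHC's runtime already does a limited, per-thread form of this: when a thread blocks on a thunk that it is itself in the middle of evaluating (a black hole it owns), the runtime throws NonTermination, which prints as <<loop>>. A minimal sketch of observing that (detection is best-effort and can vary with optimization level, so treat this as illustrative, not guaranteed):

```haskell
import Control.Exception (NonTermination, evaluate, try)

-- A self-dependent thunk: forcing it makes the thread
-- block on a black hole it owns.
selfLoop :: Int
selfLoop = selfLoop

main :: IO ()
main = do
  -- evaluate forces the thunk inside IO so try can catch
  -- the NonTermination exception the RTS may throw.
  r <- try (evaluate selfLoop) :: IO (Either NonTermination Int)
  case r of
    Left _  -> putStrLn "caught NonTermination (<<loop>>)"
    Right n -> print n
```

Note that this is exactly the *self*-blocking check: the two-thread version above, with f and g evaluated by different threads, is the case a purely per-thread check can miss.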
We know that some loops are just "tying the knot" and are perfectly fine, while other loops involve genuine self-dependency and evaluate to bottom:

    fib = 1 : 1 : zipWith (+) fib (tail fib)  -- tying the knot
    foo = 1 : zipWith (+) foo (tail foo)      -- algebraic loop

Looking closer at foo: if we let x = (foo !! 1), then we have the equation x = 1 + x, so x = bottom.

To detect an algebraic loop, we would need to express a computation as a dependency graph and then look for cycles in that graph. As long as we are looking at a /pure/ computation, my intuition tells me that in many cases it should be possible, through static analysis, to build the dependency graph and check for cycles. Moreover, my intuition tells me that the static analysis involved in building these dependency graphs might already be available as part of an existing optimizer. Even if we don't detect an algebraic loop at compile time, I think we could definitely detect it at run time, given enough computational resources.

-- Ron
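P.S. The difference between the two loops is easy to see by running them (the bottom element in foo is foo !! 1, exactly as the equation x = 1 + x predicts; foo's head is still well-defined):

```haskell
-- Productive knot-tying: element n depends only on elements < n.
fib :: [Integer]
fib = 1 : 1 : zipWith (+) fib (tail fib)

-- Algebraic loop: foo !! 1 = foo !! 0 + foo !! 1 = 1 + foo !! 1,
-- i.e. x = 1 + x, so foo !! 1 is bottom.
foo :: [Integer]
foo = 1 : zipWith (+) foo (tail foo)

main :: IO ()
main = do
  print (take 6 fib)  -- [1,1,2,3,5,8]
  print (head foo)    -- 1: the head is fine...
  -- print (foo !! 1) -- ...but this never terminates
  --                     (or raises <<loop>>)
```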