
Maybe this is a different topic, but exploring concurrency in Haskell is definitely on my "to do" list, though it's really a bit of a puzzle. One thing I've been thinking lately is that in functional programming the process is really the wrong abstraction (computation is reduction, not a sequence of steps performed in temporal order). But what is concurrency if there are no processes to "run" concurrently? I've been thinking about action systems and non-determinism, but am unsure how the pieces really fit together.
Concurrency in Haskell is handled by forking the execution of IO actions, which are indeed sequences of steps to be performed in temporal order. There are some elegant constructions in that direction, not least of which is STM, a system for transactional thread communication, on top of which one can implement channels and various other concurrency abstractions.

STM lets one insist that certain restricted kinds of actions relating to thread communication (those in the STM monad) occur atomically with respect to other threads. These transactions can create, read and write a special kind of mutable variable called a TVar (transactional variable). They can also `retry`: ask to block the thread they are in and be re-run later, when one of the TVars they read has changed. There is additionally an operator `orElse`: if a and b are STM transactions, then (a `orElse` b) is a transaction which executes a, and if a retries, attempts b instead; if b also retries, the whole combined transaction retries. The first transaction not to retry gets to return its value. The operator `orElse` is associative, and has `retry` as an identity. The paper describing STM in more detail is here: http://research.microsoft.com/Users/simonpj/papers/stm/index.htm

Perhaps more closely related to what you were thinking about: with Parallel Haskell there's a mechanism for parallel computation, `par`, used in a similar fashion to `seq`. When the expression x `par` (y `seq` (x + y)) is evaluated, it first sparks a parallel task for evaluating x (up to the top-level data constructor), while in the main task y `seq` (x + y) is computed: the evaluation of y proceeds, the task potentially blocks until x finishes being computed, and then the sum is returned. Sparking x creates the potential for x to be computed in another thread on a separate processor if one becomes idle; if that doesn't happen, x will simply be computed on the same processor as y when it is needed for the sum.
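To make the STM part concrete, here is a minimal sketch using the `stm` package (shipped with GHC). The bank-account transfer is my own illustrative example, not something from the STM paper itself; it shows `readTVar`/`writeTVar` inside a transaction, `retry` blocking until a TVar changes, and `orElse` providing a fallback:

```haskell
import Control.Concurrent.STM

-- An account is just a transactional variable holding a balance.
type Account = TVar Int

-- Atomically move 'amount' from one account to another.
-- If funds are insufficient, 'retry' blocks this transaction;
-- it is re-run when a TVar it has read is written by another thread.
transfer :: Account -> Account -> Int -> STM ()
transfer from to amount = do
  balance <- readTVar from
  if balance < amount
    then retry
    else do
      writeTVar from (balance - amount)
      writeTVar to   (balance + amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  -- 'orElse' gives a fallback: if the transfer would retry,
  -- do nothing instead of blocking.
  atomically (transfer a b 30 `orElse` return ())
  readTVarIO a >>= print   -- prints 70
```

Composability is the point here: `transfer` is an ordinary STM value, so callers can combine it with other transactions using `>>` and `orElse` and the whole composition still commits atomically.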
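And the `par` idiom above can be sketched as a complete program using the `parallel` package (the naive `fib` is just a stand-in for any expensive pure computation):

```haskell
import Control.Parallel (par)

-- Deliberately naive Fibonacci, as a placeholder expensive computation.
fib :: Int -> Integer
fib n | n < 2     = fromIntegral n
      | otherwise = fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  let x = fib 30
      y = fib 31
      -- Spark x for possible evaluation on another capability,
      -- evaluate y in this task, then demand both for the sum.
      s = x `par` (y `seq` (x + y))
  print s   -- prints 2178309
```

Note that `par` only records a spark; the program is deterministic either way, and without `-threaded` (and an `-N` RTS option) the spark is simply evaluated in the main task when the sum demands it.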
Hope this is somewhat interesting and perhaps somewhat answers your question :) - Cale