
On Mon, May 17, 2010 at 19:41, David Leimbach wrote:
> Is there not a way to multiplex the signal handlers into one thread, and then dispatch new threads to do the work when events that require such concurrency occur? That would be the initial way I'd structure such a program.
All signals, method calls, method returns, etc. are read from the socket in a single thread. If some computation needs to be performed, a thread is spawned. There's no "thread pool", and I doubt such a construct would provide any benefit.
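
Roughly, the shape being described is a single receive loop that forks per message. A minimal sketch (Connection, receiveMessage, and dispatch are placeholder names, not the binding's actual API):

    import Control.Concurrent (forkIO)
    import Control.Monad (forever)

    -- Placeholder types standing in for the real binding's internals.
    data Connection = Connection
    data Message = Signal String | MethodCall String | MethodReturn String

    -- Placeholder: read one message off the D-Bus socket.
    receiveMessage :: Connection -> IO Message
    receiveMessage _ = return (Signal "example")

    -- Placeholder: run whatever user callback applies to the message.
    dispatch :: Message -> IO ()
    dispatch _ = return ()

    -- One thread owns the socket; each message gets a fresh thread, so a
    -- slow handler never blocks reception of the next message.
    receiveLoop :: Connection -> IO ()
    receiveLoop conn = forever $ do
      msg <- receiveMessage conn
      forkIO (dispatch msg)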
> In fact, if the results of a computation based on a signal aren't even needed immediately, one could rely on the fact that Haskell is a non-strict language to partially evaluate an expression and get to it later when it's really needed. Haskell has "built-in futures" of a sort. That may not be appropriate depending on the processing at hand, but it's worth noting that it's possible.
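
For illustration, the laziness-as-future idea being suggested looks something like this toy example (not tied to any D-Bus API): binding a pure expression only builds a thunk, and the cost is paid when the value is finally demanded.

    -- 'summary' is only a thunk at the point of the let binding.
    main :: IO ()
    main = do
      let payload = [1 .. 1000000] :: [Int]
          summary = sum (map (* 2) payload)  -- no work happens here
      putStrLn "handling other messages first..."
      print summary                          -- evaluation happens here

Note that the work still runs in whichever thread eventually forces the thunk, which is what the reply below takes issue with.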
I'm not sure what partial evaluation has to do with anything under discussion. If received messages are processed in the same thread, then some long-running computation would block that thread (and, hence, message reception). This happens regardless of when the computation is actually performed relative to where it is declared. Removing the fork from within the signal/method dispatcher would simply force every user to write "forkIO $ ..." everywhere.
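
To make that last point concrete: with the fork stripped out of the dispatcher, any handler doing non-trivial work would have to spawn its own thread to avoid stalling message reception. A sketch, with onSignal and expensiveHandler as hypothetical stand-ins rather than the binding's real API:

    import Control.Concurrent (forkIO)
    import Control.Monad (void)

    -- Hypothetical stand-ins; not the binding's real API.
    data Connection = Connection
    data Message = Message

    onSignal :: Connection -> (Message -> IO ()) -> IO ()
    onSignal _ _ = return ()

    expensiveHandler :: Message -> IO ()
    expensiveHandler _ = return ()

    -- Every handler ends up wrapping its body in forkIO by hand.
    registerHandlers :: Connection -> IO ()
    registerHandlers conn =
      onSignal conn $ \msg ->
        void (forkIO (expensiveHandler msg))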