ANN: priority-sync-0.1.0.1: Cooperative task prioritization.


Christopher Lane Hinson wrote:
http://hackage.haskell.org/cgi-bin/hackage-scripts/package/priority-sync
$ cabal install priority-sync
git clone http://www.downstairspeople.org/git/priority-sync.git
Feedback will be greatly appreciated. This package is a spin-off from my work on roguestar, where I need to do significant background processing while retaining enough resources to perform smooth animation.
The following is the front-page documentation for the package.
In a simple use case, we want to run some expensive tasks in prioritized order, so that only one task is running on each CPU (or hardware thread) at any time. For this simple case, four operations are needed: simpleTaskPool, schedule, claim, and startQueue.
    let expensiveTask = threadDelay 1000000
    pool <- simpleTaskPool
    forkIO $ claim Acquire (schedule pool 1) $
        putStrLn "Task 1 started . . ." >> expensiveTask >> putStrLn "Task 1 completed."
    forkIO $ claim Acquire (schedule pool 3) $
        putStrLn "Task 3 started . . ." >> expensiveTask >> putStrLn "Task 3 completed."
    forkIO $ claim Acquire (schedule pool 2) $
        putStrLn "Task 2 started . . ." >> expensiveTask >> putStrLn "Task 2 completed."
    threadDelay 100000   -- contrive to wait for all tasks to become enqueued
    putStrLn "Starting pool: "
    startQueue pool
    threadDelay 4000000  -- contrive to wait for all tasks to become dequeued
A TaskPool combines Rooms and Queues into an efficient, easy-to-use interface.
Rooms provide fully reentrant synchronization to any number of threads, based on arbitrary resource constraints. For example, the Room from a simpleTaskPool is constrained by GHC.Conc.numCapabilities.
Queues provide task prioritization. A Queue systematically examines (to a configurable depth) all waiting threads with their priorities and resource constraints and wakes the most eagerly prioritized thread whose constraints can be satisfied.
TaskPools are not thread pools. The concept is similar to IO Completion Ports. There are no worker threads. If a number of threads are waiting, the thread that is most likely to be processed next is woken and temporarily serves as a working thread.
Rooms, Queues, and TaskPools are backed by carefully written STM (software transactional memory) transactions.
A salient feature is that, because any thread can participate, a TaskPool supports both bound threads and threads created with forkOnIO.
Friendly, --Lane

Is 'claim' the only way to execute tasks?
Let's say you create a task pool for 1 hardware thread.
pool <- newTaskPool fast_queue_configuration 1 ()
If a task blocks/sleeps while holding a claim, none of the other tasks can run, right?
Correct, but you can wrap the blocking call inside claim Release {- . . . -} to release the claim temporarily.
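For example, a rough sketch (not taken from the package documentation): it assumes that claim Release accepts the same handle as claim Acquire and re-acquires the Room once the wrapped action returns, and blockingCall is an illustrative name.

    -- Sketch only: 'blockingCall' stands for any long blocking IO action.
    -- Assumes 'claim Release' on the same handle temporarily gives up the
    -- Room while the wrapped action runs, then re-acquires it.
    forkIO $ claim Acquire (schedule pool 1) $ do
        putStrLn "holding the Room: CPU-bound work"
        claim Release (schedule pool 1) $
            blockingCall              -- the Room is free for other tasks here
        putStrLn "Room re-acquired: more CPU-bound work"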
Is it possible to create a task pool for 2 hardware threads, with one task dominating one CPU (a render thread) and the other tasks multiplexing IO on the other CPU __without stalling each other when blocked__? What happens if you release the room claim before blocking in IO? Does the thread get scheduled on a random CPU?
priority-sync only controls access to abstract Rooms that, in the motivating example, represent the resource limitation of the number of hardware threads. Tasks run inside their calling thread, on a CPU dictated by the RTS, so just use forkOnIO to create a thread that is locked to a specific capability if that's really what you want. priority-sync won't be aware that you're doing this.

In your case, you may want to allow the rendering thread to run outside of the task pool, and create a room of size (max 1 $ numCapabilities - 1) for all of the worker threads. Then wrap all relevant blocking calls as described above (if this is prohibitively cumbersome, please advise me).

Another possibility, if the rendering thread is from something like a GLUT callback, is to have the rendering thread claim the Room using the Unconstrained context. Then you will likely end up with more threads claiming the Room than the size of the Room, but the rendering thread will never be made to wait, and the Room will continue to block the worker threads until it has returned to within its configured constraints.
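A rough sketch of the first arrangement, illustrative only: it reuses the newTaskPool / fast_queue_configuration form quoted above, and renderLoop and workerTask are placeholder actions.

    -- Sketch: the render thread is pinned to capability 0 with forkOnIO and
    -- never enters the pool; worker tasks share a Room sized to the
    -- remaining capabilities.
    import Control.Concurrent (forkIO, threadDelay)
    import GHC.Conc (forkOnIO, numCapabilities)
    -- plus the priority-sync imports (newTaskPool, fast_queue_configuration,
    -- schedule, claim, startQueue, Acquire)

    main :: IO ()
    main = do
        -- workers share at most (numCapabilities - 1) hardware threads
        pool <- newTaskPool fast_queue_configuration (max 1 $ numCapabilities - 1) ()
        _ <- forkOnIO 0 renderLoop       -- render thread, pinned, outside the pool
        _ <- forkIO $ claim Acquire (schedule pool 1) (workerTask 1)
        _ <- forkIO $ claim Acquire (schedule pool 2) (workerTask 2)
        threadDelay 100000               -- contrive to wait for the workers to enqueue
        startQueue pool
        threadDelay 5000000              -- keep the demo alive while tasks drain

    -- Placeholder render loop: sleeps ~16 ms per "frame".
    renderLoop :: IO ()
    renderLoop = threadDelay 16000 >> renderLoop

    -- Placeholder worker: any blocking calls inside it should be wrapped in
    -- claim Release as described above.
    workerTask :: Int -> IO ()
    workerTask n = putStrLn ("worker " ++ show n ++ " running") >> threadDelay 1000000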
Friendly, --Lane

participants (2)
- Christopher Lane Hinson
- Neal Alexander