patch applied (ghc): Work stealing for sparks
simonmarhaskell at gmail.com
Thu Oct 23 04:58:02 EDT 2008
Mon Sep 15 06:28:46 PDT 2008 berthold at mathematik.uni-marburg.de
* Work stealing for sparks
Spark stealing support for PARALLEL_HASKELL and THREADED_RTS versions of the RTS.
Spark pools are per capability, separately allocated and held in the Capability
structure. The implementation uses Double-Ended Queues (deque) and cas-protected access.
The write end of the queue (position bottom) can only be used with
mutual exclusion, i.e. by exactly one caller at a time.
Multiple readers can steal()/findSpark() from the read end
(position top), and are synchronised without a lock, based on a cas
of the top position. One reader wins, the others return NULL for a failed steal.
Work stealing is called when a Capability finds no other work (inside yieldCapability),
and tries all capabilities 0..n-1 twice, unless a steal succeeds.
Inside schedulePushWork, all considered capabilities (those which were idle and could
be grabbed) are woken up. Future versions should wake up capabilities immediately when
a new spark is put in the local pool, from newSpark().
The patch has been re-recorded due to conflicting bugfixes in Sparks.c, also fixing a
(strange) conflict in the scheduler.
M ./includes/Regs.h -95 +1
M ./includes/RtsTypes.h +34
M ./rts/Capability.c -3 +54
M ./rts/Capability.h +4
M ./rts/Schedule.c -37 +73
M ./rts/Sparks.c -108 +409
M ./rts/Sparks.h -27 +42