[Haskell-cafe] Re: Re[5]: Parallel combinator, performance advice

Neil Mitchell ndmitchell at gmail.com
Tue Apr 7 12:36:33 EDT 2009


Hi

> par is likely to spark all the computations, and then switch between
> them - which will mean I've got more than N things running in
> parallel.

| par and the GHC RTS limit the number of Haskell threads running
| simultaneously. with a system call marked as safe, the Capability will
| be freed while we execute the external program, so nothing is limited
| except the number of tasks *starting* (as opposed to running)
| simultaneously :)))

Yeah, I misspoke - I want to avoid starting N things.
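
For concreteness, roughly the shape I'm after is a fixed pool of n
worker threads pulling commands off a Chan, so at most n external
programs are ever started at once. A minimal sketch (not the code
under discussion; System.Process.system just stands in for running
the external program):

import Control.Concurrent
import Control.Monad
import System.Process (system)

-- Run the given shell commands with at most n in flight at once:
-- exactly n worker threads pull commands from a shared Chan, so no
-- more than n external programs are ever started simultaneously.
runBounded :: Int -> [String] -> IO ()
runBounded n cmds = do
    jobs <- newChan
    done <- newQSem 0
    mapM_ (writeChan jobs) cmds
    replicateM_ n $ forkIO $ forever $ do
        cmd <- readChan jobs
        void (system cmd)
        signalQSem done
    -- wait for every command to finish; the workers are simply
    -- abandoned afterwards, which is fine for a one-shot sketch
    replicateM_ (length cmds) (waitQSem done)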

>>> parallel_ (x1:xs) = do
>>>     sem <- newQSem $ 1 - length xs
>>>     forM_ xs $ \x ->
>>>         writeChan queue (x >> signalQSem sem, False)
>>>     x1
>>>     addWorker
>>>     waitQSem sem
>>>     writeChan queue (signalQSem sem, True)
>>>     waitQSem sem
>
>> Neil, executing x1 directly in parallel_ is an incorrect idea.

It's a very slight optimisation, as it saves us queueing and
dequeueing x1, since we know the worker we're about to spawn on the
line below will grab x1 immediately.
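
For readers following along, the fragment above relies on a job
queue and an addWorker function something along these lines (the
exact definitions aren't in this message, so treat this as a
sketch): a shared queue of (action, exit-flag) pairs, and addWorker,
which forks one more worker that runs jobs until it picks one whose
flag is True.

import Control.Concurrent
import Control.Monad
import System.IO.Unsafe (unsafePerformIO)

-- The shared job queue: each entry pairs an action with a flag
-- saying whether the worker that runs it should exit afterwards.
{-# NOINLINE queue #-}
queue :: Chan (IO (), Bool)
queue = unsafePerformIO newChan

-- Fork one more worker thread; it keeps running queued jobs until
-- it picks up a job whose flag is True, then exits.
addWorker :: IO ()
addWorker = void $ forkIO worker
  where
    worker = do
        (act, stop) <- readChan queue
        act
        unless stop worker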

> forget this. but it is still a bit suboptimal: after everything has
> finished, we schedule one more empty job and wait for some worker
> thread to pick it up. that job goes into the Chan behind all the jobs
> scheduled while ours were executing, so what we end up doing here is
> no internal activity at all while all N external programs are running
>
> instead, my solution packed this flag together with the last job, so
> once the last job finishes we return from parallel_ immediately and
> other internal activity can go on

There is no guarantee that the last job finishes last. If the first
job takes longer than the last job, we'll be one thread short while
waiting on the first job. It's a shame, since that additional
writeChan isn't particularly useful in itself.
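
To spell out the failure mode, here is a sketch of that variant
(continuing the hypothetical queue/addWorker sketch above, with the
exit flag packed onto the last queued job; the real proposal may
differ in detail):

-- Variant sketch: the exit flag rides on the last queued job rather
-- than on a separate empty job.
parallelFlagOnLast :: [IO ()] -> IO ()
parallelFlagOnLast []      = return ()
parallelFlagOnLast [x]     = x
parallelFlagOnLast (x1:xs) = do
    sem <- newQSem 0
    let flags = replicate (length xs - 1) False ++ [True]
    forM_ (zip xs flags) $ \(x, stop) ->
        writeChan queue (x >> signalQSem sem, stop)
    x1
    addWorker
    replicateM_ (length xs) (waitQSem sem)
    -- The catch: whichever worker happens to run the flagged job
    -- exits as soon as that job finishes, even though earlier,
    -- longer-running jobs may still be going, so the pool runs one
    -- worker short until they complete.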

Thanks

Neil

