Objections to runAtomically

George Russell ger at tzi.de
Thu Oct 17 17:43:29 EDT 2002


George Russell wrote:
> 
> Alastair Reid wrote:
> [snip]
> > It assumes that side effects can be linearized.  Every theory of
> > concurrency I know of makes the same assumption.
> [snip]
> I don't know if the Java Language Specification counts as a "theory of
> concurrency" but you will find if you check that side effects are *not*
> linearised in that language.  This is also true of the Java Virtual
> Machine.  Thus it would be a problem if, say, someone were to compile
> Haskell to JVM.
Having said that, of course JVMs do permit global locking, so you could
implement runAtomically.  The effect would be that while you wouldn't
have a single global ordering of side effects, runAtomically would appear
to work in any of the linear orderings as observed by the separate
threads (or something like that, I'm not digging up the fiendishly
complicated chapter 7 or 8 or whatever it is of the JVM specification
at this time of night).  I suppose I have to concede that runAtomically
is at least implementable.  I still don't think it belongs in the FFI
specification though; as Simon M says, we seem to be able to do everything
anyone can think of using atomicModifyIORef.  It's not that I don't
think locking primitives are a good idea, but runAtomically seems to
be an awfully crude one.  It would also, incidentally, trash any
functions which implicitly rely on, say, worker threads.  For example,
an optimisation algorithm could repeatedly try to improve on the
best solution found so far until interrupted by an Exception from a
worker thread indicating that time was up.  Of course you couldn't write such a
function in NHC, but in GHC such a technique is potentially useful,
and you wouldn't necessarily want to have to tell the user that the
function could not be called from inside runAtomically.
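
For what it's worth, here is a minimal sketch of the kind of update that
atomicModifyIORef already covers on its own (the counter here is purely
illustrative):

   import Data.IORef

   -- Bump a shared counter and return its old value; the update is
   -- atomic without taking any global lock.
   tick :: IORef Int -> IO Int
   tick counter = atomicModifyIORef counter (\n -> (n + 1, n))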

Could we not at least replace the global lock by a local one?
For example
   newLock :: IO Lock
   synchronize :: Lock -> IO a -> IO a
would specify a Java-like mechanism, by which synchronize would delay
the action if necessary so that no two actions were simultaneously
performed on the same lock.  The big question would be, can this be
implemented with Haskell finalizers in NHC?  I think for NHC you would
simply implement Alastair's runAtomically, and wrap synchronize in it.
Then you would know that if, in the middle of "synchronize lock action",
there were another attempt to synchronize on "lock", that attempt could not
come from a finalizer running inside action (since new finalizers are
blocked and finalizers are properly nested), so it must come from action
itself, and so we must deadlock.  Thus for NHC this is mildly more complicated
(and inefficient) than having a global runAtomically.  For Hugs/GHC
we can easily implement a Lock as an MVar ().  The only disadvantage I
can see is that for NHC you are delaying new finalizers for the whole
duration of the synchronize'd action, but of course this is
equally true of runAtomically in NHC.  Where this solution wins is that
you don't have to delay anything in Hugs or GHC except where the user
explicitly requests it.
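
To make the Hugs/GHC case concrete, here is a rough sketch of Lock and
synchronize in terms of an MVar () (not meant as a definitive
implementation):

   import Control.Concurrent.MVar

   -- The MVar is full when the lock is free and empty while some
   -- thread holds it.
   newtype Lock = Lock (MVar ())

   newLock :: IO Lock
   newLock = fmap Lock (newMVar ())

   -- withMVar takes the () before running the action and puts it
   -- back afterwards, restoring the lock even if the action raises
   -- an exception.
   synchronize :: Lock -> IO a -> IO a
   synchronize (Lock m) action = withMVar m (\_ -> action)

For NHC, as described above, one would presumably instead wrap something
like this in Alastair's runAtomically.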


