47. Thread safety in the MPS
47.2. Overview
.over: The MPS is expected to run in an environment with multiple threads calling into the MPS. The initial approach is very simple. Some of the code is known to operate with exclusive access to the data it manipulates, so this code is safe. For the rest of the code, shared data structures are locked by the use of a single binary lock (design.mps.lock(0)) per arena. This lock is claimed on entry to the MPS and released on exit from it. So there is at most a single thread (per arena) running “inside” the MPS at a time.
47.3. Requirements
.req.mt: Code must work correctly in the presence of multiple threads all calling into the MPS.
.req.perf: Performance should not be unreasonably hindered.
47.4. Architecture
.arch.arena: Arena Lock: no shared data between arenas.
.arch.global.binary: Global binary lock: protects mutable data shared between arenas – that is, the arena ring, see design.mps.arena.static.ring.lock.
.arch.global.recursive: Global recursive lock: protects static data which must be initialized once – for example, pool classes, see design.mps.protocol.impl.init-lock.
.arch.other: Other: data not shared.
.arch.static: Static data: signatures: shared-non-mutable; always initialized to the same value.
.arena-entry: Each arena has a single lock. Externally visible calls fall into two categories:

- Simple: the arena lock is not held on entry; it is claimed on entry and released on exit.
- Recall: these calls are callable only after a call-back from the MPS, in which case an arena lock is already held.
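The entry discipline above can be sketched as follows. This is an illustrative sketch, not the real MPS API: the structure and function names are invented, and a pthread mutex stands in for the MPS Lock module (design.mps.lock).

```c
#include <pthread.h>
#include <stddef.h>

/* Hypothetical sketch of the per-arena entry discipline. */
typedef struct Arena {
    pthread_mutex_t lock;   /* the single per-arena binary lock */
    size_t committed;       /* example of shared mutable state */
} Arena;

static void ArenaEnter(Arena *arena) { pthread_mutex_lock(&arena->lock); }
static void ArenaLeave(Arena *arena) { pthread_mutex_unlock(&arena->lock); }

/* "Simple" external call: the arena lock is not held on entry,
   so it is claimed on entry and released on exit. */
size_t ArenaCommittedExternal(Arena *arena)
{
    size_t size;
    ArenaEnter(arena);
    size = arena->committed;   /* shared data touched only under the lock */
    ArenaLeave(arena);
    return size;
}

/* "Recall" call: invoked from an MPS call-back, so the arena lock is
   already held and must not be claimed again. */
size_t ArenaCommittedRecall(Arena *arena)
{
    return arena->committed;
}

/* Small driver exercising the simple case. */
size_t demo(void)
{
    Arena a;
    size_t s;
    pthread_mutex_init(&a.lock, NULL);
    a.committed = 4096;
    s = ArenaCommittedExternal(&a);
    pthread_mutex_destroy(&a.lock);
    return s;
}
```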
.interface: The definition of the interface should guarantee safe use of calls (from a locking point of view). For example, a buffer must be exclusive to a thread.
.buffers: The buffer code is designed not to need a lock in the fast case. A lock is only claimed on the exceptional reserve, trip and commit cases (fill and trip?). A buffer contains references to shared data (via pool field). Accessing this shared data must involve a lock.
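The lock-free fast case can be sketched as a private pointer bump. The names here are invented for illustration, not the real MPS buffer API: because each buffer is exclusive to one thread, advancing its private allocation pointer needs no lock; only the exceptional refill path, which touches the shared pool, would claim the arena lock.

```c
#include <stddef.h>

/* Hypothetical buffer with a thread-private allocation region. */
typedef struct Buffer {
    char *alloc;   /* next free byte (private to the owning thread) */
    char *limit;   /* end of the buffered region */
} Buffer;

/* Fast case: enough room, no lock taken. Returns the allocation,
   or NULL in the exceptional case, where the caller must claim the
   arena lock and refill the buffer from the pool. */
void *BufferReserveFast(Buffer *buf, size_t size)
{
    if ((size_t)(buf->limit - buf->alloc) >= size) {
        void *p = buf->alloc;
        buf->alloc += size;
        return p;
    }
    return NULL;
}
```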
.deadlock: A strict ordering is required between the global and arena locks to prevent deadlock. The binary global lock may not be claimed while either the arena or recursive global lock is held; the arena lock may not be claimed while the recursive global lock is held. Each arena lock is independent of all other arena locks; that is, a thread may not attempt to claim more than one arena lock at a time.
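The ordering in .deadlock can be expressed as lock ranks: binary global, then arena, then recursive global, where a lock may only be claimed while all locks currently held have strictly lower rank. The following is a sketch of that discipline as a runtime check; it is hypothetical (the MPS enforces the ordering by convention, and a real version would track held locks per thread).

```c
#include <assert.h>

/* Ranks matching .deadlock: the binary global lock must be claimed
   before the arena lock, which must be claimed before the recursive
   global lock. */
enum {
    RANK_GLOBAL_BINARY = 1,
    RANK_ARENA = 2,
    RANK_GLOBAL_RECURSIVE = 3
};

static int held[4];   /* stack of ranks held (would be thread-local) */
static int top = 0;

/* A claim is allowed only if every held lock has a lower rank.  This
   also forbids claiming a second arena lock while one is held. */
int claim_allowed(int rank)
{
    return top == 0 || held[top - 1] < rank;
}

void claim(int rank)
{
    assert(claim_allowed(rank));
    held[top++] = rank;
}

void release(void)
{
    assert(top > 0);
    top--;
}
```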
47.5. Analysis
.anal.simple: For the code to function correctly, it must be easy to change correctly, so a simple approach is desirable. We also have to ensure that performance is not unreasonably degraded.
47.5.1. Performance cost of locking
.lock-cost: The performance costs of locking are:
.lock-cost.overhead: the overhead of claiming and releasing locks;
.lock-cost.pause: the pauses caused by one thread being blocked on another thread.
.lock-cost.wait: the time wasted by one thread being blocked on another thread.
.anal.perf.signif: .lock-cost.pause is significant if there are MPS functions that take a long time. Using more locks, e.g. having a lock per pool as well as a lock per arena, is a way of decreasing the locking conflict between threads (.lock-cost.pause and .lock-cost.wait). However this could increase .lock-cost.overhead significantly.
.anal.perf.work: But all MPS functions imply a small workload unless a collection is taking place. During a collection, in practice and certainly in the near future, all threads will most likely be suspended while the collection work goes on. (The pages being scanned will need to be unprotected, which implies that the mutator will have to be stopped.) We should also remember that unless we are running on a genuine multiprocessor, .lock-cost.wait is irrelevant.
.anal.perf.alloc: During typical use we expect allocation to be the most frequent activity. Allocation buffers (design.mps.buffer) are designed to allow allocation in concurrent threads without needing a lock. So the most significant time a thread spends in the MPS will be on a buffer-fill or during a collection. The next most significant use is likely to be buffer creation and deletion, as a separate buffer will be required for each thread.
.anal.perf.lock: So overall the performance cost of locking is, I estimate, most significantly the overhead of calling the locking functions. Hence it would be undesirable from a performance point of view to have more than one lock.
47.5.2. Recursive vs binary locks
.anal.reentrance: The simplest way to lock the code safely is to define which code runs inside or outside the lock. Calling from the outside to the inside implies a lock has to be claimed. Returning means the lock has to be released. Control flow from outside to outside and from inside to inside needs no locking action. To implement this a function defined on the external interface needs to claim the lock on entry and release it on exit. Our code currently uses some external functions with the lock already held. There are two ways to implement this:
.recursive: Each external function claims a recursive lock.

- simple;
- have to worry about locking depth;
- extra locking overhead on internal calls of external functions.
.binary: Each external function claims a binary lock. Replace each internal call of an external function with a call to a newly defined internal one.

- more code;
- slightly easier to reason about.
.anal.strategy: It seems that the .recursive strategy is the easiest to implement first, but could be evolved into a .binary strategy. (That evolution has now happened. tony 1999-08-31).
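The .binary strategy can be sketched as follows. The names are invented for illustration: each external function claims a non-recursive lock and delegates to an internal variant; any code that already holds the lock calls the internal variant directly, so the lock is never claimed twice. A pthread mutex stands in for the MPS binary lock.

```c
#include <pthread.h>

static pthread_mutex_t arenaLock = PTHREAD_MUTEX_INITIALIZER;
static int allocCount = 0;   /* example of data protected by the lock */

/* Internal version: the caller must already hold the arena lock. */
static int poolAllocInternal(void)
{
    return ++allocCount;
}

/* External version: claims the lock, then delegates. */
int PoolAllocExternal(void)
{
    int n;
    pthread_mutex_lock(&arenaLock);
    n = poolAllocInternal();
    pthread_mutex_unlock(&arenaLock);
    return n;
}

/* Another external function needing the same work internally: it calls
   the internal variant rather than re-entering through the external
   interface, which would deadlock on a binary lock. */
int PoolAllocTwiceExternal(void)
{
    int n;
    pthread_mutex_lock(&arenaLock);
    poolAllocInternal();
    n = poolAllocInternal();
    pthread_mutex_unlock(&arenaLock);
    return n;
}
```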
47.6. Ideas
.sol.arena-lock: Lock per arena which locks all MPS structures associated with the arena, except allocation buffers.
.sol.init: Shared static data may not be changed. It is initialised before being read, and if re-initialised the values written must be identical to those already there. Essentially, only read-only shared static data is allowed.
.sol.fine-grain: Use finer-grained locks, for example, a lock per pool instance. The arena lock then protects only operations on the arena itself; pool locks are claimed per pool. An ordering on pool instances would avoid deadlock.
.sol.global: Use global locks for genuinely global data which must be updated dynamically. An ordering between global and arena locks would avoid deadlock.
47.7. Implementation
Use MPS locks (design.mps.lock) to do locking.
47.7.1. Locking Functions
ArenaEnter() and ArenaLeave() are used to claim and release the arena lock. To implement this:

- There is a lock for every arena. The arena class init function allocates the lock as well as the arena itself.
- ArenaInit() calls LockInit() on the lock and initializes the pointer to it from the arena.
- ArenaDestroy() calls LockFinish() on it.
- ArenaEnter() claims the lock.
- ArenaLeave() releases the lock.
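The lifecycle above can be sketched in C. This is a minimal sketch, not the real implementation: the structure layout and allocation scheme are invented, and pthread mutexes stand in for the MPS Lock module's LockInit()/LockFinish().

```c
#include <pthread.h>
#include <stdlib.h>

typedef pthread_mutex_t Lock;

static void LockInit(Lock *lock)   { pthread_mutex_init(lock, NULL); }
static void LockFinish(Lock *lock) { pthread_mutex_destroy(lock); }

typedef struct Arena {
    Lock *lock;   /* pointer to the arena's lock, set at init */
} Arena;

/* The arena class init function allocates the lock as well as the
   arena itself, then initializes the lock and the pointer to it. */
Arena *ArenaInit(void)
{
    Arena *arena = malloc(sizeof *arena);
    if (arena == NULL)
        return NULL;
    arena->lock = malloc(sizeof *arena->lock);
    if (arena->lock == NULL) {
        free(arena);
        return NULL;
    }
    LockInit(arena->lock);
    return arena;
}

void ArenaDestroy(Arena *arena)
{
    LockFinish(arena->lock);
    free(arena->lock);
    free(arena);
}

void ArenaEnter(Arena *arena) { pthread_mutex_lock(arena->lock); }
void ArenaLeave(Arena *arena) { pthread_mutex_unlock(arena->lock); }
```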
47.7.3. Validation
We have to be careful about validation. Any function that is called from an arena-safe function without the arena lock held must itself be safe, or must manipulate only non-shared data.
For example, calling PoolIsValid()
before claiming the lock would be
wrong if PoolIsValid()
is unsafe. Defining it to be safe would
involve locking it, which if done in all similar situations would be
very expensive.
Possibly remove validation from accessor methods; replace with
signature check and IsValid()
calls in callers of accessor
functions.
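The contrast between a cheap signature check and full validation can be sketched as follows. The names and the magic value are invented for illustration. The point is that a signature is shared-non-mutable: it is written once at initialization and thereafter only compared, so checking it needs no lock, whereas a full IsValid() that follows pointers into shared structures would.

```c
#include <stddef.h>

/* Arbitrary illustrative magic value identifying a valid pool. */
#define POOL_SIG ((unsigned long)0x519B0019)

typedef struct Pool {
    unsigned long sig;   /* signature: set once at init, never changed */
    struct Pool *next;   /* example of shared mutable state */
} Pool;

/* Cheap, lock-free check: touches only shared-non-mutable data. */
int PoolSigCheck(const Pool *pool)
{
    return pool != NULL && pool->sig == POOL_SIG;
}
```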
Annotations?:

- safe
- non-shared
- shared-non-mutable
47.7.4. Safe functions
Arena

- ArenaCreate() – no shared data; no lock; calls LockInit().
- ArenaDestroy() – no shared data; no lock (should only finish arena after use); calls LockFinish().
- ArenaDescribe() – lock.
Root (for the purposes of locking this module can be thought of as external)

- RootCreate() – calls create – lock
- RootCreateTable() – calls create – lock
- RootDestroy() – lock
- RootDescribe() – lock

will be attached to arena, can lock now.
Pool

- PoolCreate()/PoolCreateV() – lock (Create calls CreateV, which locks).
- PoolDestroy() – lock
- PoolAlloc() – lock
- PoolFree() – lock
- PoolArena() – accesses shared-non-mutable data only
- PoolDescribe() – lock
Format

- FormatCreate() – lock
- FormatDestroy() – lock
Buffer

- BufferCreate() – lock
- BufferDestroy() – lock
- BufferFill() – lock
- BufferTrip() – lock
- BufferPool() – accesses shared-non-mutable data only
- BufferDescribe() – lock
- BufferCommit() – “unsafe”: buffer may be used by a single thread only (but safe with respect to the arena)
- BufferReserve() – “unsafe”: likewise
PoolClass (only shared data is static and non-mutable)

- PoolClass()
- PoolClassAMC()
- PoolClassMV()
- PoolClassMFS()

Sig (as with PoolClass, relies on static data reinitialised to a constant value)
Collect

- Collect() – lock
Thread

- ThreadRegister() – lock
- ThreadDeregister() – lock