template<class Strand>
class basic_mutex;
using mutex = basic_mutex<boost::asio::io_context::strand>;
As in Boost.Fiber:
[…] synchronization objects can neither be moved nor copied. A synchronization object acts as a mutually-agreed rendezvous point between different fibers. If such an object were copied somewhere else, the new copy would have no consumers. If such an object were moved somewhere else, leaving the original instance in an unspecified state, existing consumers would behave strangely.
Member-functions
Constructor
basic_mutex(executor_type executor);
Constructs a new mutex.
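For illustration, a minimal construction sketch (the io_context setup is ordinary Boost.Asio; only the mutex alias comes from this page, and any header/namespace setup is assumed):

boost::asio::io_context ioctx;
boost::asio::io_context::strand strand{ioctx};

// mutex is basic_mutex<boost::asio::io_context::strand>, so it is
// constructed from the strand that will serialize its internal handlers
mutex mtx{strand};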
Destructor
~basic_mutex();
Destructs the mutex.
As in pthread_mutex_destroy(3):

Attempting to destroy a locked mutex or a mutex that is referenced (for example, while being used in a pthread_cond_timedwait() or pthread_cond_wait()) by another thread results in undefined behavior.
get_executor()
executor_type get_executor() const;
Returns the strand associated with this mutex.
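As a usage sketch, the returned strand is a regular Asio executor, so ordinary facilities such as boost::asio::post() accept it (the lambda body here is illustrative):

boost::asio::post(mtx.get_executor(), []() {
    // runs through the same strand the mutex uses internally
});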
lock()
void lock(fiber::this_fiber this_fiber);
Locks the mutex. If another fiber has already locked the mutex, a call to lock() will suspend the current fiber until the lock is acquired.
Note: lock() is a suspension point, but it never throws fiber_interrupted. It is not an interruption point.
Note: lock() uses dispatch semantics for better performance. If the lock can be acquired without yielding to another fiber, it will be acquired right away.
unlock()
void unlock();
Unlocks the mutex.
The mutex must be locked by the current fiber. Otherwise, the behavior is undefined.
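For illustration, a minimal sketch of the lock/unlock protocol (worker, counter and mtx are illustrative names, not library API):

void worker(mutex& mtx, int& counter, fiber::this_fiber this_fiber)
{
    mtx.lock(this_fiber); // may suspend this fiber until the lock is acquired
    ++counter;            // critical section, serialized among fibers
    mtx.unlock();         // must be called by the same fiber that locked
}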
Nested types
executor_type
using executor_type = Strand;
Post vs dispatch
Post semantics mean that the operation won't start immediately. Instead, it is always scheduled through the underlying execution context. A simple illustration of this concept is the fiber constructor.
void start(std::vector<int>& coll)
{
    for (auto e: coll) {
        // e is captured by value: the fiber body only runs after this
        // loop iteration (and its e) is gone
        fiber([&coll, e]() { worker(coll, e); }).detach();
    }
}
Were the function passed to the fiber constructor to execute immediately (dispatch semantics), the iterators involved in the for-loop could be invalidated by some of the workers mutating coll. Post semantics thus guarantee “run-to-completion” and tend to be adopted as the default execution policy because they tend to be safer. Even if the iterator invalidation problem didn't exist because you adopted a memory-safe and *-safe language, other subtle and severe concurrency bugs such as starvation would still haunt you.
The problem is that dispatch semantics run code unrelated to the function you're coding before a suspension point is reached. It virtually works as a new suspension point (one where the pool of tasks from which the scheduler chooses the next fiber is a special, limited set). mutex::unlock() isn't a suspension point, and the user doesn't expect it to run arbitrary code, so it would be unsafe for mutex::unlock() to use dispatch semantics by immediately executing the next fiber pending on the mutex. On the other hand, mutex::lock() already is a suspension point, so it is not affected by this problem and dispatch semantics are fine (dispatch semantics in the mutex::lock() case only mean that mutex::lock() might not yield thread control to another fiber while it acquires the lock).
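To make the hazard concrete, consider this hypothetical fiber body (State, step1() and step2() are illustrative names, not library API):

struct State { void step1() {} void step2() {} }; // illustrative only

void fiber_a(mutex& mtx, State& state, fiber::this_fiber this_fiber)
{
    mtx.lock(this_fiber);
    state.step1();
    mtx.unlock();
    // Post semantics: a fiber blocked in mtx.lock() is merely
    // scheduled at this point; fiber A keeps the thread until its
    // next suspension point, so step2() runs with no surprise
    // interleaving.
    //
    // Dispatch semantics would instead run the waiting fiber's code
    // inside the unlock() call itself, i.e. between step1() and
    // step2(), silently turning unlock() into a suspension point.
    state.step2();
}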
Post semantics also tend to be safer with regard to guaranteeing some level of fairness around IO operations. This fairness problem isn't as severe for the mutex case, and a synchronization primitive such as mutex should actually favour performance, so dispatch semantics fit it better (when safe, as in the mutex::lock() case).