This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH v2 1/3] Mutex: Accelerate lock acquisition by queuing spinner


Hi, dear maintainers,
  I hope this patchset can catch the 2.29 release cycle; could you help
get it reviewed? If there are any questions, I will resolve them ASAP. Thanks

On 2018/12/20 2:15 PM, Kemi Wang wrote:
> An adaptive mutex carries the semantic of spinning for a while before
> calling into the kernel to block. Thus, the lock of an adaptive mutex can
> be acquired immediately, by spinning, or after being woken up.
> 
> Currently, the spin-waiting algorithm of the adaptive mutex has each
> processor repeatedly execute a test_and_set instruction until either the
> maximum spin count is reached or the lock is acquired. However, lock
> performance under spinning degrades significantly as the number of
> spinning processors increases. At least two factors cause this
> degradation [1]. First, in order to release the lock, the lock holder has
> to contend with the spinning processors for exclusive access to the lock's
> cache line (e.g. "lock; decl 0%" of lll_unlock() in
> sysdeps/unix/sysv/linux/x86_64/lowlevellock.h for pthread_mutex_unlock()).
> On most multiprocessor architectures, it has to wait behind the
> test_and_set instructions issued by the spinning processors.
> Furthermore, on invalidation-based cache-coherence systems, a
> test_and_set instruction triggers a "read-for-ownership" request for
> exclusive access to the lock's cache line, which can slow the lock
> holder's accesses to other locations (at least those sharing the cache
> line with the lock).
> 
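For readers unfamiliar with the scheme being criticized above, here is a minimal sketch in C11 atomics of bounded test_and_set spin-waiting — every waiter hammers an atomic exchange on the lock word until a spin budget runs out. This is an illustration only, not glibc's actual code; the names spin_lock_t, MAX_SPIN_COUNT, and spin_try_acquire are hypothetical, and a real adaptive mutex would fall back to a futex wait when the budget is exhausted.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define MAX_SPIN_COUNT 100  /* hypothetical spin budget */

typedef struct { atomic_int locked; } spin_lock_t;

/* Try to acquire by repeated test_and_set (atomic exchange).
 * Returns true if the lock was acquired within the spin budget. */
static bool spin_try_acquire(spin_lock_t *l)
{
    for (int i = 0; i < MAX_SPIN_COUNT; i++) {
        /* Each exchange requests exclusive (read-for-ownership) access
         * to the lock's cache line even when the lock is already held --
         * this is the contention source described in the mail above. */
        if (atomic_exchange(&l->locked, 1) == 0)
            return true;
    }
    return false;  /* budget exhausted; caller would block in the kernel */
}

static void spin_release(spin_lock_t *l)
{
    atomic_store(&l->locked, 0);
}
```

Note that every iteration writes (or attempts to write) the lock word, so the cache line ping-pongs between spinners; a test-and-test-and-set loop, or the queued spinner this patchset proposes, reduces that traffic by having most waiters only read.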

