
Re: [PATCH v7 2/2] Mutex: Replace trylock by read only while spinning


On 07/06/2018 03:50 AM, Kemi Wang wrote:
> The pthread adaptive spin mutex spins on the lock for a while before
> calling into the kernel to block. But in the current implementation of
> spinning, the spinners go straight back to LLL_MUTEX_TRYLOCK (cmpxchg)
> while the lock is contended. That is not a good idea on many targets,
> because it forces expensive memory synchronization among processors and
> penalizes other running threads. For example, it constantly floods the
> system with "read for ownership" requests, which are much more expensive
> to process than a single read. Thus, we only use an MO read until we
> observe that the lock is no longer held, as suggested by Andi Kleen.
> 
> Performance impact:
> It brings some benefit in scenarios with severe lock contention on many
> architectures (a significant performance improvement is not expected),
> and whole-system performance can benefit from this modification because
> many unnecessary "read for ownership" requests, which stress the cache
> system by broadcasting cache line invalidations, are eliminated during
> spinning.
> 
> Meanwhile, it may cause a tiny performance regression in the handover to
> a new lock holder for the case where the lock is acquired via spinning,
> because the lock state is checked before acquiring the lock via trylock.
> 
> A similar mechanism has been implemented for the pthread spin lock.
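
For concreteness, the change described above amounts to a
test-and-test-and-set spin. Below is a minimal standalone sketch using C11
atomics rather than glibc's internal lowlevellock macros (the quoted patch
uses LLL_MUTEX_TRYLOCK; the helper names and the two-state lock here are
invented for illustration and are not the actual glibc code):

#include <stdatomic.h>
#include <stdbool.h>

/* 0 = unlocked, 1 = locked (the real lowlevellock states are richer).  */

bool
try_lock (atomic_int *lock)
{
  int expected = 0;
  /* A compare-and-exchange needs the cache line in exclusive state
     ("read for ownership") even when it fails.  */
  return atomic_compare_exchange_strong (lock, &expected, 1);
}

/* Old scheme: retry the atomic RMW on every spin iteration.  */
bool
spin_with_trylock (atomic_int *lock, int max_spins)
{
  for (int i = 0; i < max_spins; i++)
    if (try_lock (lock))
      return true;
  return false;
}

/* Proposed scheme: spin with a plain relaxed load and attempt the RMW
   only once the lock has been observed to be free, so contended
   spinners keep the cache line in a shared state.  */
bool
spin_with_read_then_trylock (atomic_int *lock, int max_spins)
{
  for (int i = 0; i < max_spins; i++)
    if (atomic_load_explicit (lock, memory_order_relaxed) == 0
        && try_lock (lock))
      return true;
  return false;
}

The difference only shows up under contention: the failed cmpxchg in the
first loop repeatedly forces the line into exclusive state on each spinner,
while the relaxed load in the second loop lets all waiters share the line
until the owner actually releases the lock.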

Why should I accept this patch?

You make a strong case about the cost of the expensive memory synchronization.

However, the numbers don't appear to back this up.

If the cost of the synchronization were really that high, why doesn't
adding the spinning improve performance?

Do you need to do a whole-system performance measurement?

As it stands, it looks like this patch makes the general use case of 1-4
threads roughly 5% slower across a variety of workloads.

I'm not inclined to include this work unless there is some stronger
justification, or perhaps I have just misunderstood the numbers you
have provided.

-- 
Cheers,
Carlos.

