This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.


[Bug nptl/23962] Very high overhead of pthread_spin_lock on multi-node NUMA systems


https://sourceware.org/bugzilla/show_bug.cgi?id=23962

kemi <kemi.wang at intel dot com> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kemi.wang at intel dot com

--- Comment #3 from kemi <kemi.wang at intel dot com> ---
(In reply to Carlos O'Donell from comment #1)
> (In reply to H.J. Lu from comment #0)
> > Created attachment 11437 [details]
> > A workload with pthread_spin_lock
> > 
> > This workload with pthread_spin_lock shows that:
> 
> ... what is the solution? Implement MCS locks in userspace using per-cpu
> data?

We have tried an MCS-style lock with per-thread data, and it reduces spinlock
overhead a lot under heavy lock contention, but it requires one more pointer in
pthread_spinlock_t, which would break the existing ABI. I wonder if that is
acceptable?
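
For reference, a minimal sketch of the MCS idea in C11 atomics (illustrative
only, not our actual patch; all names here are made up). It shows why the lock
word has to hold a queue-tail pointer:

/* Minimal MCS lock sketch -- illustrative only, not the actual patch.
   Each waiter spins on its own per-thread node, so contended threads
   do not bounce one shared cache line across NUMA nodes.  The price is
   that the lock word must hold a pointer to the queue tail, which is
   the extra pointer that does not fit in the current int-sized
   pthread_spinlock_t.  */

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node
{
  _Atomic (struct mcs_node *) next;
  atomic_bool locked;
};

/* The whole lock is one pointer: the tail of the waiter queue.  */
typedef _Atomic (struct mcs_node *) mcs_lock_t;

/* Per-thread queue node (sufficient as long as a thread holds at most
   one such lock at a time).  */
static _Thread_local struct mcs_node mcs_self;

static void
mcs_lock (mcs_lock_t *lock)
{
  struct mcs_node *self = &mcs_self;
  atomic_store_explicit (&self->next, NULL, memory_order_relaxed);
  atomic_store_explicit (&self->locked, true, memory_order_relaxed);

  /* Append ourselves to the tail of the queue.  */
  struct mcs_node *prev
    = atomic_exchange_explicit (lock, self, memory_order_acq_rel);
  if (prev == NULL)
    return;                     /* Lock was free; we own it now.  */

  /* Link behind the previous waiter and spin on our own node only.  */
  atomic_store_explicit (&prev->next, self, memory_order_release);
  while (atomic_load_explicit (&self->locked, memory_order_acquire))
    ;
}

static void
mcs_unlock (mcs_lock_t *lock)
{
  struct mcs_node *self = &mcs_self;
  struct mcs_node *next
    = atomic_load_explicit (&self->next, memory_order_acquire);

  if (next == NULL)
    {
      /* No visible successor: try to swing the tail back to empty.  */
      struct mcs_node *expected = self;
      if (atomic_compare_exchange_strong_explicit
          (lock, &expected, NULL, memory_order_acq_rel,
           memory_order_acquire))
        return;
      /* A successor is in mcs_lock but has not linked itself yet.  */
      while ((next = atomic_load_explicit (&self->next,
                                           memory_order_acquire)) == NULL)
        ;
    }
  /* Hand the lock directly to the next waiter.  */
  atomic_store_explicit (&next->locked, false, memory_order_release);
}

The point of the exchange on the tail is that each waiter only spins on its
own node's flag, so the cache-line ping-pong that makes pthread_spin_lock so
expensive across NUMA nodes goes away.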

Hongjiu has another approach for the same purpose.

-- 
You are receiving this mail because:
You are on the CC list for the bug.
