This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.
[Bug nptl/23962] Very high overhead of pthread_spin_lock on multi-node NUMA systems
- From: "kemi.wang at intel dot com" <sourceware-bugzilla at sourceware dot org>
- To: glibc-bugs at sourceware dot org
- Date: Mon, 10 Dec 2018 08:27:33 +0000
- Subject: [Bug nptl/23962] Very high overhead of pthread_spin_lock on multi-node NUMA systems
- Auto-submitted: auto-generated
- References: <bug-23962-131@http.sourceware.org/bugzilla/>
https://sourceware.org/bugzilla/show_bug.cgi?id=23962
kemi <kemi.wang at intel dot com> changed:
           What            |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |kemi.wang at intel dot com
--- Comment #3 from kemi <kemi.wang at intel dot com> ---
(In reply to Carlos O'Donell from comment #1)
> (In reply to H.J. Lu from comment #0)
> > Created attachment 11437 [details]
> > A workload with pthread_spin_lock
> >
> > This workload with pthread_spin_lock shows that:
>
> ... what is the solution? Implement MCS locks in userspace using per-cpu
> data?
We have tried the MCS approach with per-thread data, and it can reduce spinlock
overhead a lot under heavy lock contention, but it requires one more pointer in
pthread_spinlock_t, which would break the existing ABI. I wonder if that's
acceptable?
Hongjiu has another approach for the same purpose.
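For illustration, here is a minimal sketch of the MCS technique under
discussion, written with C11 atomics. The type and function names are
hypothetical (this is not the actual patch tried above): each contending
thread spins on its own queue node, so waiters on different NUMA nodes stop
bouncing the lock's cache line between sockets.

    /* Hypothetical MCS lock sketch -- not the glibc proposal itself.
       Each waiter spins on the 'locked' flag of its own node, which
       stays in a cache line local to that thread. */
    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    struct mcs_node {
        _Atomic(struct mcs_node *) next;
        atomic_bool locked;
    };

    struct mcs_lock {
        /* Tail of the waiter queue.  This one extra pointer is the
           word that would not fit in the existing pthread_spinlock_t. */
        _Atomic(struct mcs_node *) tail;
    };

    static void mcs_lock_acquire (struct mcs_lock *lock, struct mcs_node *self)
    {
        atomic_store_explicit (&self->next, NULL, memory_order_relaxed);
        atomic_store_explicit (&self->locked, true, memory_order_relaxed);

        /* Join the queue; the previous tail, if any, is our predecessor. */
        struct mcs_node *prev
            = atomic_exchange_explicit (&lock->tail, self, memory_order_acq_rel);
        if (prev == NULL)
            return;  /* Lock was free; we own it. */

        atomic_store_explicit (&prev->next, self, memory_order_release);

        /* Spin only on our own node, not on shared lock state. */
        while (atomic_load_explicit (&self->locked, memory_order_acquire))
            ;  /* A pause/backoff instruction could go here. */
    }

    static void mcs_lock_release (struct mcs_lock *lock, struct mcs_node *self)
    {
        struct mcs_node *succ
            = atomic_load_explicit (&self->next, memory_order_acquire);
        if (succ == NULL)
        {
            /* No visible successor: try to swing the tail back to NULL. */
            struct mcs_node *expected = self;
            if (atomic_compare_exchange_strong_explicit
                  (&lock->tail, &expected, NULL,
                   memory_order_acq_rel, memory_order_acquire))
                return;  /* Queue is empty; lock released. */
            /* A successor is mid-enqueue; wait for its next-pointer. */
            do
              succ = atomic_load_explicit (&self->next, memory_order_acquire);
            while (succ == NULL);
        }
        /* Hand the lock directly to the next waiter. */
        atomic_store_explicit (&succ->locked, false, memory_order_release);
    }

Each thread supplies its own node (stack- or TLS-allocated), and the lock
itself needs a full pointer of storage for the queue tail. That extra word
is exactly what does not fit in the existing pthread_spinlock_t layout,
hence the ABI concern above.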
--
You are receiving this mail because:
You are on the CC list for the bug.