[PATCH v5 0/3] Optimize CAS [BZ #28537]
H.J. Lu
hjl.tools@gmail.com
Wed Nov 10 18:41:50 GMT 2021
Changes in v5:
1. Put back __glibc_unlikely in __lll_trylock and lll_cond_trylock.
2. Remove an atomic load from a CAS usage which has already been optimized.
3. Add an empty statement with a semicolon after a goto label for older
compiler versions (see the sketch after this list).
4. Simplify CAS optimization.
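Regarding item 3: before C23, a label must be followed by a statement, so
a label that directly precedes a declaration or the closing brace of a
block is rejected by older compilers.  A hypothetical illustration (not
the actual glibc code) of the fix:

int
f (int x)
{
  if (x < 0)
    goto out;
  x += 2;
out: ;  /* The empty statement is required here: pre-C23 compilers
           reject a label placed directly before a declaration.  */
  int y = x * 2;
  return y;
}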
The CAS instruction is expensive.  From the x86 CPU's point of view,
getting a cache line for writing is more expensive than getting it for
reading.  See Appendix A.2 Spinlock in:
https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/xeon-lock-scaling-analysis-paper.pdf
A full compare and swap grabs the cache line in exclusive state and
causes excessive cache line bouncing.  The difference between spinning
with a CAS and spinning on a load is sketched below.
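To illustrate, here is a minimal C11 sketch (my names, not code from the
whitepaper or from glibc) contrasting the two spin strategies: spinning
with a CAS requests the cache line in exclusive state on every
iteration, while a test-and-test-and-set spin reads the line in shared
state and only issues the CAS once the lock looks free.

#include <stdatomic.h>

/* Naive spin: every iteration is a CAS, so every waiter keeps pulling
   the cache line into exclusive state even while the lock is held.  */
static void
spin_lock_cas_only (atomic_int *lock)
{
  int expected;
  do
    expected = 0;
  while (!atomic_compare_exchange_weak_explicit (lock, &expected, 1,
                                                 memory_order_acquire,
                                                 memory_order_relaxed));
}

/* Test-and-test-and-set: waiters spin on a plain load, so the line
   stays shared, and the CAS is attempted only when the lock appears
   free.  */
static void
spin_lock_test_first (atomic_int *lock)
{
  for (;;)
    {
      while (atomic_load_explicit (lock, memory_order_relaxed) != 0)
        ;  /* Read-only spin: no exclusive requests.  */
      int expected = 0;
      if (atomic_compare_exchange_weak_explicit (lock, &expected, 1,
                                                 memory_order_acquire,
                                                 memory_order_relaxed))
        return;
    }
}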
Optimize CAS in low level locks and pthread_mutex_lock.c:
1. Do an atomic load first and skip the CAS when the comparison would
fail, to reduce cache line bouncing on contended locks.
2. Replace atomic_compare_and_exchange_bool_acq with
atomic_compare_and_exchange_val_acq to avoid an extra atomic load after
a failed CAS (the val variant returns the value observed in memory).
Both changes are sketched below.
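Here is a minimal sketch of both changes applied to a trylock-style
path, written in portable C11 rather than glibc's internal atomic_*
macros; the function name and the 0 = unlocked / 1 = locked encoding of
the lock word are assumptions for illustration.  C11's compare-exchange
updates `expected' with the observed value on failure, which gives the
same effect the patches get from switching to
atomic_compare_and_exchange_val_acq.

#include <stdatomic.h>

/* 0 = unlocked, 1 = locked (assumed encoding).  Returns 0 on success.  */
static int
lll_trylock_sketch (atomic_int *futex)
{
  /* Change 1: read the lock word first and only attempt the CAS when
     the comparison can succeed.  A plain load keeps the cache line
     shared; a CAS requests it exclusively even when it fails.  */
  if (atomic_load_explicit (futex, memory_order_relaxed) != 0)
    return 1;  /* Already locked: no CAS issued.  */

  int expected = 0;
  /* Change 2: a failed compare-exchange already reports the value
     observed in memory (here via `expected'), so no second atomic
     load is needed to find out why it failed.  */
  if (atomic_compare_exchange_strong_explicit (futex, &expected, 1,
                                               memory_order_acquire,
                                               memory_order_relaxed))
    return 0;  /* Lock acquired.  */
  return 1;    /* `expected' holds the old value.  */
}

On a contended lock, the early return after the plain load means waiters
never force the line out of the owner's cache just to learn that the
lock is already taken.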
This is the first patch set to optimize CAS.  I will submit the remaining
CAS optimizations in glibc after this patch set has been accepted.
With all CAS optimizations applied, on a machine with 112 cores,
"make check -j28" under heavy load took:

3093.18user 1644.12system 22:26.05elapsed 351%CPU

vs. without CAS optimizations:

3746.07user 1614.93system 22:02.91elapsed 405%CPU
H.J. Lu (3):
Reduce CAS in low level locks [BZ #28537]
Reduce CAS in __pthread_mutex_lock_full [BZ #28537]
Optimize CAS in __pthread_mutex_lock_full [BZ #28537]
nptl/lowlevellock.c | 12 ++++-----
nptl/pthread_mutex_lock.c | 49 +++++++++++++++++++++++--------------
sysdeps/nptl/lowlevellock.h | 33 +++++++++++++++++--------
3 files changed, 60 insertions(+), 34 deletions(-)
--
2.33.1