This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] NUMA spinlock [BZ #23962]
- From: Torvald Riegel <triegel at redhat dot com>
- To: Ma Ling <ling dot ma dot program at gmail dot com>, libc-alpha at sourceware dot org
- Cc: hongjiu dot lu at intel dot com, "ling.ma" <ling dot ml at antfin dot com>, Wei Xiao <wei3 dot xiao at intel dot com>
- Date: Tue, 15 Jan 2019 00:26:30 +0100
- Subject: Re: [PATCH] NUMA spinlock [BZ #23962]
- References: <20181226025019.38752-1-ling.ma@MacBook-Pro-8.local>
On Wed, 2018-12-26 at 10:50 +0800, Ma Ling wrote:
> 2. Critical Section Integration (CSI)
> Essentially, a spinlock behaves as if one core completed the critical
> sections one by one. So when contention happens, the serialized work
> is sent to the core that owns the lock, which is responsible for
> executing it. That can save much time and power, because all shared
> data stay in the private cache of the lock owner.
I agree that this can improve performance, potentially both by increasing
data locality for the critical sections themselves and by decreasing
contention on the lock. However, it will interfere with thread-local
storage and with assumptions about which OS thread a critical section runs
on.
Maybe it's better to first experiment with this change in semantics in C++;
ISO C++ Study Group 1 on parallelism and concurrency is much deeper into
this topic than the glibc community is. This isn't really a typical lock
anymore when you do that, but rather a special kind of execution service
for small functions; the study group has talked about executors that
execute in guaranteed sequential fashion.