This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] NUMA spinlock [BZ #23962]
- From: "马凌(彦军)" <ling dot ml at antfin dot com>
- To: Torvald Riegel <triegel at redhat dot com>, Ma Ling <ling dot ma dot program at gmail dot com>, <libc-alpha at sourceware dot org>
- Cc: <hongjiu dot lu at intel dot com>, Wei Xiao <wei3 dot xiao at intel dot com>
- Date: Tue, 15 Jan 2019 12:48:20 +0800
- Subject: Re: [PATCH] NUMA spinlock [BZ #23962]
- References: <20181226025019.38752-1-ling.ma@MacBook-Pro-8.local> <1c5330189c18781afec2db7b47158de2dea041b8.camel@redhat.com>
On 2019/1/15 at 7:26 AM, "Torvald Riegel" <triegel@redhat.com> wrote:
On Wed, 2018-12-26 at 10:50 +0800, Ma Ling wrote:
> 2. Critical Section Integration (CSI)
> Essentially, a spinlock behaves as if a single core completed the
> critical sections one by one. So when contention happens, the
> serialized work is sent to the core that owns the lock, which is
> responsible for executing it. That saves a lot of time and power,
> because all the shared data stays in the lock owner's private cache.
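[For context, the delegation idea above can be sketched as a combining lock built on C11 atomics. This is only an illustration of the technique, not code from the patch; the names csi_lock, csi_request and csi_execute are made up here, and a real implementation would add backoff, futex waiting and NUMA-aware queueing.]

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* One delegation request per thread that wants to run a critical section.  */
struct csi_request
{
  void (*func) (void *);              /* the critical section */
  void *arg;                          /* its argument */
  _Atomic bool done;                  /* set once the section has run */
  struct csi_request *_Atomic next;   /* next waiter in the queue */
};

/* Points at the most recently enqueued request; NULL means idle.  */
struct csi_lock
{
  struct csi_request *_Atomic tail;
};

#define CSI_LOCK_INITIALIZER { NULL }

/* Instead of taking the lock, each thread publishes its critical section.
   The thread that finds the queue empty becomes the combiner: it runs its
   own section and then every section queued behind it, all on one core,
   so the shared data stays in that core's private cache.  */
static void
csi_execute (struct csi_lock *lock, void (*func) (void *), void *arg)
{
  struct csi_request req = { func, arg, false, NULL };
  struct csi_request *prev
    = atomic_exchange_explicit (&lock->tail, &req, memory_order_acq_rel);

  if (prev != NULL)
    {
      /* The lock is busy: link behind the previous request and wait for
	 the combiner to run our critical section for us.  */
      atomic_store_explicit (&prev->next, &req, memory_order_release);
      while (!atomic_load_explicit (&req.done, memory_order_acquire))
	;				/* Spin; a real lock would back off.  */
      return;
    }

  /* We are the combiner.  */
  struct csi_request *cur = &req;
  for (;;)
    {
      cur->func (cur->arg);

      struct csi_request *next
	= atomic_load_explicit (&cur->next, memory_order_acquire);
      if (next == NULL)
	{
	  /* No visible successor: try to hand the lock back.  This only
	     succeeds if nobody enqueued behind CUR in the meantime.  */
	  struct csi_request *expected = cur;
	  if (atomic_compare_exchange_strong_explicit
	      (&lock->tail, &expected, NULL,
	       memory_order_acq_rel, memory_order_acquire))
	    {
	      if (cur != &req)
		atomic_store_explicit (&cur->done, true,
				       memory_order_release);
	      return;
	    }
	  /* A new waiter is in the middle of linking itself; wait.  */
	  do
	    next = atomic_load_explicit (&cur->next, memory_order_acquire);
	  while (next == NULL);
	}
      if (cur != &req)
	atomic_store_explicit (&cur->done, true, memory_order_release);
      cur = next;
    }
}

[A caller would invoke csi_execute (&lock, my_section, my_arg) instead of wrapping the section in lock/unlock; whichever thread happens to be the combiner runs it.]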
I agree that this can improve performance because of potentially both
increasing data locality for the critical sections themselves and
decreasing contention in the lock. However, this will mess with thread-
local storage and assumptions about what OS thread a critical section runs
on.
Ling: Yes, we have to consider that when applying the NUMA spinlock.
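[To make that concern concrete with the sketch above, again with purely illustrative names: any critical section that touches thread-local state, errno included, would operate on the lock owner's copies once it executes on the owner's OS thread.]

#include <errno.h>

static __thread long tls_counter;	/* one copy per OS thread */

/* A critical section submitted for delegation.  When the lock owner runs
   it on the submitter's behalf, both accesses below hit the owner's
   thread-local state, not the submitting thread's: the submitter never
   sees the updated tls_counter or the errno value set here, and
   pthread_self () inside the section would name the owner as well.  */
static void
update_stats (void *arg)
{
  tls_counter++;			/* owner's TLS, not the caller's */
  if (arg == NULL)
    errno = EINVAL;			/* owner's errno, not the caller's */
}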
Maybe it's better to first experiment with this change in semantics in C++;
ISO C++ Study Group 1 on parallelism and concurrency is much deeper into
this topic than the glibc community is. This isn't really a typical lock
anymore when you do that, but rather a special kind of execution service
for small functions; the study group has talked about executors that
execute in guaranteed sequential fashion.
Ling: Thanks for your suggestion; we will think about it seriously.