This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: Fwd: [PATCH] NUMA spinlock [BZ #23962]


On 03/01/2019 05:35, 马凌(彦军) wrote:
>      create mode 100644 manual/examples/numa-spinlock.c
>      create mode 100644 sysdeps/unix/sysv/linux/numa-spinlock-private.h
>      create mode 100644 sysdeps/unix/sysv/linux/numa-spinlock.c
>      create mode 100644 sysdeps/unix/sysv/linux/numa-spinlock.h
>      create mode 100644 sysdeps/unix/sysv/linux/numa_spinlock_alloc.c
>      create mode 100644 sysdeps/unix/sysv/linux/x86/tst-numa-variable-overhead.c
>      create mode 100644 sysdeps/unix/sysv/linux/x86/tst-variable-overhead-skeleton.c
>      create mode 100644 sysdeps/unix/sysv/linux/x86/tst-variable-overhead.c

As far as I can tell, the new code is generic
(other than the presence of an efficient getcpu),
so I think the test should be generic too.
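
To make the generic/non-generic split concrete, here is a minimal
two-level lock sketch.  This illustrates the general technique only,
not the patch's implementation; MAX_NODES, the field layout and the
fallback policy are all assumptions on my part:

  #define _GNU_SOURCE
  #include <sched.h>       /* getcpu (glibc 2.29+).  */
  #include <stdatomic.h>

  #define MAX_NODES 64     /* Illustrative upper bound.  */

  struct two_level_lock
  {
    atomic_int global;             /* Contended across nodes.  */
    atomic_int local[MAX_NODES];   /* Contended within one node.  */
  };

  /* Returns the node index so the caller can pass it to release.  */
  static unsigned int
  two_level_lock_acquire (struct two_level_lock *l)
  {
    unsigned int cpu, node;
    /* getcpu is the only non-generic dependency: it has to be cheap
       (e.g. vDSO-backed) for the per-node dispatch to pay off.  */
    if (getcpu (&cpu, &node) != 0 || node >= MAX_NODES)
      node = 0;                    /* Fall back to a single queue.  */
    int expected = 0;
    /* Contend with same-node threads on a node-local word first...  */
    while (!atomic_compare_exchange_weak (&l->local[node], &expected, 1))
      expected = 0;
    /* ...then only one thread per node touches the global word.  */
    expected = 0;
    while (!atomic_compare_exchange_weak (&l->global, &expected, 1))
      expected = 0;
    return node;
  }

  static void
  two_level_lock_release (struct two_level_lock *l, unsigned int node)
  {
    atomic_store (&l->global, 0);
    atomic_store (&l->local[node], 0);
  }

A zero-initialized static struct two_level_lock starts unlocked.  The
point is that everything above builds on any Linux target; only the
cost of getcpu varies across architectures.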

>     --- /dev/null
>     +++ b/sysdeps/unix/sysv/linux/x86/tst-variable-overhead-skeleton.c
>     @@ -0,0 +1,384 @@
...
>     +/* Check spinlock overhead with a large number of threads.  The
>     +   critical region is very small.  Critical region + spinlock
>     +   overhead are not noticeable when the number of threads is
>     +   small.  As the number of threads increases, spinlock overhead
>     +   becomes the bottleneck, showing up in the wall time of thread
>     +   execution.  */

Yeah, this is not easy to do in a generic way.  I think
such measurement is problematic even on x86: you don't
know what else is going on on the system (or VM) while
the glibc test is running.

But precise timing is not that important for checking
the correctness of the locks, so I think a simplified
version could be generic test code.
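
Concretely, a timing-free variant could just check mutual exclusion
under heavy contention: many threads hammer a plain counter, and any
lost update flags a broken lock.  This is only a rough sketch;
pthread_spin_* stands in for the patch's NUMA spinlock API, and
NTHREADS/ITERS are arbitrary:

  #include <pthread.h>
  #include <stdio.h>

  #define NTHREADS 32
  #define ITERS    100000

  static pthread_spinlock_t lock;
  static unsigned long counter;  /* Deliberately not atomic.  */

  static void *
  worker (void *arg)
  {
    for (int i = 0; i < ITERS; i++)
      {
        pthread_spin_lock (&lock);
        counter++;               /* Tiny critical region.  */
        pthread_spin_unlock (&lock);
      }
    return NULL;
  }

  static int
  do_test (void)
  {
    pthread_t t[NTHREADS];
    pthread_spin_init (&lock, PTHREAD_PROCESS_PRIVATE);
    for (int i = 0; i < NTHREADS; i++)
      pthread_create (&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
      pthread_join (t[i], NULL);
    /* Any lost update means mutual exclusion failed;
       no timing is needed to detect it.  */
    if (counter != (unsigned long) NTHREADS * ITERS)
      {
        printf ("FAIL: counter = %lu\n", counter);
        return 1;
      }
    return 0;
  }

  int
  main (void)
  {
    return do_test ();
  }

That keeps the wall-time/overhead measurement as an x86-specific
benchmark while the correctness check runs everywhere.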
