This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug nptl/23962] Very high overhead of pthread_spin_lock on multi-node NUMA systems


https://sourceware.org/bugzilla/show_bug.cgi?id=23962

--- Comment #5 from kemi <kemi.wang at intel dot com> ---
(In reply to Carlos O'Donell from comment #4)
> (In reply to kemi from comment #3)
> > (In reply to Carlos O'Donell from comment #1)
> > > (In reply to H.J. Lu from comment #0)
> > > > Created attachment 11437 [details]
> > > > A workload with pthread_spin_lock
> > > > 
> > > > This workload with pthread_spin_lock shows that:
> > > 
> > > ... what is the solution? Implement MCS locks in userspace using per-cpu
> > > data?
> > 
> > We have tried an MCS approach with per-thread data, and it reduces spinlock
> > overhead a lot under heavy lock contention, but it requires one more pointer
> > in pthread_spinlock_t, which would break the existing ABI. I wonder if that
> > is acceptable?
> > 
> > Hongjiu has another way for the same purpose.
> 
> We can't break the existing ABI. However, we can restructure the entire
> spinlock internally to try to make better use of the existing size. I'm
> looking forward to seeing what patches are posted for this.

Agreed!
We are trying another approach, similar to the kernel's qspinlock mechanism.
The challenge is that I have not found a nice way to get the tail node's info
when a new spinner is added to the queue. I wonder whether glibc has a
mechanism similar to the kernel's per-cpu variables.
Any suggestions? Thanks.
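
For context, the textbook MCS lock we experimented with spins on a per-thread
queue node and stores a pointer to the queue tail in the lock word itself;
that tail pointer is exactly the extra word which does not fit in the existing
32-bit pthread_spinlock_t. A minimal sketch using C11 atomics (purely
illustrative, not the actual patch; TLS plays the role of the kernel's
per-cpu data):

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct mcs_node
{
  struct mcs_node *_Atomic next;
  atomic_bool locked;            /* true while this thread must spin */
};

/* One queue node per thread.  */
static __thread struct mcs_node mcs_self;

void
mcs_lock (struct mcs_node *_Atomic *tail)
{
  struct mcs_node *node = &mcs_self;
  atomic_store_explicit (&node->next, NULL, memory_order_relaxed);
  atomic_store_explicit (&node->locked, true, memory_order_relaxed);

  /* Atomically make ourselves the new tail; the old tail, if any,
     is our predecessor in the queue.  */
  struct mcs_node *prev = atomic_exchange (tail, node);
  if (prev != NULL)
    {
      atomic_store_explicit (&prev->next, node, memory_order_release);
      /* Spin on our own cache line only, so contention does not
         bounce the lock word across NUMA nodes.  */
      while (atomic_load_explicit (&node->locked, memory_order_acquire))
        ;
    }
}

void
mcs_unlock (struct mcs_node *_Atomic *tail)
{
  struct mcs_node *node = &mcs_self;
  struct mcs_node *next
    = atomic_load_explicit (&node->next, memory_order_acquire);
  if (next == NULL)
    {
      /* No visible successor: try to swing the tail back to empty.  */
      struct mcs_node *expected = node;
      if (atomic_compare_exchange_strong (tail, &expected, NULL))
        return;
      /* A successor is in the middle of enqueueing; wait for it to
         publish its node.  */
      while ((next = atomic_load_explicit (&node->next,
                                           memory_order_acquire)) == NULL)
        ;
    }
  atomic_store_explicit (&next->locked, false, memory_order_release);
}

The unlock path hands the lock directly to the successor's node, so each
waiter only ever spins on memory it owns; the cost is that the lock word
must hold a full pointer, hence the ABI problem.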

Some pseudo-code:

typedef union
{
  struct
  {
    int locked : 8;  // bit lock
    int tid : 24;    // thread-specific info of the queue tail
  };
  int lock;
} pthread_spinlock_t;

struct qnode
{
  struct qnode *next;
  uint8_t wait;      // local waiting flag
};

static __thread struct qnode qnode;  // one queue node per thread

// Get the tail node info.
tid = GET_LAST_THREAD_INFO (lock);  // read the low 24 bits of the lock word
tail = GET_TLS (&qnode, tid);       // look up the tail thread's qnode from
                                    // its tid and the TLS offset of qnode
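
As for emulating the kernel's per-cpu lookup in userspace: one possibility is
a process-global registration table that maps the small tid stored in the
lock word to the address of that thread's TLS qnode. The sketch below is only
that idea spelled out; MAX_SPINNERS, qnode_table, and get_my_tid are invented
names, not glibc internals, and real code would also have to recycle tids at
thread exit:

#include <stdatomic.h>
#include <stdint.h>

#define MAX_SPINNERS (1 << 16)  /* must fit in the 24 tid bits */

struct qnode
{
  struct qnode *next;
  uint8_t wait;                 /* local waiting flag */
};

static __thread struct qnode qnode;
static __thread uint32_t my_tid;    /* 0 means not registered yet */

/* tid -> TLS qnode address, filled in lazily by each thread.  */
static struct qnode *_Atomic qnode_table[MAX_SPINNERS];
static atomic_uint tid_counter = 1; /* tid 0 is reserved: lock is free */

static uint32_t
get_my_tid (void)
{
  if (my_tid == 0)
    {
      my_tid = atomic_fetch_add (&tid_counter, 1);
      atomic_store_explicit (&qnode_table[my_tid], &qnode,
                             memory_order_release);
    }
  return my_tid;
}

/* The GET_TLS () of the pseudo-code above: recover the tail thread's
   qnode from the tid bits of the lock word.  */
static struct qnode *
qnode_of_tid (uint32_t tid)
{
  return atomic_load_explicit (&qnode_table[tid], memory_order_acquire);
}

With this in place, a locker would atomically swap its own tid into the low
24 bits of the lock word, read back the previous tid, and link itself behind
qnode_of_tid (prev_tid), roughly the way the kernel's qspinlock encodes a
per-cpu index in its tail bits.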

