This is the mail archive of the mailing list for the glibc project.

Re: Synchronizing auxiliary mutex data

On Tue, 20 Jun 2017, Andreas Schwab wrote:

> On Jun 19 2017, Torvald Riegel <> wrote:
> > __owner accesses need to use atomics (which they don't, currently).
> Does that mean mutexes are broken right now?

Plain accesses to fields like __data.__owner are fine as long as they all occur
within critical sections set up by LLL_MUTEX_{LOCK,UNLOCK}, but some occur
outside of them. For example, in nptl/pthread_mutex_lock.c:

  95   else if (__builtin_expect (PTHREAD_MUTEX_TYPE (mutex)
  96                              == PTHREAD_MUTEX_RECURSIVE_NP, 1))
  97     {
  98       /* Recursive mutex.  */
  99       pid_t id = THREAD_GETMEM (THREAD_SELF, tid);
 101       /* Check whether we already hold the mutex.  */
 102       if (mutex->__data.__owner == id)
 103         {
 104           /* Just bump the counter.  */
 105           if (__glibc_unlikely (mutex->__data.__count + 1 == 0))
 106             /* Overflow of the counter.  */
 107             return EAGAIN;
 109           ++mutex->__data.__count;
 111           return 0;
 112         }
 114       /* We have to get the mutex.  */
 115       LLL_MUTEX_LOCK (mutex);
 117       assert (mutex->__data.__owner == 0);

AFAICT the access at line 102 can invoke undefined behavior due to a data race.

In practice I think it works fine because neither the compiler nor the hardware
tears the load. I think that in this specific example the no-tearing guarantee
is sufficient: if this thread has never owned this mutex, it cannot observe
__owner == id; and if it is no longer the owner, then it stored 0 to __owner
during unlock, followed by a release barrier, and will now observe the result
of that store or of any later store.

