This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



Re: [PATCH] for ex11 fails on SMP systems


> On Mon, 2002-05-13 at 11:27, Steven Munroe wrote:
>
> > Re "Example ex11 fails on SMP systems" we found that the test
> >
> >          if (self->p_nextlock != NULL)
> >
> > in pthread_lock of spinlock.c was not sufficient to detect
> > whether the pthread_descr was on the spinlock wait list or
> > not. [...]
>
> I cannot see a problem from your description.  Give an example of
> problematic code paths.

There are two parts to the problem with ex11 (and timed rwlock):

Losing the wake-up signal due to early exit from pthread_lock ...

http://sources.redhat.com/ml/libc-alpha/2002-04/msg00181.html

and a circular wait list causing an infinite loop in pthread_unlock ...

http://sources.redhat.com/ml/libc-alpha/2002-04/msg00199.html

Net: rwlock.c uses spinlock.c internally to control access to its wait
queue. Both rwlock and spinlock use the same signal to wake up waiting
threads, so spinlock sometimes gets the signal intended for a timed-out
rwlock. This causes spinlock to consume a "spurious wake-up" signal,
which it should regenerate before pthread_lock exits. But a performance
improvement added for SMP (cvs version 1.23) introduced an "early
return" before the main suspend retry loop. This change bypassed the
"spurious wakeup" regeneration logic at the end of pthread_lock and
results in a hang (pthread_rwlock_timedrdlock waits indefinitely for
the signal that was swallowed by pthread_lock).
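
To make this concrete, here is a compressed sketch of the control flow
just described. It is not the actual spinlock.c source: try_acquire(),
suspend(), restart(), still_on_wait_list(), and MAX_SPIN_COUNT are
hypothetical stand-ins for the real test-and-set, signal wait/send,
and p_nextlock machinery.

  #define MAX_SPIN_COUNT 50

  extern int  try_acquire(void *lock);         /* atomic test-and-set */
  extern void suspend(void *self);             /* wait for restart signal */
  extern void restart(void *self);             /* send the restart signal */
  extern int  still_on_wait_list(void *self);  /* the p_nextlock test */

  void pthread_lock_sketch(void *lock, void *self)
  {
    int spurious_wakeup_count = 0;

  again:
    /* On SMP, try spinning to get the lock. */
    for (int i = 0; i < MAX_SPIN_COUNT; i++)
      if (try_acquire(lock))
        return;   /* the 1.23 early return: a restart() swallowed on an
                     earlier pass is never regenerated, so the rwlock
                     waiter it was meant for hangs */

    /* No luck, try once more or suspend. */
    if (!try_acquire(lock)) {
      for (;;) {
        suspend(self);                   /* may consume a rwlock restart */
        if (still_on_wait_list(self)) {  /* resume was not meant for us */
          spurious_wakeup_count++;
          continue;
        }
        break;
      }
      goto again;   /* retry re-enters the spin loop above */
    }

    /* Put back any resumes we caught that don't belong to us. */
    while (spurious_wakeup_count--)
      restart(self);
  }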

My fix was to move the label "again:" from just before:

  /* On SMP, try spinning to get the lock. */

to just after:

  /* No luck, try once more or suspend. */

This removes the "early return" from the retry loop and ensures that
"spurious wakeup" signals will be regenerated.

This fixed the "pthread_rwlock_timedrdlock" hang but exposed a deeper
bug in spinlock's wait-list handling. In pthread_lock:

  /* Suspend with guard against spurious wakeup.
     ... */

  if (!successful_seizure) {
    for (;;) {
      suspend(self);
      if (self->p_nextlock != NULL) {
        /* Count resumes that don't belong to us. */
        spurious_wakeup_count++;
        continue;
      }
      break;
    }
    goto again;
  }

  /* Put back any resumes we caught that don't belong to us. */
  while (spurious_wakeup_count--)
    restart(self);

The condition (self->p_nextlock == NULL) implies that pthread_unlock
has selected this thread for wakeup (removing this thread from the wait
list), marked the spinlock available, and signaled this thread to
resume.

But (self->p_nextlock == NULL) also marks the end of the wait list. So
the other possible condition is that this thread was the first/only
entry on the wait list (and thus at its end), is still on the wait
list, and the signal was (really) a "spurious wakeup". In this case the
retry will find the spinlock_t still busy and re-enqueue this thread on
the wait list (creating a circular list).
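
A small, self-contained illustration of that failure (plain C, with a
hypothetical descr type standing in for the glibc structures):

  #include <assert.h>
  #include <stddef.h>
  #include <stdio.h>

  struct descr { struct descr *p_nextlock; };

  int main(void)
  {
    /* self is the first and only waiter: its p_nextlock is NULL
       because NULL also marks the end of the wait list. */
    struct descr self = { NULL };
    struct descr *wait_list = &self;

    /* A spurious wakeup arrives.  The (p_nextlock != NULL) test says
       "not on the list", so the retry enqueues self a second time: */
    self.p_nextlock = wait_list;   /* link to current head: itself */
    wait_list = &self;

    /* The list now points back into itself; pthread_unlock's scan of
       such a list never terminates. */
    assert(wait_list->p_nextlock == wait_list);
    puts("wait list is circular");
    return 0;
  }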

The proposed fix leaves the low-order bit on (from newstatus =
oldstatus | 1) in the p_nextlock linked list. This ensures that
(p_nextlock != NULL) while this thread is on the wait list, and that it
becomes (p_nextlock == NULL) only when pthread_unlock has selected this
thread to resume. This required several additional changes in
pthread_unlock to ignore the extra (low-order) bit while scanning the
list.
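
A minimal, runnable sketch of the tagging idea (hypothetical descr
type and tag/untag helpers, not the actual patch):

  #include <assert.h>
  #include <stdint.h>
  #include <stdio.h>

  struct descr { struct descr *p_nextlock; };

  /* Set the low-order tag bit (the "oldstatus | 1" trick). */
  static struct descr *tag(struct descr *p)
  {
    return (struct descr *) ((uintptr_t) p | 1);
  }

  /* Strip the tag bit while scanning, as pthread_unlock must. */
  static struct descr *untag(struct descr *p)
  {
    return (struct descr *) ((uintptr_t) p & ~(uintptr_t) 1);
  }

  int main(void)
  {
    struct descr a, b;

    /* b is last on the list: even its NULL link is tagged, so the
       "on the wait list" test (p_nextlock != NULL) stays true. */
    b.p_nextlock = tag(NULL);
    a.p_nextlock = tag(&b);

    assert(a.p_nextlock != NULL && b.p_nextlock != NULL);
    assert(untag(a.p_nextlock) == &b);
    assert(untag(b.p_nextlock) == NULL);

    /* Only an explicit clear by pthread_unlock yields a true NULL. */
    b.p_nextlock = NULL;
    assert(b.p_nextlock == NULL);

    puts("tagged links stay non-NULL while queued");
    return 0;
  }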

The first patch I sent was bad ((x & ~1L) != (x & -1L)). So please use
the patch from:

http://sources.redhat.com/ml/libc-alpha/2002-05/msg00089.html
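
For reference, a two-line demonstration of why that first patch was
wrong: ~1L clears only the low-order tag bit, while -1L is all ones
and masks nothing.

  #include <stdio.h>

  int main(void)
  {
    long x = 0x1005L;  /* stand-in for a tagged p_nextlock value */

    /* ~1L is ...11110: clears just the tag bit. */
    printf("x & ~1L = %#lx\n", (unsigned long)(x & ~1L));  /* 0x1004 */

    /* -1L is all ones: x is unchanged, the tag bit survives, and the
       NULL/end-of-list tests misfire. */
    printf("x & -1L = %#lx\n", (unsigned long)(x & -1L));  /* 0x1005 */

    return 0;
  }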


