This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.



[PATCH] Fix for "ex11 fails on SMP systems"


Re "Example ex11 fails on SMP systems": we found that the test

	if (self->p_nextlock != NULL)

in pthread_lock of spinlock.c was not sufficient to detect 
whether the pthread_descr was on the spinlock wait list. 
Specifically, pthread_unlock sets "self->p_nextlock = NULL" 
before restarting a suspended thread. However, p_nextlock is 
also NULL when the waiting thread is the last or only thread 
on the wait list. This false positive, in conjunction with the 
spurious_wakeup, caused a thread that was already on the wait 
list to be enqueued again. 

To address this problem I needed to guarantee that the 
->p_nextlock of any waiting thread would never be zero 
(NULL). The following patch fixes this hang.

diff -rc2P glibc-2.2.5/ChangeLog glibc-2.2.5-pthreads/ChangeLog
*** glibc-2.2.5/ChangeLog	Sun Jan 20 21:20:18 2002
--- glibc-2.2.5-pthreads/ChangeLog	Wed Apr 24 16:49:24 2002
***************
*** 1,2 ****
--- 1,9 ----
+ 2002-04-24  Steven Munroe  <sjmunroe@us.ibm.com>
+ 
+ 	* linuxthreads/spinlock.c
+ 	Fixed race conditions in spinlock.c related to "spurious_wakeups"
+ 	generated by timed rwlocks. This fixes a hang in 
+ 	linuxthreads/Examples/ex11 during make check.
+ 
  2002-01-18  Andreas Schwab  <schwab@suse.de>
  
diff -rc2P glibc-2.2.5/linuxthreads/spinlock.c glibc-2.2.5-pthreads/linuxthreads/spinlock.c
*** glibc-2.2.5/linuxthreads/spinlock.c	Wed Aug 29 21:11:19 2001
--- glibc-2.2.5-pthreads/linuxthreads/spinlock.c	Tue Apr 23 11:11:19 2002
***************
*** 88,93 ****
    spin_count = 0;
  
- again:
- 
    /* On SMP, try spinning to get the lock. */
  
--- 88,91 ----
***************
*** 117,120 ****
--- 115,120 ----
    }
  
+ again:
+ 
    /* No luck, try once more or suspend. */
  
***************
*** 133,137 ****
  
      if (self != NULL) {
!       THREAD_SETMEM(self, p_nextlock, (pthread_descr) (oldstatus & ~1L));
        /* Make sure the store in p_nextlock completes before performing
           the compare-and-swap */
--- 133,137 ----
  
      if (self != NULL) {
!       THREAD_SETMEM(self, p_nextlock, (pthread_descr) (oldstatus));
        /* Make sure the store in p_nextlock completes before performing
           the compare-and-swap */
***************
*** 217,221 ****
      }
      ptr = &(thr->p_nextlock);
!     thr = *ptr;
    }
  
--- 217,221 ----
      }
      ptr = &(thr->p_nextlock);
!     thr = (pthread_descr)((long)(thr->p_nextlock) & ~1L);
    }
  
***************
*** 229,233 ****
      thr = (pthread_descr) (oldstatus & ~1L);
      if (! __compare_and_swap_with_release_semantics
! 	    (&lock->__status, oldstatus, (long)(thr->p_nextlock)))
        goto again;
    } else {
--- 229,233 ----
      thr = (pthread_descr) (oldstatus & ~1L);
      if (! __compare_and_swap_with_release_semantics
! 	    (&lock->__status, oldstatus, (long)(thr->p_nextlock) & ~1L))
        goto again;
    } else {
***************
*** 235,239 ****
         But in this case we must also flip the least significant bit
         of the status to mark the lock as released. */
!     thr = *maxptr;
      *maxptr = thr->p_nextlock;
  
--- 235,239 ----
         But in this case we must also flip the least significant bit
         of the status to mark the lock as released. */
!     thr = (pthread_descr)((long)*maxptr & ~1L);
      *maxptr = thr->p_nextlock;
  

