This is the mail archive of the libc-hacker@sources.redhat.com mailing list for the glibc project.
Note that libc-hacker is a closed list. You may look at the archives of this list, but subscription and posting are not open.
Hi!

This patch adds a wakeup of the associated mutex's waiters if there are
still condvar waiters around at pthread_cond_destroy time.  The reason is
to make sure pthread_cond_destroy won't block for too long.  If some
threads are blocked on the pthread_mutex_t's __lock, it is under
application control when (if ever) they will be woken up, and
pthread_cond_destroy would otherwise block for that whole time.  By waking
all mutex waiters we only have to wait until the scheduler gives each of
those threads enough timeslice to acquire the condvar's internal lock, hop
through the short critical section in pthread_cond_wait and release the
lock again (the last thread waking up pthread_cond_destroy).  In most
programs __nwaiters will be < (1 << COND_CLOCK_BITS) at
pthread_cond_destroy time (the low COND_CLOCK_BITS bits hold the clock ID,
so that test means no waiters are left), and thus this patch shouldn't
cause performance regressions.

2004-09-02  Jakub Jelinek  <jakub@redhat.com>

        * pthread_cond_destroy.c (__pthread_cond_destroy): If there are
        waiters, awake all waiters on the associated mutex.

--- libc/nptl/pthread_cond_destroy.c.jj	2004-09-02 22:27:56.000000000 +0200
+++ libc/nptl/pthread_cond_destroy.c	2004-09-02 23:33:43.278763736 +0200
@@ -44,15 +44,35 @@ __pthread_cond_destroy (cond)
      broadcasted, but still are using the pthread_cond_t structure,
      pthread_cond_destroy needs to wait for them.  */
   unsigned int nwaiters = cond->__data.__nwaiters;
-  while (nwaiters >= (1 << COND_CLOCK_BITS))
+
+  if (nwaiters >= (1 << COND_CLOCK_BITS))
     {
-      lll_mutex_unlock (cond->__data.__lock);
+      /* Wake everybody on the associated mutex in case there are
+         threads that have been requeued to it.
+         Without this, pthread_cond_destroy could block potentially
+         for a long time or forever, as it would depend on other
+         threads using the mutex.
+         When all threads waiting on the mutex are woken up, pthread_cond_wait
+         only waits for threads to acquire and release the internal
+         condvar lock.  */
+      if (cond->__data.__mutex != NULL
+          && cond->__data.__mutex != (void *) ~0l)
+        {
+          pthread_mutex_t *mut = (pthread_mutex_t *) cond->__data.__mutex;
+          lll_futex_wake (&mut->__data.__lock, INT_MAX);
+        }
+
+      do
+        {
+          lll_mutex_unlock (cond->__data.__lock);
 
-      lll_futex_wait (&cond->__data.__nwaiters, nwaiters);
+          lll_futex_wait (&cond->__data.__nwaiters, nwaiters);
 
-      lll_mutex_lock (cond->__data.__lock);
+          lll_mutex_lock (cond->__data.__lock);
 
-      nwaiters = cond->__data.__nwaiters;
+          nwaiters = cond->__data.__nwaiters;
+        }
+      while (nwaiters >= (1 << COND_CLOCK_BITS));
     }
 
   return 0;

	Jakub
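For illustration, a minimal sketch of the scenario the patch targets,
assuming NPTL's usual requeue-on-broadcast behaviour described above; the
thread count and the sleep below are arbitrary and error checking is
omitted:

#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t mut = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
static int ready;

static void *
waiter (void *arg)
{
  (void) arg;
  pthread_mutex_lock (&mut);
  while (!ready)
    /* Blocks here; NPTL counts this thread in cond.__data.__nwaiters.  */
    pthread_cond_wait (&cond, &mut);
  pthread_mutex_unlock (&mut);
  return NULL;
}

int
main (void)
{
  pthread_t th[4];
  for (int i = 0; i < 4; ++i)
    pthread_create (&th[i], NULL, waiter, NULL);
  sleep (1);			/* Crude: give the waiters time to block.  */

  pthread_mutex_lock (&mut);
  ready = 1;
  /* The waiters may be requeued from the condvar futex onto mut's
     internal __lock futex instead of being woken outright.  */
  pthread_cond_broadcast (&cond);

  /* POSIX allows destroying a condvar right after waking all waiters.
     Without the extra lll_futex_wake added by the patch, this call could
     sit in lll_futex_wait on __nwaiters until the requeued threads run,
     which cannot happen while this thread still holds mut.  With the
     patch, the requeued threads are woken, pass through the condvar's
     short internal critical section (decrementing __nwaiters) and only
     then block again on the user mutex, so destroy returns promptly.  */
  pthread_cond_destroy (&cond);

  pthread_mutex_unlock (&mut);
  for (int i = 0; i < 4; ++i)
    pthread_join (th[i], NULL);
  return 0;
}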