[PATCH v2 03/19] nptl: Handle robust PI mutexes for !__ASSUME_SET_ROBUST_LIST
Florian Weimer
fweimer@redhat.com
Thu Aug 26 09:42:14 GMT 2021
* Adhemerval Zanella:
> Robust PI mutexes are signaled by setting the LSB of the list pointer
> to 1, so the code must take this into consideration before accessing
> the __pthread_mutex_s.
>
> The code is also simplified: the initialization code is not really
> required, PD->robust_head.list and PD->robust_list.__next are
> essentially the same regardless of __PTHREAD_MUTEX_HAVE_PREV, the futex
> wake is optimized to be issued only when required, and the futex shared
> bit is set only when required.
Is this a user-visible bug? Should it have a bug reference?
> diff --git a/nptl/pthread_create.c b/nptl/pthread_create.c
> index d8ec299cb1..08e5189ad6 100644
> --- a/nptl/pthread_create.c
> +++ b/nptl/pthread_create.c
> @@ -486,35 +486,36 @@ start_thread (void *arg)
> exit (0);
>
> #ifndef __ASSUME_SET_ROBUST_LIST
> - /* If this thread has any robust mutexes locked, handle them now. */
> -# if __PTHREAD_MUTEX_HAVE_PREV
> - void *robust = pd->robust_head.list;
> -# else
> - __pthread_slist_t *robust = pd->robust_list.__next;
> -# endif
> - /* We let the kernel do the notification if it is able to do so.
> - If we have to do it here there for sure are no PI mutexes involved
> - since the kernel support for them is even more recent. */
> - if (!__nptl_set_robust_list_avail
> - && __builtin_expect (robust != (void *) &pd->robust_head, 0))
> +  /* We let the kernel do the notification if it is able to do so on the exit
> +     syscall.  Otherwise we need to handle it before the thread terminates.  */
> + void **robust;
> + while ((robust = pd->robust_head.list)
> + && robust != (void *) &pd->robust_head)
> {
> - do
> + /* Note: robust PI futexes are signaled by setting bit 0. */
> + void **robustp = (void **) ((uintptr_t) robust & ~1UL);
> +
> + struct __pthread_mutex_s *mtx = (struct __pthread_mutex_s *)
> + ((char *) robustp - offsetof (struct __pthread_mutex_s,
> + __list.__next));
> + unsigned int nusers = mtx->__nusers;
> + int shared = mtx->__kind & 128;
> +
> + pd->robust_head.list_op_pending = robust;
> + pd->robust_head.list = *robustp;
> + /* Although the list will not be changed at this point, it follows the
> + expected kernel ABI. */
> + __asm ("" ::: "memory");
> +
> + int lock = atomic_exchange_relaxed (&mtx->__lock, FUTEX_OWNER_DIED);
> + /* Wake any users if mutex is acquired with potential users. */
> + if (lock > 1 || nusers != 0)
Why the check for nusers? Isn't that racy?
Thanks,
Florian