This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH][BZ 18743] PowerPC: Fix a race condition when eliding a lock
- From: Adhemerval Zanella <adhemerval dot zanella at linaro dot org>
- To: libc-alpha at sourceware dot org
- Date: Thu, 30 Jul 2015 15:43:09 -0300
- Subject: Re: [PATCH][BZ 18743] PowerPC: Fix a race condition when eliding a lock
- References: <1438274936-26493-1-git-send-email-tuliom at linux dot vnet dot ibm dot com>
Hi Tulio,
Some comments below:
On 30-07-2015 13:48, Tulio Magno Quites Machado Filho wrote:
> Note: this patch is updating the NEWS section of glibc 2.22 because that's the
> only one available right now. Whether this patch will end up in glibc 2.23
> or 2.22 has to be defined.
>
> 8<----
>
> The previous code used to evaluate the preprocessor token is_lock_free to
> a variable before starting a transaction. This behavior can cause an
> error if another thread got the lock (without using a transaction)
> between the conversion of the token and the beginning of the transaction.
>
> This patch delays the evaluation of is_lock_free to inside a transaction
> by moving this part of the code to the macro ELIDE_LOCK.
Could you elaborate on why this is a race condition on the is_lock_free
variable? My understanding is that 'is_lock_free' is checked only after the
transaction has already started, and that should be safe since 'tbegin.'
creates a memory barrier that immediately precedes the transaction
(ISA 2.07, Book II, Chapter 1.8):
  40   if (__builtin_tbegin (0))
  41     {
  42       if (is_lock_free)
  43         return true;
  44       /* Lock was busy.  */
  45       __builtin_tabort (_ABORT_LOCK_BUSY);
So if the transaction fails, execution just falls through to the else block.
> + else \
> + for (int i = __elision_aconf.try_tbegin; i > 0; i--) \
> + { \
> + asm volatile("" ::: "memory"); \
I can't really understand the requirement for this compiler barrier here. If
the compiler is moving the 'is_lock_free' read to *before* __builtin_tbegin,
IMHO that is a compiler bug.
> + if (__builtin_tbegin (0)) \
> + { \
> + if (is_lock_free) \
> + { \
> + ret = 1; \
> + break; \
> + } \
> + __builtin_tabort (_ABORT_LOCK_BUSY); \
> + } \
> + else \
> + if (!__get_new_count(&adapt_count)) \
> + break; \
> + } \
> + ret; \
> + })
>
> # define ELIDE_TRYLOCK(adapt_count, is_lock_free, write) \
> - __elide_trylock (&(adapt_count), is_lock_free, write)
> + ({ \
> + int ret = 0; \
> + if (__elision_aconf.try_tbegin > 0) \
> + { \
> + if (write) \
> + __builtin_tabort (_ABORT_NESTED_TRYLOCK); \
> + ret = ELIDE_LOCK (adapt_count, is_lock_free); \
> + } \
> + ret; \
> + })
>
>
> static inline bool
>