This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH v2] PowerPC: Fix a race condition when eliding a lock
- From: Adhemerval Zanella <adhemerval dot zanella at linaro dot org>
- To: Carlos O'Donell <carlos at redhat dot com>, Tulio Magno Quites Machado Filho <tuliom at linux dot vnet dot ibm dot com>, triegel at redhat dot com
- Cc: libc-alpha at sourceware dot org, munroesj at linux dot vnet dot ibm dot com
- Date: Thu, 20 Aug 2015 16:51:47 -0300
- Subject: Re: [PATCH v2] PowerPC: Fix a race condition when eliding a lock
- Authentication-results: sourceware.org; auth=none
- References: <55BAEE77 dot 9000501 at redhat dot com> <1440084054-32243-1-git-send-email-tuliom at linux dot vnet dot ibm dot com> <55D628C2 dot 3080305 at redhat dot com>
>> +
>> +/* Returns 0 if the lock defined by is_lock_free was elided.
>> + ADAPT_COUNT is a per-lock state variable. */
>> +# define ELIDE_LOCK(adapt_count, is_lock_free) \
>> + ({ \
>> + int ret = 0; \
>> + if (adapt_count > 0) \
>> + (adapt_count)--; \
>> + else \
>> + for (int i = __elision_aconf.try_tbegin; i > 0; i--) \
>> + { \
>> + if (__builtin_tbegin (0)) \
>> + { \
>> + if (is_lock_free) \
>
> This should use a relaxed MO atomic access to highlight that this data *is*
> shared between threads, but that there are no ordering guarantees. It should
> result in a normal load, but it makes it clear in the code that some other
> thread is also accessing this data, and that the transaction as a barrier
> is all that is protecting this from being a data-race (undefined behaviour).
Should we really highlight it using atomic operations? The idea is exactly
that any access within a transaction will not result in undefined behavior,
so there is no need for atomics for synchronization.
>
> Torvald made the same comment in his review, and I'm just repeating it here
> because I think it bears repeating.
>
>> + { \
>> + ret = 1; \
>> + break; \
>> + } \
>> + __builtin_tabort (_ABORT_LOCK_BUSY); \
>> + } \
>> + else \
>> + if (!__get_new_count(&adapt_count)) \
>> + break; \
>> + } \
>> + ret; \
>> + })
>>
>> # define ELIDE_TRYLOCK(adapt_count, is_lock_free, write) \
>> - __elide_trylock (&(adapt_count), is_lock_free, write)
>> + ({ \
>> + int ret = 0; \
>> + if (__elision_aconf.try_tbegin > 0) \
>> + { \
>> + if (write) \
>> + __builtin_tabort (_ABORT_NESTED_TRYLOCK); \
>> + ret = ELIDE_LOCK (adapt_count, is_lock_free); \
>> + } \
>> + ret; \
>> + })
>>
>>
>> static inline bool
>>
>
>
> Cheers,
> Carlos.
>