This is the mail archive of the mailing list for the glibc project.


Re: [PATCH][BZ 18743] PowerPC: Fix a race condition when eliding a lock

On 30-07-2015 23:06, Tulio Magno Quites Machado Filho wrote:
> Adhemerval Zanella <> writes:
>>> The previous code evaluated the preprocessor token is_lock_free into a
>>> variable before starting a transaction.  This behavior can cause an
>>> error if another thread got the lock (without using a transaction)
>>> between the conversion of the token and the beginning of the transaction.
>>> This patch delays the evaluation of is_lock_free to inside a transaction
>>> by moving this part of the code to the macro ELIDE_LOCK.
>> Could you elaborate more on why this is a race condition on the is_lock_free
>> variable? My understanding is that 'is_lock_free' is checked only after the
>> transaction has already started, and it should be safe since 'tbegin.' creates
>> a memory barrier that immediately precedes the transaction (ISA 2.07, Book II,
>> Chapter 1.8):
> I completely agree with you here.  The problem is just where the memory access
> happens.
> And we're dealing with 2 different problems related to the memory access.
> Let me use your example to explain the first one:
>>  40       if (__builtin_tbegin (0))
>>  41         {
>>  42           if (is_lock_free)
> If we consider that the memory access happens in this line, there wouldn't be
> a problem.  But in this context, is_lock_free is just an int and not the lock
> struct we need to access.
> In this context, is_lock_free is defined here:
>   29 static inline bool
>   30 __elide_lock (uint8_t *adapt_count, int is_lock_free)
> Keep in mind this macro:
>   69 # define ELIDE_LOCK(adapt_count, is_lock_free) \
>   70   __elide_lock (&(adapt_count), is_lock_free)
> Using pthread_rwlock_rdlock as an example, is_lock_free would be a preprocessor
> token with the following contents:
>  128   if (ELIDE_LOCK (rwlock->__data.__rwelision,
>  129                   rwlock->__data.__lock == 0
>  130                   && rwlock->__data.__writer == 0
>  131                   && rwlock->__data.__nr_readers == 0))
> And this is where the important memory access happens.
> If one of these fields is changed by another thread after this point, but
> before __builtin_tbegin, we're able to reproduce the first problem I mentioned
> previously.

My understanding is these fields will be changed only in the lock-taken path.  Let's
say a thread t1 grabs the lock (either via a transaction or via the default lock)
and modifies any 'rwlock->__data' fields after the ELIDE_LOCK check but before
the transaction begins on a second thread t2.  My understanding of the issue you
are seeing is that the transaction will start and, since is_lock_free is 1, it won't
abort and will continue as if the lock were taken in both threads.

However, my understanding is the transaction won't be able to commit, since it
will have a conflict on 'rwlock->__data.lock' or on any memory accessed in the
critical region.  It will then fall back to either retrying the transaction or
taking the normal lock.  Are you saying that the transaction is not being aborted
in the critical region?
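The retry-then-fall-back behavior being described has roughly the shape below, with the HTM builtins replaced by a mock so it runs anywhere.  In the real code the transaction fails via a hardware abort on a conflicting store to the lock word; here the mock simply fails while the lock is held.  Names and the retry count are illustrative:

```c
#include <stdbool.h>

static int lock_word;              /* 0 = free, 1 = held */
static const int try_tbegin = 3;   /* retry budget, in the spirit of
                                      __elision_aconf.try_tbegin */

/* Mock of a transaction attempt: a real tbegin/tend pair would abort
   on a conflicting store to lock_word; the mock just fails while the
   lock is held.  */
static bool
mock_transaction (void)
{
  return lock_word == 0;
}

/* Returns true if the lock was elided, false if we fell back to
   taking the normal lock.  */
static bool
elide_or_lock (void)
{
  for (int i = try_tbegin; i > 0; i--)
    if (mock_transaction ())
      return true;             /* ran the critical section transactionally */
  lock_word = 1;               /* fallback: take the real lock */
  return false;
}
```

The point of the question above is that even if a transaction starts with a stale is_lock_free value, a conflicting access inside the critical region should still abort it, landing it back in this retry/fallback loop.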


>> So if the transaction fails it will just jump to the else block.
>>> +    else								\
>>> +      for (int i = __elision_aconf.try_tbegin; i > 0; i--)		\
>>> +	{								\
>>> +	  asm volatile("" ::: "memory");				\
>> I can't really understand the requirement of this compiler barrier here.  If the
>> compiler is moving the 'is_lock_free' access *before* __builtin_tbegin, IMHO
>> this is a compiler bug.
> This is the second problem and I agree with you again.  IMHO, __builtin_tbegin
> starts a critical path and the compiler should not move memory accesses out of
> the critical path or into the critical path.
> However, as the current implementations of GCC have this "issue" (some people
> may not agree with us), I believe we could carry this compiler barrier at least
> until all GCC versions supporting __builtin_tbegin are fixed.
> I'm planning to report this to the GCC community.

If the compiler is reordering memory accesses around the HTM builtins it is
*clearly* a bug, since transaction begin/end act as a full memory barrier (sync).
You should add the compiler barrier in the pthread elision code as well.
