This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
- From: Maxim Kuvyrkov <maxim at codesourcery dot com>
- To: Chris Metcalf <cmetcalf at tilera dot com>
- Cc: "Joseph S. Myers" <joseph at codesourcery dot com>, GLIBC Devel <libc-alpha at sourceware dot org>, <libc-ports at sourceware dot org>, Tom de Vries <vries at codesourcery dot com>
- Date: Fri, 15 Jun 2012 13:20:35 +1200
- Subject: Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
- References: <FC4EF172-B43E-4298-A2E9-681FA28650DB@mentor.com> <4FD9DB74.8080905@tilera.com>
On 15/06/2012, at 12:39 AM, Chris Metcalf wrote:
> On 6/14/2012 1:03 AM, Maxim Kuvyrkov wrote:
>> These two patches (libc part and ports part) optimize the libc_lock_lock() macro, which GLIBC uses for internal locking, to take advantage of the fetch_and_add instruction that is available as an extension on certain processors, e.g., MIPS-architecture XLP.
>>
>> The libc_lock_lock macros implement a boolean lock: 0 corresponds to the unlocked state and non-zero corresponds to the locked state.
>
> Just to be clear, if you put this comment somewhere when you commit, you
> should say locks are tristate, where 0 is unlocked, 1 is locked and
> uncontended, and >1 is locked and contended.
Right, it's all coming back now. I will update the comments to mention this. [This optimization was written around 6 months ago, and not by me. This and the points below are worth elaborating on, thanks for bringing them up.]
I've CC'ed Tom de Vries, who is the original author of the patch. Tom, please let us know if I'm misrepresenting the optimization or the rationale for its correctness.
>
>> It is, therefore, possible to use fetch_and_add semantics to acquire the lock in libc_lock_lock. For XLP this translates to a single LDADD instruction. This optimization benefits architectures that can perform fetch_and_add faster than compare_and_exchange; such a situation is indicated by defining the new macro "lll_add_lock".
>>
>> The unlocking counterpart doesn't require any change, as it already uses a plain atomic_exchange operation, which, incidentally, is also supported on XLP as a single instruction.
>
> This seems like it would work well for a single thread acquiring the lock,
> but I have some questions about it in the presence of multiple threads
> trying to acquire the lock.
>
> First, the generic __lll_lock_wait() code assumes the contended value is
> exactly "2".
Um, not exactly. __lll_lock_wait() *sets* the contended lock to a value of "2", but it will work just as well with values >2.
void
__lll_lock_wait (int *futex, int private)
{
  if (*futex == 2)
    lll_futex_wait (futex, 2, private);

  while (atomic_exchange_acq (futex, 2) != 0)
    lll_futex_wait (futex, 2, private);
}
> So if two or more threads both try and fail to acquire the
> lock, the value will be >2. This will cause the waiters to busywait,
> spinning on atomic exchange instructions, rather than calling into
> futex_wait().
As I read it, in the case of a contended lock, __lll_lock_wait() will reset the value of the lock to "2" before calling lll_futex_wait(). I agree that there is a timing window in which the other threads will see a value of the lock greater than "2", but the value will not climb into the hundreds or billions, as it is constantly reset to "2" by the atomic_exchange in __lll_lock_wait().
I do not see how threads will get into a busywait state, though. Would you please elaborate on that?
> I think it might be possible to change the generic code to
> support the more general ">1" semantics of contended locks, but it might be
> a bit less efficient, so you might end up wanting to provide overrides for
> these functions on MIPS. Even on MIPS it might result in a certain amount
> of spinning since you'd have to hit the race window correctly to feed the
> right value of the lock to futex_wait.
>
> Second, if a lock is held long enough for 4 billion threads to try to
> acquire it and fail, you will end up with an unlocked lock. :-) I'm not
> sure how likely this seems, but it is a potential issue. You might
> consider, for example, doing a cmpxchg on the contended-lock path to try to
> reset the lock value back to 2 again; if it fails, it's not a big deal,
> since statistically I would expect the occasional thread to succeed, which
> is all you need.
Thank you,
--
Maxim Kuvyrkov
CodeSourcery / Mentor Graphics