This is the mail archive of the libc-ports@sources.redhat.com mailing list for the libc-ports project.
Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
- From: Chris Metcalf <cmetcalf at tilera dot com>
- To: Maxim Kuvyrkov <maxim_kuvyrkov at mentor dot com>
- Cc: "Joseph S. Myers" <joseph at codesourcery dot com>, GLIBC Devel <libc-alpha at sourceware dot org>, <libc-ports at sourceware dot org>
- Date: Thu, 14 Jun 2012 08:39:16 -0400
- Subject: Re: [PATCH] Optimize libc_lock_lock for MIPS XLP.
- References: <FC4EF172-B43E-4298-A2E9-681FA28650DB@mentor.com>
On 6/14/2012 1:03 AM, Maxim Kuvyrkov wrote:
> These two patches (libc part and ports part) optimize the libc_lock_lock() macro that GLIBC uses internally for locking, to take advantage of the fetch_and_add instruction that is available as an extension on certain processors, e.g., the MIPS-architecture XLP.
>
> The libc_lock_lock macros implement a boolean lock: 0 corresponds to the unlocked state and non-zero corresponds to the locked state.
Just to be clear, if you put this comment somewhere when you commit, you
should say that locks are tristate: 0 is unlocked, 1 is locked and
uncontended, and >1 is locked and contended.
> It is, therefore, possible to use fetch_and_add semantics to acquire the lock in libc_lock_lock. For XLP this translates to a single LDADD instruction. This optimization benefits architectures that can perform fetch_and_add faster than compare_and_exchange; such a situation is indicated by defining the new macro "lll_add_lock".
>
> The unlocking counterpart doesn't require any change, as it already uses a plain atomic_exchange operation, which, incidentally, is also supported on XLP as a single instruction.
This seems like it would work well for a single thread acquiring the lock,
but I have some questions about it in the presence of multiple threads
trying to acquire the lock.
First, the generic __lll_lock_wait() code assumes the contended value is
exactly "2". So if two or more threads try to acquire the lock and
fail, the value will be >2. This will cause the waiters to busywait,
spinning on atomic exchange instructions, rather than calling into
futex_wait(). I think it might be possible to change the generic code to
support the more general ">1" semantics of contended locks, but it might be
a bit less efficient, so you might end up wanting to provide overrides for
these functions on MIPS. Even on MIPS it might result in a certain amount
of spinning since you'd have to hit the race window correctly to feed the
right value of the lock to futex_wait.
Second, if a lock is held long enough for 4 billion threads to try to
acquire it and fail, the 32-bit count wraps around and you end up with
an unlocked lock. :-) I'm not sure how likely this is, but it is a
potential issue. You might
consider, for example, doing a cmpxchg on the contended-lock path to try to
reset the lock value back to 2 again; if it fails, it's not a big deal,
since statistically I would expect the occasional thread to succeed, which
is all you need.
--
Chris Metcalf, Tilera Corp.
http://www.tilera.com