This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Unify pthread_spin_[try]lock implementations.


On 7/25/2012 4:22 PM, Roland McGrath wrote:
>> The tile architecture is unlikely to use this generic version no matter
>> what; see http://sourceware.org/ml/libc-ports/2012-07/msg00030.html for the
>> details, but the primary point is that in a mesh-based architecture it's a
>> bad idea to ever end up in a situation where all the cores can be spinning,
>> issuing loads or cmpxchg as fast as they can, so some kind of backoff is
>> necessary.
> I had read that before but only noticed the explanation that the plain
> reads were bad.  (Hence in my suggestion you'd share the code but with a
> #define that means the loop of plain reads would be elided entirely at
> compile time by constant folding.)  What kind of "backoff" do you mean?
> It's probably appropriate on every machine to use "atomic_delay ();" inside
> such loops.

Some work we did a while back found that bounded exponential delay tended
to work best for any kind of spinlock.  So if we fail to acquire the lock
the first time around, we wait a few cycles and try again, then keep
doubling the wait time up to some ceiling (around 1000 cycles or so); see
ports/sysdeps/tile/nptl/pthread_spin_lock.c.
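
Roughly, the loop looks like the following (a simplified sketch using
generic GCC builtins; the actual code in
ports/sysdeps/tile/nptl/pthread_spin_lock.c uses the cycle counter and
tile-specific atomics, so take this only as an illustration of the
bounded-exponential-backoff idea):

static void
spin_lock_backoff (int *lock)
{
  unsigned int delay = 8;                /* initial wait, a few iterations */
  const unsigned int max_delay = 1024;   /* ceiling, on the order of 1000 cycles */

  while (__sync_lock_test_and_set (lock, 1) != 0)
    {
      /* Back off before retrying so that many spinning cores do not
         saturate the memory network with atomic traffic.  */
      for (unsigned int i = 0; i < delay; i++)
        __asm__ __volatile__ ("" ::: "memory");  /* cheap delay / compiler barrier */
      if (delay < max_delay)
        delay *= 2;                      /* bounded exponential backoff */
    }
}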

This way, in the worst-case situation when you have (say) 100 cores all
trying to acquire the lock at once, you don't hose the memory network with
traffic.  This also helps somewhat to avoid unfairness where closer cores
have a dramatically better chance of acquiring the lock due to how wormhole
routing allocates links in the mesh to memory messages.

Of course, the real answer tends to be "don't use simple spinlocks", so in
the kernel, for example, we use ticket locks instead.  But with pthread
spinlocks that's not a great option, since if any thread waiting for the
lock is scheduled out for a while, no later thread can acquire the lock either.
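
For comparison, a minimal ticket lock is sketched below (again just an
illustration with GCC builtins, not the kernel implementation).  The
fairness comes from the FIFO ticket order, but that same order is what
hurts in user space: if the thread holding the next ticket is descheduled,
every thread behind it spins until it runs again.

struct ticket_lock
{
  unsigned int next;    /* next ticket to hand out */
  unsigned int owner;   /* ticket currently allowed to take the lock */
};

static void
ticket_lock_acquire (struct ticket_lock *l)
{
  /* Take a ticket; FIFO order makes the lock fair.  */
  unsigned int ticket = __sync_fetch_and_add (&l->next, 1);

  /* Spin until our ticket comes up.  If the holder of an earlier
     ticket is scheduled out, everyone behind it waits too, which is
     why this is a poor fit for pthread spinlocks.  */
  while (__atomic_load_n (&l->owner, __ATOMIC_ACQUIRE) != ticket)
    ;  /* could add backoff here as well */
}

static void
ticket_lock_release (struct ticket_lock *l)
{
  __atomic_store_n (&l->owner, l->owner + 1, __ATOMIC_RELEASE);
}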

-- 
Chris Metcalf, Tilera Corp.
http://www.tilera.com

