This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: why does rwlock prefer readers by default?


On Sat, May 24, 2014 at 11:48:42AM -0400, Rich Felker wrote:
> On Sat, May 24, 2014 at 05:26:18PM +0200, Ondřej Bílka wrote:
> > > > As far as I know, there is no way to prefer writers but allow
> > > > recursive locking by readers (which the standard requires be allowed)
> > > > without unbounded memory usage (to track which threads already own a
> > > > read lock). The problem is that you can't distinguish a new reader
> > > > from an existing reader performing a recursive lock and thus you have
> > > > to allow both, even if a writer is waiting. Please correct me if I'm
> > > > wrong on this.
> > > 
> > > Yes, you need space to track which readers have acquired which rwlocks.
> > > Past some number of recursive locks/rwlocks, that tracking has a runtime
> > > overhead.
> > > 
> > > Irrespective of that, there's another case where we can prefer readers
> > > or writers, as I've described elsewhere in the thread.
> > 
> > That means that, above some size, you need a malloc/mmap, which complicates
> > the implementation.
> 
> It also means you need to handle the case where it fails.
> 
> > With readers we let the kernel handle that.
> 
> The kernel? There's only one futex object for an individual contended
> address, and if it can't be created, futex wait will return an error
> and then you just spin.
>
Do you have documentation for that? My point was that the futex object needs
to remember which threads to wake. Could you cause multiple threads to spin in
the current implementation? See nptl/pthread_rwlock_wrlock.c:75.
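
For reference, a minimal sketch of the pattern you describe (a hypothetical
helper, not the actual nptl code): userspace keeps only the futex word, the
kernel keeps the queue of waiters, and a failed FUTEX_WAIT just degrades into
spinning:

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <stdatomic.h>

/* Block until *ADDR changes from EXPECTED.  The kernel queues this
   thread on its internal hash bucket for ADDR, so userspace holds no
   per-waiter state.  If the wait cannot be set up, the syscall
   returns an error and the loop retries, degrading into a spin.  */
static void
wait_on_word (atomic_int *addr, int expected)
{
  while (atomic_load (addr) == expected)
    syscall (SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
}

/* Wake one waiter blocked on ADDR.  */
static void
wake_one (atomic_int *addr)
{
  syscall (SYS_futex, addr, FUTEX_WAKE, 1, NULL, NULL, 0);
}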
 
> > An argument that you need unbounded memory usage is misleading, as it is
> > bounded by the number of threads, and contention between many
> > readers/writers is a bigger issue than allocating a few extra bytes.
> 
> The number of threads is essentially unbounded (the limit is either ~500
> million or ~1 billion, I forget which).
> 
Still, each thread needs its own stack, which is at least one page.

> > Also you only need to prefer writers with high probability. A simple
> 
> This is not what the requirement says, but I agree it might be a
> reasonable trade-off.
> 
> > idea is to add a thread-local counter that counts the number of read locks
> > held across all rwlocks. When you are a reader and there is a pending
> > writer, you first check the counter and yield when it is zero. This works
> > as long as a thread does not use two rwlocks simultaneously.
> 
> I agree this works; provided your counter is 64-bit, there's no way to
> increment it so many times that it overflows.
> 
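To make the check concrete, roughly this (a sketch with a hypothetical lock
layout; the real acquisition and exclusion logic are omitted, only the yield
decision is shown):

#include <sched.h>
#include <stdatomic.h>

/* Hypothetical rwlock layout, just enough for the sketch.  */
struct my_rwlock
{
  atomic_int readers;          /* readers currently holding the lock */
  atomic_int writers_waiting;  /* writers queued on the lock */
};

/* Per-thread count of read locks held across all rwlocks.  */
static _Thread_local unsigned long held_read_locks;

static void
my_rdlock (struct my_rwlock *rw)
{
  /* A thread holding no read locks cannot be acquiring recursively,
     so it can defer to a pending writer without risk of deadlock.  */
  while (atomic_load (&rw->writers_waiting) > 0 && held_read_locks == 0)
    sched_yield ();
  atomic_fetch_add (&rw->readers, 1);
  held_read_locks++;
}

static void
my_rdunlock (struct my_rwlock *rw)
{
  held_read_locks--;
  atomic_fetch_sub (&rw->readers, 1);
}
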
> > To detect these you use a Bloom filter: split the counter into eight 8-bit
> > counters, randomly generate for each rwlock a mask of which counters it
> > uses, and yield only if all of those counters are zero.
> 
> I don't see how this is better than choosing a single-bit mask.
> Multiple bits just make it more likely to prefer a reader when you
> shouldn't.
> 
A problem is how you clear the mask's counters when you stop using one lock.

> > There is a technical problem: if a counter reaches 255 we cannot
> > increase/decrease it any more, but that should not happen in practice.
> 
> What do you do when it does?
> 
Keep it at 255; that only causes readers to be preferred for that thread.
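
Concretely, the whole scheme could look like this (a sketch with hypothetical
names; the real lock state and acquisition path are omitted):

#include <stdbool.h>
#include <stdint.h>

#define NCOUNTERS 8

/* Eight per-thread saturating 8-bit counters.  */
static _Thread_local uint8_t read_counts[NCOUNTERS];

struct my_rwlock
{
  uint8_t mask;   /* random nonzero mask chosen at lock init */
  /* ... real lock state omitted ...  */
};

/* True when the thread provably holds no read lock mapping to this
   lock's counters, so yielding to a pending writer cannot deadlock.
   A nonzero counter may be a false positive from another lock that
   shares a slot; then we conservatively prefer the reader.  */
static bool
may_yield_to_writer (const struct my_rwlock *rw)
{
  for (int i = 0; i < NCOUNTERS; i++)
    if ((rw->mask & (1u << i)) && read_counts[i] != 0)
      return false;
  return true;
}

static void
note_rdlock (const struct my_rwlock *rw)
{
  for (int i = 0; i < NCOUNTERS; i++)
    if ((rw->mask & (1u << i)) && read_counts[i] != 255)
      read_counts[i]++;        /* saturate at 255 */
}

static void
note_rdunlock (const struct my_rwlock *rw)
{
  for (int i = 0; i < NCOUNTERS; i++)
    if ((rw->mask & (1u << i)) && read_counts[i] != 255)
      read_counts[i]--;        /* a saturated counter stays stuck at 255 */
}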

