Re: why does rwlock prefer readers by default?
- From: Ondřej Bílka <neleai at seznam dot cz>
- To: Rich Felker <dalias at libc dot org>
- Cc: Torvald Riegel <triegel at redhat dot com>, GLIBC Devel <libc-alpha at sourceware dot org>
- Date: Sat, 24 May 2014 18:33:06 +0200
- Subject: Re: why does rwlock prefer readers by default?
- References: <1399458831 dot 32485 dot 12625 dot camel at triegel dot csb> <20140507185121 dot GZ26358 at brightrain dot aerifal dot cx> <1399493345 dot 32485 dot 15258 dot camel at triegel dot csb> <20140524152618 dot GA7647 at domone dot leoexpresswifi dot com> <20140524154842 dot GR507 at brightrain dot aerifal dot cx>
On Sat, May 24, 2014 at 11:48:42AM -0400, Rich Felker wrote:
> On Sat, May 24, 2014 at 05:26:18PM +0200, Ondřej Bílka wrote:
> > > > As far as I know, there is no way to prefer writers but allow
> > > > recursive locking by readers (which the standard requires be allowed)
> > > > without unbounded memory usage (to track which threads already own a
> > > > read lock). The problem is that you can't distinguish a new reader
> > > > from an existing reader performing a recursive lock and thus you have
> > > > to allow both, even if a writer is waiting. Please correct me if I'm
> > > > wrong on this.
> > >
> > > Yes you need space to track which readers have acquired which rwlock.
> > > At some number of recursive/rwlocks, there will be a runtime overhead.
> > >
> > > Irrespective of that, there's another case where we can prefer readers
> > > or writers, as I've described elsewhere in the thread.
> >
> > That means that you need a malloc/mmap above some size, which complicates the
> > implementation.
>
> It also means you need to handle the case where it fails.
>
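To make the complication concrete, here is a rough sketch of what per-thread
ownership tracking could look like (all names here are made up, this is not a
proposed patch): a small inline array with a malloc fallback, whose failure is
exactly the case that then has to be handled.

#include <errno.h>
#include <stdlib.h>

#define INLINE_SLOTS 4

/* Illustrative only: which rwlocks this thread holds read locks on.  */
struct held_read_locks
{
  void *inline_slots[INLINE_SLOTS];
  void **overflow;                /* allocated once the array is full */
  size_t overflow_cap;
  size_t count;
};

static __thread struct held_read_locks held;

static int
record_read_lock (void *rwlock)
{
  if (held.count < INLINE_SLOTS)
    {
      held.inline_slots[held.count++] = rwlock;
      return 0;
    }
  size_t idx = held.count - INLINE_SLOTS;
  if (idx == held.overflow_cap)
    {
      size_t ncap = held.overflow_cap ? 2 * held.overflow_cap : 8;
      void **p = realloc (held.overflow, ncap * sizeof *p);
      if (p == NULL)
        return ENOMEM;            /* the allocation failure to handle */
      held.overflow = p;
      held.overflow_cap = ncap;
    }
  held.overflow[idx] = rwlock;
  held.count++;
  return 0;
}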
> > With readers we let kernel handle that.
>
> The kernel? There's only one futex object for an individual contended
> address, and if it can't be created, futex wait will return an error
> and then you just spin.
>
Do you have documentation for that? My point was that the futex object needs
to remember which threads to wake. Could you get multiple threads spinning in
the current implementation? See nptl/pthread_rwlock_wrlock.c:75.
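As I read it, the fallback you describe is roughly the following (an
illustrative sketch only, not the actual nptl code): wait on the futex word
and fall back to yielding/spinning if the wait itself fails.

#include <errno.h>
#include <linux/futex.h>
#include <sched.h>
#include <sys/syscall.h>
#include <unistd.h>

static void
wait_on (int *addr, int expected)
{
  while (__atomic_load_n (addr, __ATOMIC_ACQUIRE) == expected)
    {
      long r = syscall (SYS_futex, addr, FUTEX_WAIT, expected, NULL, NULL, 0);
      if (r == -1 && errno != EAGAIN && errno != EINTR)
        /* The kernel could not block us; yield and retry (spin).  */
        sched_yield ();
    }
}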
> > An argument that you need unbounded memory usage is misleading, as this is
> > bounded by the number of threads, and contention between many readers/writers is a bigger
> > issue than allocating a few extra bytes.
>
> The number of threads is essentially unbounded (limit is either ~500
> million or ~1 billion, I forget which).
>
Still, each thread needs its own stack of at least one page, so available
memory bounds the number of threads well before that limit.
> > Also you only need to prefer writers with high probability. A simple
>
> This is not what the requirement says, but I agree it might be a
> reasonable trade-off.
>
> > idea is to add a thread-local counter that counts the number of read locks across
> > all rwlocks. When you are a reader and there is a pending writer, you first
> > check the counter and yield when it is zero. This works as long as a thread
> > does not use two rwlocks simultaneously.
>
> I agree this works; provided your counter is 64-bit, there's no way to
> increment it so many times that it overflows.
>
> > To detect these you use a Bloom
> > filter: split the counter into eight 8-bit counters, randomly generate for each rwlock a
> > mask of which counters it uses, and yield only if all of these counters are zero.
>
> I don't see how this is better than choosing a single-bit mask.
> Multiple bits just make it more likely to prefer a reader when you
> shouldn't.
>
A problem is how you clear the mask when you stop using one lock.
> > There is a technical problem that if a counter reaches 255 we cannot
> > decrease/increase it anymore, but that should not happen in practice.
>
> What do you do when it does?
>
Keep it at 255; it only causes that thread to prefer readers.
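In code the whole scheme would look roughly like this (a sketch with made-up
names, not a patch): note_read_lock/note_read_unlock would be called from the
rdlock/unlock paths with the mask picked for that rwlock, and
maybe_yield_to_writer from the reader slow path when a writer is queued.

#include <sched.h>
#include <stdint.h>
#include <stdlib.h>

/* Per-thread read-lock depth, split into eight 8-bit counters.  */
static __thread uint8_t read_depth[8];

/* Picked once per rwlock, e.g. at init time.  */
static uint8_t
rwlock_pick_mask (void)
{
  uint8_t m = (uint8_t) (rand () & 0xff);
  return m ? m : 1;               /* always use at least one counter */
}

/* Called after taking a read lock.  */
static void
note_read_lock (uint8_t mask)
{
  for (int i = 0; i < 8; i++)
    if ((mask & (1u << i)) && read_depth[i] < 255)
      read_depth[i]++;            /* saturate at 255, never wrap */
}

/* Called after releasing a read lock.  */
static void
note_read_unlock (uint8_t mask)
{
  for (int i = 0; i < 8; i++)
    if ((mask & (1u << i)) && read_depth[i] > 0 && read_depth[i] < 255)
      read_depth[i]--;            /* a counter stuck at 255 stays there */
}

/* Called by a reader that sees a pending writer: yield only when every
   counter in this rwlock's mask is zero, i.e. this thread cannot already
   hold a read lock on it.  */
static void
maybe_yield_to_writer (uint8_t mask)
{
  for (int i = 0; i < 8; i++)
    if ((mask & (1u << i)) && read_depth[i] != 0)
      return;                     /* possibly a recursive reader: proceed */
  sched_yield ();                 /* fresh reader: let the writer go first */
}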