This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [PATCH v2] rwlock: Fix explicit hand-over.


On Mon, 2017-03-27 at 12:09 -0400, Waiman Long wrote:
> On 03/25/2017 07:01 PM, Torvald Riegel wrote:
> > On Sat, 2017-03-25 at 21:17 +0100, Florian Weimer wrote:
> >> * Torvald Riegel:
> >>
> >>> +  bool registered_while_in_write_phase = false;
> >>>    if (__glibc_likely ((r & PTHREAD_RWLOCK_WRPHASE) == 0))
> >>>      return 0;
> >>> +  else
> >>> +    registered_while_in_write_phase = true;
> >> Sorry, this doesn't look quite right.  Isn't
> >> registered_while_in_write_phase always true?
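[A stand-alone reduction of the quoted hunk (hypothetical names and values, not the actual glibc code) makes Florian's point concrete: the only path on which the flag stays false returns immediately, so everything after the conditional runs with the flag already set to true.]

#include <stdbool.h>
#include <stdio.h>

#define WRPHASE 0x1   /* hypothetical stand-in for PTHREAD_RWLOCK_WRPHASE */

static int
example (unsigned int r)
{
  bool registered_while_in_write_phase = false;
  if ((r & WRPHASE) == 0)
    return 0;
  else
    registered_while_in_write_phase = true;
  /* Reaching this point implies the flag is true, which is why the
     variable, as written, carries no information.  */
  return registered_while_in_write_phase ? 1 : 2;
}

int
main (void)
{
  printf ("%d %d\n", example (0), example (WRPHASE));
  return 0;
}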
> > Attached is a v2 patch.  It's the same logic, but bigger.  Most of this
> > increase is due to reformatting, but I also adapted some of the
> > comments.
> > I get two failures, but I guess these are due either to the bad
> > internet connectivity I currently have or to something on the
> > resolver side.
> > FAIL: resolv/mtrace-tst-leaks
> > FAIL: resolv/tst-leaks
> >
> >
> I have verified that the v2 patch did fix the hang that I saw with my
> microbenchmark. I also observed a performance increase with the new
> rwlock code compared with the old code from before the major rewrite.

Thanks!

> On a
> 4-socket, 40-core, 80-thread system, 80 parallel locking threads had an
> average per-thread throughput of 32,584 ops/s, whereas the old rwlock
> code managed only 13,411 ops/s. So the new code is roughly 2.4X faster,
> i.e., a more than 1.4X increase in performance.

Is that with the 50% reads / 50% writes workload (per thread), empty
critical sections, and no delay between critical sections?
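For reference, the workload being asked about can be sketched as the
per-thread loop below: alternating read and write acquisitions (50%/50%)
of one shared pthread_rwlock_t, an empty critical section, and no delay
between iterations. This is only a hypothetical reconstruction of such a
microbenchmark, not Waiman's actual code; the timing needed to report
ops/s is omitted for brevity (compile with -pthread).

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 8          /* assumed thread count; adjust to the machine */
#define ITERATIONS 1000000  /* assumed number of lock operations per thread */

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;

/* Per-thread loop: alternate read and write acquisitions (50%/50%),
   empty critical sections, no delay between critical sections.  */
static void *
worker (void *arg)
{
  (void) arg;
  for (long i = 0; i < ITERATIONS; i++)
    {
      if (i & 1)
        pthread_rwlock_rdlock (&lock);
      else
        pthread_rwlock_wrlock (&lock);
      /* Empty critical section.  */
      pthread_rwlock_unlock (&lock);
    }
  return NULL;
}

int
main (void)
{
  pthread_t threads[NTHREADS];
  for (int i = 0; i < NTHREADS; i++)
    if (pthread_create (&threads[i], NULL, worker, NULL) != 0)
      abort ();
  for (int i = 0; i < NTHREADS; i++)
    pthread_join (threads[i], NULL);
  printf ("done: %d threads x %d ops each\n", NTHREADS, ITERATIONS);
  return 0;
}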

