This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PING][PATCH v3] Add pretty printers for the NPTL lock types


On Wed, 2015-11-18 at 15:55 -0300, Martin Galvan wrote:
> On Wed, Nov 18, 2015 at 3:29 PM, Torvald Riegel <triegel@redhat.com> wrote:
> > I haven't checked consistency with glibc's implementation in detail.  I
> > really think we should have tests for that instead of having to do that
> > manually.
> 
> I looked at how the glibc tests are implemented, and they all seem to
> be written strictly in C. Tom said in a previous e-mail that the
> libstdc++ printers have unit tests which run gdb and so on, but they
> seem to use DejaGNU, which I didn't see in glibc. FWIW I'd rather not
> write TCL and instead use something like PExpect, but that's just me.
> What do you suggest?

I have no concrete suggestions right now.  Is there a stub that could be
used to avoid having to test through gdb?
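One possible shape for such a stub, purely as a sketch: a tiny stand-in
for gdb.Value that supports the dict-style field access the printers
use, so the formatting logic can be unit-tested without spawning gdb.
The real gdb.Value API is much richer, and the field names below are
illustrative, not glibc's actual layout.

```python
# Minimal fake gdb.Value (a sketch only): it mimics just the
# subscript access, value['field'], that pretty printers rely on.
class FakeValue:
    def __init__(self, fields):
        self._fields = fields

    def __getitem__(self, name):
        return self._fields[name]

# A pthread_mutex_t-like structure with a hypothetical layout.
mutex = FakeValue({'__data': FakeValue({'__lock': 1, '__owner': 42})})
lock_word = mutex['__data']['__lock']  # what a printer would read
```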

> > There is still a good amount of glibc implementation internals mirrored
> > in the pretty printers implementation.  It would be nice if this would
> > not be the case, but I haven't thought about whether/what might improve
> > this.
> 
> I don't think that's possible. By definition, the printers need to
> understand how the NPTL data structures work in order to print useful
> information about them.

There's still a question of how and where that implementation knowledge
is encapsulated.  For example, glibc could potentially provide pretty
printers by itself close to the actual implementation, or have more
helper functions that serve as getters for the attributes you test
(e.g., instead of having the printer assume that a nonzero flag means
the mutex is process-shared).  But I'm not sure whether we want to go
that way.
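To make the getter idea concrete, here is a hedged sketch in the
printers' own Python; the flag bit and class name are hypothetical,
not glibc's actual encoding:

```python
# Hypothetical flag bit meaning "process-shared"; the real NPTL
# encoding may differ -- this only illustrates the getter idea.
HYPOTHETICAL_PSHARED_BIT = 0x80

class MutexView:
    """Wraps the raw __kind word behind question-style getters."""

    def __init__(self, kind):
        self._kind = kind

    def is_process_shared(self):
        # The printer asks a question; only this getter knows
        # which bit encodes the answer.
        return bool(self._kind & HYPOTHETICAL_PSHARED_BIT)
```

With getters like this living next to the implementation, a change to
the flag encoding would only need to touch one place.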

> >> +    def read_status(self):
> >> +        """Read the mutex's status.
> >> +
> >> +        For architectures which support lock elision, this method reads
> >> +        whether the mutex is actually locked (i.e. it may show it as unlocked
> >> +        even after calling pthread_mutex_lock).
> >
> > Is an uncommitted transaction actually visible to debuggers?  It may if
> > the debugger is injecting code into the binary, so that no interrupt or
> > context switch happens.  It may also be visible if we have something like
> > a suspendable transaction, and the debugger thinks it can peek inside of
> > txns.  I believe that currently, we can assume that uncommitted state is
> > invisible even to the debugger.
> 
> It should be, although I didn't dig that deeply into this.
> 
> > Either way, "actually locked" in the comment isn't correct.  The mutex
> > is actually locked when using lock elision, it's just that this isn't
> > reflected in memory necessarily.
> 
> Ok. Will rewrite this to "this method reads whether the mutex appears
> as locked in memory".

OK.

> >> +        if self.lock == PTHREAD_MUTEX_UNLOCKED:
> >> +            self.values.append(('Status', 'Unlocked'))
> >> +        else:
> >> +            if self.lock & FUTEX_WAITERS:
> >> +                self.values.append(('Status', 'Locked, possibly with waiters'))
> >> +            else:
> >> +                self.values.append(('Status', 'Locked, no waiters'))
> >
> > I think those strings should consider that there might be waiters that
> > just spin.  When waiters register, it is because they want to wait using
> > futexes.  Right now, we don't spin, but this should change eventually,
> > or is already the case for adaptive mutexes.  I suggest changing the
> > strings to something like "locked, possibly with waiters" and "locked,
> > possibly no waiters" to reflect that.
> 
> I will, although maybe that could be a bit confusing for the user?

IMO, even that would be better than showing an "obviously" clear
statement that is subtly wrong.  What I want to avoid is bug reports
where people say that other threads don't try to acquire the lock
according to gdb although they do.
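The resulting decode logic might look like the following sketch.
Taking PTHREAD_MUTEX_UNLOCKED as 0 and FUTEX_WAITERS as the kernel's
0x80000000 bit are assumptions here, not a statement of glibc's exact
constants:

```python
PTHREAD_MUTEX_UNLOCKED = 0
FUTEX_WAITERS = 0x80000000  # bit set once a waiter registers on the futex

def mutex_status(lock_word):
    """Describe a mutex __lock word, hedging about waiters: spinning
    waiters never set FUTEX_WAITERS, so its absence proves nothing."""
    if lock_word == PTHREAD_MUTEX_UNLOCKED:
        return 'Unlocked'
    if lock_word & FUTEX_WAITERS:
        return 'Locked, possibly with waiters'
    return 'Locked, possibly no waiters'
```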

> >> +        self.values.append(('Threads waiting for this condvar',
> >> +                            self.nwaiters >> COND_NWAITERS_SHIFT))
> >
> > Generally, the same applies as for mutexes.  However, it's not the case
> > in the current condvar implementation, and we'll replace that before we
> > add any substantial spinning.
> 
> Should I rewrite this as "Threads possibly waiting for this condvar"?

No, in that particular case it's fine for now.  Once we change the
implementation, the data structure will change too, so we'll have to
revisit the condvar pretty printer anyway.  It's different for the
mutexes, where we'll likely do more spinning without changing the data.
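For reference, the old condvar's accounting as the printer reads it,
sketched with COND_NWAITERS_SHIFT assumed to be 1 (i.e. assuming the
low bit of __nwaiters holds something other than the waiter count,
such as the clock ID):

```python
COND_NWAITERS_SHIFT = 1  # assumed: low bit of __nwaiters is not a count

def waiting_threads(nwaiters):
    # Drop the flag bit to recover the actual number of waiters.
    return nwaiters >> COND_NWAITERS_SHIFT
```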

> >> +-- These values are hardcoded as well:
> >> +-- Value of __mutex for shared condvars.
> >> +PTHREAD_COND_SHARED             ~0l
> >
> > Will this do the right thing for 32b, 64b, and x32?
> 
> It should. After all, those values are hardcoded just like that in the
> NPTL code, which is compiled the same way as the output of
> gen-py-const.awk.

Ah, I see; so gen-py-const.awk will transform that into a specific
number (without any maximum size), and Python will compare against that
very same number, right?
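A sketch of why the comparison would be size-agnostic, assuming
gen-py-const.awk has the target C compiler evaluate the expression:
~0l folds to the signed value -1, and Python integers have no fixed
width, so a __mutex field read from gdb as a signed long compares
equal on 32-bit, 64-bit, and x32 alike:

```python
# What "~0l" becomes once the C compiler folds it: -1, with no
# maximum size on the Python side (Python ints are arbitrary precision).
PTHREAD_COND_SHARED = -1

def is_shared_condvar(mutex_field):
    # mutex_field is assumed to have been read from gdb as a
    # *signed* long; an unsigned read would break this comparison.
    return mutex_field == PTHREAD_COND_SHARED
```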

