[PATCH 3/3] [PowerPC] Fix debug register issues in ppc-linux-nat

Ulrich Weigand uweigand@de.ibm.com
Fri Aug 9 15:28:00 GMT 2019


Pedro Franco de Carvalho wrote:
> "Ulrich Weigand" <uweigand@de.ibm.com> writes:
> > I may still be missing something, but why exactly *do* we need to know
> > which slots might already be installed?  I'd have assumed that when we
> > get to low_prepare_to_resume, and the lwp is marked stale, we just throw
> > away everything and install the complete desired state.
> 
> To throw away everything in low_prepare_to_resume, we need to know which
> slots the kernel had assigned to the debug registers we requested,
> because PPC_PTRACE_DELHWDEBUG takes the slot as an argument.  Ideally
> we'd have a ptrace call to clear all the debug register state.

Huh.  I wasn't aware we didn't have such a method, but it does appear
you're correct here.  Weird.

> I considered assuming that the kernel will always use a contiguous range
> of slots from 1 to num_instruction_bps + num_data_bps, and always
> deleting all these slots while ignoring ENODATA errors, but I'm not sure
> if this is a very robust solution.  For instance, I inspected the kernel
> code, and in embedded processors, if you set a ranged breakpoint, this
> will occupy slots 1 and 2, and PPC_PTRACE_SETHWDEBUG will return slot 1.
> You then have to use slot 1 as an argument to PPC_PTRACE_DELHWDEBUG to
> delete the ranged breakpoint.  If you try to delete slot 2 before 1,
> you'll get an EINVAL, and not an ENOENT.  If you delete 1 then 2, you'll
> get ENOENT for 2.  In fact, this case means that the solution I proposed
> in my previous reply of gathering all the slots from all threads in the
> same thread group would not work well (we could get EINVALs).

But it seems what would work reliably is to delete slots from 1 to max,
while ignoring ENOENT, right?  In fact, you don't even need to know the
max slot number, because you'll get EINVAL if and only if you're attempting
to delete the first slot after max (assuming you do it in sequence).

> >> Another reason is that add_lwp (and therefore low_new_thread) is also
> >> called in cases other than a ptrace clone event.
> >
> > Well, yes, but those cases *also* need to be handled, right?  This is
> > e.g. when you attach to an already multi-threaded process while there
> > are already watchpoints set up.  In that case, you'll need to install
> > those watchpoints into all those threads.
> 
> This should already work, since we do set the stale flag in
> low_new_thread, like in other targets.  We just don't copy any debug
> register state from other threads.  So when we next resume the newly
> attached threads, we'll install the watchpoints GDB requested.
> 
> However, I don't think that it's possible to handle cases where a
> previous tracer installed hardware breakpoints and watchpoints and then
> detached without removing them.

The above method ought to handle that as well, I think.

Bye,
Ulrich

-- 
  Dr. Ulrich Weigand
  GNU/Linux compilers and toolchain
  Ulrich.Weigand@de.ibm.com
