[PATCH 3/3] [PowerPC] Fix debug register issues in ppc-linux-nat

Pedro Franco de Carvalho pedromfc@linux.ibm.com
Thu Aug 8 20:27:00 GMT 2019


"Ulrich Weigand" <uweigand@de.ibm.com> writes:

> This looks generally good to me, just two questions:
>
> - As mentioned in the 1/3 patch, why do you need the low_new_clone
>   callback?  As I understand it, you'll get low_new_thread called
>   callback?  As I understand it, you'll get low_new_thread called
>   immediately afterwards, which will mark the thread as "stale",
>   and once it is scheduled again, all debug regs will be set up
>   from scratch anyway ...

The reason I did this is to have the lwp object of the parent thread
available, so that we can copy the correct debug register state.  The
arguments for low_new_thread don't include the parent.  I think other
targets always know how to clear all the debug registers without keeping
track of anything, but we need to know which slots might already be
installed in a new thread.

Another reason is that add_lwp (and therefore low_new_thread) is also
called in cases other than a ptrace clone event.
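To make the intent concrete, here is a minimal sketch of what the
low_new_clone callback buys us.  The names (arch_lwp_info,
m_installed_hw_bps) mirror the patch, but the structures and signatures
below are invented for illustration and are not the actual GDB API:

```cpp
#include <map>
#include <memory>

// Hypothetical per-lwp debug register bookkeeping, loosely modeled on
// the patch's m_installed_hw_bps.  Not the real GDB data structures.
struct hw_bp_slot { long address; int slot; };

struct arch_lwp_info
{
  // Slots already installed in this thread's debug registers.
  std::map<int, hw_bp_slot> m_installed_hw_bps;
  bool stale = true;
};

static arch_lwp_info *
get_arch_lwp_info (std::map<long, std::unique_ptr<arch_lwp_info>> &lwps,
                   long lwp_id)
{
  auto &info = lwps[lwp_id];
  if (info == nullptr)
    info = std::make_unique<arch_lwp_info> ();
  return info.get ();
}

// Unlike low_new_thread, low_new_clone receives both the parent and
// the child, so the child's state can start as a copy of the parent's
// installed slots (the kernel copies the debug registers on clone).
static void
low_new_clone (std::map<long, std::unique_ptr<arch_lwp_info>> &lwps,
               long parent_id, long child_id)
{
  arch_lwp_info *parent = get_arch_lwp_info (lwps, parent_id);
  arch_lwp_info *child = get_arch_lwp_info (lwps, child_id);

  child->m_installed_hw_bps = parent->m_installed_hw_bps;
  // Mark the child stale so the next resume re-syncs its debug
  // registers against the desired state from scratch.
  child->stale = true;
}
```

With low_new_thread alone, the parent_id argument above would simply
not be available, which is the point being made.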

One alternative solution is to use low_new_thread and iterate through
all the known lwps in the same thread group, copying the state of
installed slots (m_installed_hw_bps) from all threads to the state for
the new thread.  This should be sufficient for low_prepare_to_resume,
since we just delete every slot there, ignoring ENOENT errors.  Would
something like this make sense?  However, I'm not sure whether it is
robust enough to work even when add_lwp is used in the other cases.

> - We currently do not support hardware watchpoints in gdbserver,
>   even though we really should.  Ideally, the low-level code to
>   handle debug regs should be shared between gdb and gdbserver,
>   as is done e.g. on x86.  Now, I'm not saying that handling
>   gdbserver is a pre-req for this patch (fixing GDB first is of
>   course fine!), but I'm wondering if it would make sense, given
>   that you're refactoring a lot of this code anyway, to think
>   about whether this setup would help or hinder a future merge
>   with gdbserver.

Ok, I'll review this and see if this can be easily ported to gdbserver.

Thanks!

--
Pedro Franco de Carvalho
