This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
Re: [PATCH 3/3] [PowerPC] Fix debug register issues in ppc-linux-nat
- From: Pedro Franco de Carvalho <pedromfc at linux dot ibm dot com>
- To: Ulrich Weigand <uweigand at de dot ibm dot com>
- Cc: gdb-patches at sourceware dot org
- Date: Thu, 08 Aug 2019 17:27:29 -0300
- Subject: Re: [PATCH 3/3] [PowerPC] Fix debug register issues in ppc-linux-nat
- References: <20190808162423.C889CD802EF@oc3748833570.ibm.com>
"Ulrich Weigand" <uweigand@de.ibm.com> writes:
> This looks generally good to me, just two questions:
>
> - As mentioned in the 1/3 patch, why do you need the low_new_clone
> callback? As I understand it, you'll get low_new_thread called
> immediately afterwards, which will mark the thread as "stale",
> and once it is scheduled again, all debug regs will be set up
> from scratch anyway ...
The reason I did this is so that we have the lwp object of the parent
thread and can copy the correct debug register state from it. The
arguments to low_new_thread don't include the parent. I think other
targets always know how to clear all the debug registers without keeping
track of anything, but we need to know which slots might already be
installed in a new thread.
Another reason is that add_lwp (and therefore low_new_thread) is also
called in cases other than a ptrace clone event.
One alternative solution is to use low_new_thread and iterate through
all the known lwps in the same thread group, and copy state of installed
slots (m_installed_hw_bps) from all threads to the state for the new
thread. This should be sufficient for low_prepare_to_resume, since we
just delete every slot ignoring ENOENT errors there. Wold something
like this make sense? However, I'm not sure if this is robust enough to
work even when add_lwp is used in other cases.
> - We currently do not support hardware watchpoints in gdbserver,
> even though we really should. Ideally, the low-level code to
> handle debug regs should be shared between gdb and gdbserver,
> as is done e.g. on x86. Now, I'm not saying that handling
> gdbserver is a pre-req for this patch (fixing GDB first is of
> course fine!), but I'm wondering if it would make sense, given
> that you're refactoring a lot of this code anyway, to think
> about whether this setup would help or hinder a future merge
> with gdbserver.
Ok, I'll review this and see if this can be easily ported to gdbserver.
Thanks!
--
Pedro Franco de Carvalho