[PATCH v2 04/29] Step over clone syscall w/ breakpoint, TARGET_WAITKIND_THREAD_CLONED
Pedro Alves
pedro@palves.net
Wed Jul 13 22:24:08 GMT 2022
(A good chunk of the problem statement in the commit log below is
Andrew's, adjusted for a different solution, and for covering
displaced stepping too.)
This commit addresses bugs gdb/19675 and gdb/27830, which are both
about stepping over a breakpoint set at a clone syscall instruction;
one is about displaced stepping, and the other about in-line stepping.
Currently, when a new thread is created through a clone syscall, GDB
sets the new thread running. With 'continue' this makes sense
(assuming no schedlock):
- all-stop mode, user issues 'continue', all threads are set running,
a newly created thread should also be set running.
- non-stop mode, user issues 'continue', other pre-existing threads
are not affected, but as the new thread is (sort-of) a child of the
thread the user asked to run, it makes sense that the new thread
should be created in the running state.
Similarly, if we are stopped at the clone syscall, and there's no
software breakpoint at this address, then the current behaviour is
fine:
- all-stop mode, user issues 'stepi', stepping will be done in place
(as there's no breakpoint to step over). While stepping the thread
of interest all the other threads will be allowed to continue. A
newly created thread will be set running, and then stopped once the
thread of interest has completed its step.
- non-stop mode, user issues 'stepi', stepping will be done in place
(as there's no breakpoint to step over). Other threads might be
running or stopped, but as with the continue case above, the new
thread will be created running. The only possible issue here is
that the new thread will be left running after the initial thread
has completed its stepi. The user would need to manually select
the thread and interrupt it, which might not be what the user
expects. However, this is not something this commit tries to
change.
The problem then is what happens when we try to step over a clone
syscall if there is a breakpoint at the syscall address.
- For both all-stop and non-stop modes, with in-line stepping:
+ user issues 'stepi',
+ [non-stop mode only] GDB stops all threads. In all-stop mode all
threads are already stopped.
+ GDB removes s/w breakpoint at syscall address,
+ GDB single steps just the thread of interest, all other threads
are left stopped,
+ New thread is created running,
+ Initial thread completes its step,
+ [non-stop mode only] GDB resumes all threads that it previously
stopped.
There are two problems in the in-line stepping scenario above:
1. The new thread might pass through the same code that the initial
thread is in (i.e. the clone syscall code), in which case it will
fail to hit the breakpoint in clone, as that breakpoint was removed
so that the first thread could single-step,
2. The new thread might trigger some other stop event before the
initial thread reports its step completion. If this happens we
end up triggering an assertion as GDB assumes that only the
thread being stepped should stop. The assert looks like this:
infrun.c:5899: internal-error: int finish_step_over(execution_control_state*): Assertion `ecs->event_thread->control.trap_expected' failed.
- For both all-stop and non-stop modes, with displaced stepping:
+ user issues 'stepi',
+ GDB starts the displaced step, moves thread's PC to the
out-of-line scratch pad, maybe adjusts registers,
+ GDB single steps the thread of interest, [non-stop mode only] all
other threads are left as they were, either running or stopped.
In all-stop, all other threads are left stopped.
+ New thread is created running,
+ Initial thread completes its step, GDB re-adjusts its PC,
restores/releases scratchpad,
+ [non-stop mode only] GDB resumes the thread, now past its
breakpoint.
+ [all-stop mode only] GDB resumes all threads.
There is one problem with the displaced stepping scenario above:
3. When the parent thread completes its step, GDB adjusts its PC,
but does not adjust the child's PC, so the new child thread will
continue execution in the scratch pad, invoking undefined
behavior. If you're lucky, you see a crash; if unlucky, the
inferior gets silently corrupted.
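For illustration only (not part of this patch; the series adds a
proper testcase later), here is a minimal reproducer sketch. Each
std::thread construction below ends up in the clone syscall on
GNU/Linux, so placing a breakpoint on the clone syscall instruction
(e.g. inside the libc clone wrapper) and stepping over it repeatedly
exercises the in-line and displaced stepping scenarios above:

  // clone-step.cc: hypothetical reproducer, build with -pthread.
  #include <thread>

  static void child_fn ()
  {
    // Nothing to do; the interesting part is thread creation itself.
  }

  int main ()
  {
    for (int i = 0; i < 100; i++)
      {
        std::thread t (child_fn);  // Reaches the clone syscall.
        t.join ();
      }
    return 0;
  }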
What is needed is for GDB to have more control over whether the new
thread is created running or not. Issue #1 above requires that the
new thread not be allowed to run until the breakpoint has been
reinserted. The only way to guarantee this is if the new thread is
held in a stopped state until the single step has completed. Issue #3
above requires that GDB be informed when a thread clones itself,
and of the child's ptid, so that GDB can fix up both the parent
and the child.
When looking for solutions to this problem I considered how GDB
handles fork/vfork as these have some of the same issues. The main
difference between fork/vfork and clone is that the clone events are
not reported back to core GDB. Instead, the clone event is handled
automatically in the target code and the child thread is immediately
set running.
Note that we already have support for requesting thread creation
events from the target (TARGET_WAITKIND_THREAD_CREATED). However,
those are reported for the new/child thread. That would be sufficient
to address in-line stepping (issue #1), but not displaced stepping
(issue #3). To handle displaced stepping, we need an event that is
reported to the
_parent_ of the clone, as the information about the displaced step is
associated with the clone parent. TARGET_WAITKIND_THREAD_CREATED
includes no indication of which thread is the parent that spawned the
new child. In fact, for some targets, e.g. Windows, it would be
impossible to know which thread that was, as thread creation there
doesn't work by "cloning".
The solution implemented here is to model clone on fork/vfork, and
introduce a new TARGET_WAITKIND_THREAD_CLONED event. This event is
similar to TARGET_WAITKIND_FORKED and TARGET_WAITKIND_VFORKED, except
that we end up with a new thread in the same process, instead of a new
thread of a new process. Like FORKED and VFORKED, THREAD_CLONED
waitstatuses have a child_ptid property, and the child is held stopped
until GDB explicitly resumes it. This addresses the in-line stepping
case (issues #1 and #2).
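As a rough sketch of the target-facing side (mirroring the
linux-nat.c hunk further below; PARENT_PTID and CHILD_LWP are
placeholders for the parent's ptid and the new LWP id):

  /* Sketch: on a clone event, keep the child stopped and report a
     THREAD_CLONED event for the parent, recording the child's ptid
     in the waitstatus.  */
  target_waitstatus ws;
  ws.set_thread_cloned (ptid_t (parent_ptid.pid (), child_lwp));
  /* The core later consumes the child's ptid via ws.child_ptid ()
     and only then makes the child visible and runnable.  */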
The infrun code that handles the displaced stepping fixup for the
child after a fork/vfork event is thus reused for THREAD_CLONED, with
some minimal conditions added, addressing the displaced stepping case
(issue #3).
The native Linux backend is adjusted to unconditionally report
TARGET_WAITKIND_THREAD_CLONED events to the core.
Following the follow_fork model in core GDB, we introduce a
target_follow_clone target method, which is responsible for making the
new clone child visible to the rest of GDB.
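On the core side, a THREAD_CLONED event does not go through
follow_fork; instead, infrun asks the target to make the child
visible. A condensed sketch of that path (the actual code is in the
infrun.c hunk below):

  /* Sketch: ECS is the execution control state for the clone
     parent's event.  */
  if (ecs->ws.kind () == TARGET_WAITKIND_THREAD_CLONED)
    {
      inferior *inf = ecs->event_thread->inf;

      /* Let the target add the new thread to GDB's thread list.  */
      inf->top_target ()->follow_clone (ecs->ws.child_ptid ());

      /* Nothing left to follow for this event.  */
      ecs->event_thread->pending_follow.set_spurious ();
    }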
Subsequent patches will add clone events support to the remote
protocol and gdbserver.
A testcase will be added by a later patch.
displaced_step_in_progress_thread is removed in this patch, but it is
added back again in a subsequent patch. We need to do this because
the function is static, and with no callers the compiler would warn
(an error with -Werror), breaking the build.
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=19675
Bug: https://sourceware.org/bugzilla/show_bug.cgi?id=27830
Change-Id: I474e9a7015dd3d33469e322a5764ae83f8a32787
---
gdb/infrun.c | 171 ++++++++++++-------------
gdb/linux-nat.c | 269 ++++++++++++++++++++++------------------
gdb/linux-nat.h | 2 +
gdb/target-delegates.c | 24 ++++
gdb/target.c | 8 ++
gdb/target.h | 2 +
gdb/target/waitstatus.c | 1 +
gdb/target/waitstatus.h | 20 ++-
8 files changed, 293 insertions(+), 204 deletions(-)
diff --git a/gdb/infrun.c b/gdb/infrun.c
index db2828628f7..9f09373c1db 100644
--- a/gdb/infrun.c
+++ b/gdb/infrun.c
@@ -1512,16 +1512,6 @@ step_over_info_valid_p (void)
displaced step operation on it. See displaced_step_prepare and
displaced_step_finish for details. */
-/* Return true if THREAD is doing a displaced step. */
-
-static bool
-displaced_step_in_progress_thread (thread_info *thread)
-{
- gdb_assert (thread != NULL);
-
- return thread->displaced_step_state.in_progress ();
-}
-
/* Return true if INF has a thread doing a displaced step. */
static bool
@@ -1829,6 +1819,31 @@ static displaced_step_finish_status
displaced_step_finish (thread_info *event_thread,
const target_waitstatus &event_status)
{
+ /* Check whether the parent is displaced stepping. */
+ struct regcache *regcache = get_thread_regcache (event_thread);
+ struct gdbarch *gdbarch = regcache->arch ();
+ inferior *parent_inf = event_thread->inf;
+
+ /* If this was a fork/vfork/clone, this event indicates that the
+ displaced stepping of the syscall instruction has been done, so
+ we perform cleanup for parent here. Also note that this
+ operation also cleans up the child for vfork, because their pages
+ are shared. */
+
+ /* If this is a fork (child gets its own address space copy) and
+ some displaced step buffers were in use at the time of the fork,
+ restore the displaced step buffer bytes in the child process.
+
+ Architectures which support displaced stepping and fork events
+ must supply an implementation of
+ gdbarch_displaced_step_restore_all_in_ptid. This is not enforced
+ during gdbarch validation to support architectures which support
+ displaced stepping but not forks. */
+ if (event_status.kind () == TARGET_WAITKIND_FORKED
+ && gdbarch_supports_displaced_stepping (gdbarch))
+ gdbarch_displaced_step_restore_all_in_ptid
+ (gdbarch, parent_inf, event_status.child_ptid ());
+
displaced_step_thread_state *displaced = &event_thread->displaced_step_state;
/* Was this thread performing a displaced step? */
@@ -1848,8 +1863,39 @@ displaced_step_finish (thread_info *event_thread,
/* Do the fixup, and release the resources acquired to do the displaced
step. */
- return gdbarch_displaced_step_finish (displaced->get_original_gdbarch (),
- event_thread, event_status);
+ displaced_step_finish_status status
+ = gdbarch_displaced_step_finish (displaced->get_original_gdbarch (),
+ event_thread, event_status);
+
+ if (event_status.kind () == TARGET_WAITKIND_FORKED
+ || event_status.kind () == TARGET_WAITKIND_VFORKED
+ || event_status.kind () == TARGET_WAITKIND_THREAD_CLONED)
+ {
+ /* Since the vfork/fork/clone syscall instruction was executed
+ in the scratchpad, the child's PC is also within the
+ scratchpad. Set the child's PC to the parent's PC value,
+ which has already been fixed up. Note: we use the parent's
+ aspace here, although we're touching the child, because the
+ child hasn't been added to the inferior list yet at this
+ point. */
+
+ struct regcache *child_regcache
+ = get_thread_arch_aspace_regcache (parent_inf->process_target (),
+ event_status.child_ptid (),
+ gdbarch,
+ parent_inf->aspace);
+ /* Read PC value of parent. */
+ CORE_ADDR parent_pc = regcache_read_pc (regcache);
+
+ displaced_debug_printf ("write child pc from %s to %s",
+ paddress (gdbarch,
+ regcache_read_pc (child_regcache)),
+ paddress (gdbarch, parent_pc));
+
+ regcache_write_pc (child_regcache, parent_pc);
+ }
+
+ return status;
}
/* Data to be passed around while handling an event. This data is
@@ -5013,8 +5059,6 @@ handle_one (const wait_one_event &event)
}
else
{
- struct regcache *regcache;
-
infrun_debug_printf
("target_wait %s, saving status for %s",
event.ws.to_string ().c_str (),
@@ -5032,7 +5076,7 @@ handle_one (const wait_one_event &event)
global_thread_step_over_chain_enqueue (t);
}
- regcache = get_thread_regcache (t);
+ struct regcache *regcache = get_thread_regcache (t);
t->set_stop_pc (regcache_read_pc (regcache));
infrun_debug_printf ("saved stop_pc=%s for %s "
@@ -5593,67 +5637,13 @@ handle_inferior_event (struct execution_control_state *ecs)
case TARGET_WAITKIND_FORKED:
case TARGET_WAITKIND_VFORKED:
- /* Check whether the inferior is displaced stepping. */
- {
- struct regcache *regcache = get_thread_regcache (ecs->event_thread);
- struct gdbarch *gdbarch = regcache->arch ();
- inferior *parent_inf = find_inferior_ptid (ecs->target, ecs->ptid);
-
- /* If this is a fork (child gets its own address space copy)
- and some displaced step buffers were in use at the time of
- the fork, restore the displaced step buffer bytes in the
- child process.
-
- Architectures which support displaced stepping and fork
- events must supply an implementation of
- gdbarch_displaced_step_restore_all_in_ptid. This is not
- enforced during gdbarch validation to support architectures
- which support displaced stepping but not forks. */
- if (ecs->ws.kind () == TARGET_WAITKIND_FORKED
- && gdbarch_supports_displaced_stepping (gdbarch))
- gdbarch_displaced_step_restore_all_in_ptid
- (gdbarch, parent_inf, ecs->ws.child_ptid ());
-
- /* If displaced stepping is supported, and thread ecs->ptid is
- displaced stepping. */
- if (displaced_step_in_progress_thread (ecs->event_thread))
- {
- struct regcache *child_regcache;
- CORE_ADDR parent_pc;
-
- /* GDB has got TARGET_WAITKIND_FORKED or TARGET_WAITKIND_VFORKED,
- indicating that the displaced stepping of syscall instruction
- has been done. Perform cleanup for parent process here. Note
- that this operation also cleans up the child process for vfork,
- because their pages are shared. */
- displaced_step_finish (ecs->event_thread, ecs->ws);
- /* Start a new step-over in another thread if there's one
- that needs it. */
- start_step_over ();
-
- /* Since the vfork/fork syscall instruction was executed in the scratchpad,
- the child's PC is also within the scratchpad. Set the child's PC
- to the parent's PC value, which has already been fixed up.
- FIXME: we use the parent's aspace here, although we're touching
- the child, because the child hasn't been added to the inferior
- list yet at this point. */
-
- child_regcache
- = get_thread_arch_aspace_regcache (parent_inf->process_target (),
- ecs->ws.child_ptid (),
- gdbarch,
- parent_inf->aspace);
- /* Read PC value of parent process. */
- parent_pc = regcache_read_pc (regcache);
-
- displaced_debug_printf ("write child pc from %s to %s",
- paddress (gdbarch,
- regcache_read_pc (child_regcache)),
- paddress (gdbarch, parent_pc));
-
- regcache_write_pc (child_regcache, parent_pc);
- }
- }
+ case TARGET_WAITKIND_THREAD_CLONED:
+
+ displaced_step_finish (ecs->event_thread, ecs->ws);
+
+ /* Start a new step-over in another thread if there's one that
+ needs it. */
+ start_step_over ();
context_switch (ecs);
@@ -5669,7 +5659,7 @@ handle_inferior_event (struct execution_control_state *ecs)
need to unpatch at follow/detach time instead to be certain
that new breakpoints added between catchpoint hit time and
vfork follow are detached. */
- if (ecs->ws.kind () != TARGET_WAITKIND_VFORKED)
+ if (ecs->ws.kind () == TARGET_WAITKIND_FORKED)
{
/* This won't actually modify the breakpoint list, but will
physically remove the breakpoints from the child. */
@@ -5701,14 +5691,24 @@ handle_inferior_event (struct execution_control_state *ecs)
if (!bpstat_causes_stop (ecs->event_thread->control.stop_bpstat))
{
bool follow_child
- = (follow_fork_mode_string == follow_fork_mode_child);
+ = (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED
+ && follow_fork_mode_string == follow_fork_mode_child);
ecs->event_thread->set_stop_signal (GDB_SIGNAL_0);
process_stratum_target *targ
= ecs->event_thread->inf->process_target ();
- bool should_resume = follow_fork ();
+ bool should_resume;
+ if (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED)
+ should_resume = follow_fork ();
+ else
+ {
+ should_resume = true;
+ inferior *inf = ecs->event_thread->inf;
+ inf->top_target ()->follow_clone (ecs->ws.child_ptid ());
+ ecs->event_thread->pending_follow.set_spurious ();
+ }
/* Note that one of these may be an invalid pointer,
depending on detach_fork. */
@@ -5719,16 +5719,21 @@ handle_inferior_event (struct execution_control_state *ecs)
child is marked stopped. */
/* If not resuming the parent, mark it stopped. */
- if (follow_child && !detach_fork && !non_stop && !sched_multi)
+ if (ecs->ws.kind () != TARGET_WAITKIND_THREAD_CLONED
+ && follow_child && !detach_fork && !non_stop && !sched_multi)
parent->set_running (false);
/* If resuming the child, mark it running. */
- if (follow_child || (!detach_fork && (non_stop || sched_multi)))
+ if (ecs->ws.kind () == TARGET_WAITKIND_THREAD_CLONED
+ || (follow_child || (!detach_fork && (non_stop || sched_multi))))
child->set_running (true);
/* In non-stop mode, also resume the other branch. */
- if (!detach_fork && (non_stop
- || (sched_multi && target_is_non_stop_p ())))
+ if ((ecs->ws.kind () == TARGET_WAITKIND_THREAD_CLONED
+ && target_is_non_stop_p ())
+ || (!detach_fork && (non_stop
+ || (sched_multi
+ && target_is_non_stop_p ()))))
{
if (follow_child)
switch_to_thread (parent);
diff --git a/gdb/linux-nat.c b/gdb/linux-nat.c
index e27cc890ff5..23d42b5fd55 100644
--- a/gdb/linux-nat.c
+++ b/gdb/linux-nat.c
@@ -1284,64 +1284,98 @@ get_detach_signal (struct lwp_info *lp)
return 0;
}
-/* Detach from LP. If SIGNO_P is non-NULL, then it points to the
- signal number that should be passed to the LWP when detaching.
- Otherwise pass any pending signal the LWP may have, if any. */
+/* Return true if WS is a fork, vfork or clone event. */
-static void
-detach_one_lwp (struct lwp_info *lp, int *signo_p)
+static bool
+is_fork_clone (const target_waitstatus &ws)
{
- int lwpid = lp->ptid.lwp ();
- int signo;
-
- gdb_assert (lp->status == 0 || WIFSTOPPED (lp->status));
+ return (ws.kind () == TARGET_WAITKIND_FORKED
+ || ws.kind () == TARGET_WAITKIND_VFORKED
+ || ws.kind () == TARGET_WAITKIND_THREAD_CLONED);
+}
- /* If the lwp/thread we are about to detach has a pending fork event,
- there is a process GDB is attached to that the core of GDB doesn't know
- about. Detach from it. */
+/* If LP has a pending fork/vfork/clone status, store it in WS and
+ return true. Otherwise, return false. */
+static bool
+get_pending_child_status (lwp_info *lp, target_waitstatus *ws)
+{
/* Check in lwp_info::status. */
if (WIFSTOPPED (lp->status) && linux_is_extended_waitstatus (lp->status))
{
int event = linux_ptrace_get_extended_event (lp->status);
- if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
+ if (event == PTRACE_EVENT_FORK
+ || event == PTRACE_EVENT_VFORK
+ || event == PTRACE_EVENT_CLONE)
{
unsigned long child_pid;
int ret = ptrace (PTRACE_GETEVENTMSG, lp->ptid.lwp (), 0, &child_pid);
if (ret == 0)
- detach_one_pid (child_pid, 0);
+ {
+ if (event == PTRACE_EVENT_FORK)
+ ws->set_forked (ptid_t (child_pid, child_pid));
+ else if (event == PTRACE_EVENT_VFORK)
+ ws->set_vforked (ptid_t (child_pid, child_pid));
+ else if (event == PTRACE_EVENT_CLONE)
+ ws->set_thread_cloned (ptid_t (lp->ptid.pid (), child_pid));
+ else
+ gdb_assert_not_reached ("unhandled");
+
+ return true;
+ }
else
- perror_warning_with_name (_("Failed to detach fork child"));
+ {
+ perror_warning_with_name (_("Failed to retrieve event msg"));
+ return false;
+ }
}
}
/* Check in lwp_info::waitstatus. */
- if (lp->waitstatus.kind () == TARGET_WAITKIND_VFORKED
- || lp->waitstatus.kind () == TARGET_WAITKIND_FORKED)
- detach_one_pid (lp->waitstatus.child_ptid ().pid (), 0);
+ if (is_fork_clone (lp->waitstatus))
+ {
+ *ws = lp->waitstatus;
+ return true;
+ }
+ thread_info *tp = find_thread_ptid (linux_target, lp->ptid);
/* Check in thread_info::pending_waitstatus. */
- thread_info *tp = find_thread_ptid (linux_target, lp->ptid);
- if (tp->has_pending_waitstatus ())
+ if (tp->has_pending_waitstatus ()
+ && is_fork_clone (tp->pending_waitstatus ()))
{
- const target_waitstatus &ws = tp->pending_waitstatus ();
-
- if (ws.kind () == TARGET_WAITKIND_VFORKED
- || ws.kind () == TARGET_WAITKIND_FORKED)
- detach_one_pid (ws.child_ptid ().pid (), 0);
+ *ws = tp->pending_waitstatus ();
+ return true;
}
/* Check in thread_info::pending_follow. */
- if (tp->pending_follow.kind () == TARGET_WAITKIND_VFORKED
- || tp->pending_follow.kind () == TARGET_WAITKIND_FORKED)
- detach_one_pid (tp->pending_follow.child_ptid ().pid (), 0);
+ if (is_fork_clone (tp->pending_follow))
+ {
+ *ws = tp->pending_follow;
+ return true;
+ }
- if (lp->status != 0)
- linux_nat_debug_printf ("Pending %s for %s on detach.",
- strsignal (WSTOPSIG (lp->status)),
- lp->ptid.to_string ().c_str ());
+ return false;
+}
+
+/* Detach from LP. If SIGNO_P is non-NULL, then it points to the
+ signal number that should be passed to the LWP when detaching.
+ Otherwise pass any pending signal the LWP may have, if any. */
+
+static void
+detach_one_lwp (struct lwp_info *lp, int *signo_p)
+{
+ int lwpid = lp->ptid.lwp ();
+ int signo;
+
+ /* If the lwp/thread we are about to detach has a pending fork/clone
+ event, there is a process/thread GDB is attached to that the core
+ of GDB doesn't know about. Detach from it. */
+
+ target_waitstatus ws;
+ if (get_pending_child_status (lp, &ws))
+ detach_one_pid (ws.child_ptid ().lwp (), 0);
/* If there is a pending SIGSTOP, get rid of it. */
if (lp->signalled)
@@ -1819,6 +1853,58 @@ linux_handle_syscall_trap (struct lwp_info *lp, int stopping)
return 1;
}
+void
+linux_nat_target::follow_clone (ptid_t child_ptid)
+{
+ linux_nat_debug_printf
+ ("Got clone event from LWP %ld, new child is LWP %ld",
+ inferior_ptid.lwp (), child_ptid.lwp ());
+
+ lwp_info *new_lp = add_lwp (child_ptid);
+ new_lp->stopped = 1;
+
+ /* If the thread_db layer is active, let it record the user
+ level thread id and status, and add the thread to GDB's
+ list. */
+ if (!thread_db_notice_clone (inferior_ptid, new_lp->ptid))
+ {
+ /* The process is not using thread_db. Add the LWP to
+ GDB's list. */
+ target_post_attach (new_lp->ptid.lwp ());
+ add_thread (linux_target, new_lp->ptid);
+ }
+
+ /* We just created NEW_LP so it cannot yet contain STATUS. */
+ gdb_assert (new_lp->status == 0);
+
+ if (!pull_pid_from_list (&stopped_pids, child_ptid.lwp (), &new_lp->status))
+ internal_error (__FILE__, __LINE__, _("no saved status for clone lwp"));
+
+ if (WSTOPSIG (new_lp->status) != SIGSTOP)
+ {
+ /* This can happen if someone starts sending signals to
+ the new thread before it gets a chance to run, which
+ have a lower number than SIGSTOP (e.g. SIGUSR1).
+ This is an unlikely case, and harder to handle for
+ fork / vfork than for clone, so we do not try - but
+ we handle it for clone events here. */
+
+ new_lp->signalled = 1;
+
+ /* Save the wait status to report later. */
+ linux_nat_debug_printf
+ ("waitpid of new LWP %ld, saving status %s",
+ (long) new_lp->ptid.lwp (), status_to_str (new_lp->status).c_str ());
+ }
+ else
+ {
+ new_lp->status = 0;
+
+ if (report_thread_events)
+ new_lp->waitstatus.set_thread_created ();
+ }
+}
+
/* Handle a GNU/Linux extended wait response. If we see a clone
event, we need to add the new LWP to our list (and not report the
trap to higher layers). This function returns non-zero if the
@@ -1861,11 +1947,9 @@ linux_handle_extended_wait (struct lwp_info *lp, int status)
_("wait returned unexpected status 0x%x"), status);
}
- ptid_t child_ptid (new_pid, new_pid);
-
if (event == PTRACE_EVENT_FORK || event == PTRACE_EVENT_VFORK)
{
- open_proc_mem_file (child_ptid);
+ open_proc_mem_file (ptid_t (new_pid, new_pid));
/* The arch-specific native code may need to know about new
forks even if those end up never mapped to an
@@ -1902,67 +1986,15 @@ linux_handle_extended_wait (struct lwp_info *lp, int status)
}
if (event == PTRACE_EVENT_FORK)
- ourstatus->set_forked (child_ptid);
+ ourstatus->set_forked (ptid_t (new_pid, new_pid));
else if (event == PTRACE_EVENT_VFORK)
- ourstatus->set_vforked (child_ptid);
+ ourstatus->set_vforked (ptid_t (new_pid, new_pid));
else if (event == PTRACE_EVENT_CLONE)
{
- struct lwp_info *new_lp;
-
- ourstatus->set_ignore ();
-
- linux_nat_debug_printf
- ("Got clone event from LWP %d, new child is LWP %ld", pid, new_pid);
-
- new_lp = add_lwp (ptid_t (lp->ptid.pid (), new_pid));
- new_lp->stopped = 1;
- new_lp->resumed = 1;
+ /* Save the status again, we'll use it in follow_clone. */
+ add_to_pid_list (&stopped_pids, new_pid, status);
- /* If the thread_db layer is active, let it record the user
- level thread id and status, and add the thread to GDB's
- list. */
- if (!thread_db_notice_clone (lp->ptid, new_lp->ptid))
- {
- /* The process is not using thread_db. Add the LWP to
- GDB's list. */
- target_post_attach (new_lp->ptid.lwp ());
- add_thread (linux_target, new_lp->ptid);
- }
-
- /* Even if we're stopping the thread for some reason
- internal to this module, from the perspective of infrun
- and the user/frontend, this new thread is running until
- it next reports a stop. */
- set_running (linux_target, new_lp->ptid, true);
- set_executing (linux_target, new_lp->ptid, true);
-
- if (WSTOPSIG (status) != SIGSTOP)
- {
- /* This can happen if someone starts sending signals to
- the new thread before it gets a chance to run, which
- have a lower number than SIGSTOP (e.g. SIGUSR1).
- This is an unlikely case, and harder to handle for
- fork / vfork than for clone, so we do not try - but
- we handle it for clone events here. */
-
- new_lp->signalled = 1;
-
- /* We created NEW_LP so it cannot yet contain STATUS. */
- gdb_assert (new_lp->status == 0);
-
- /* Save the wait status to report later. */
- linux_nat_debug_printf
- ("waitpid of new LWP %ld, saving status %s",
- (long) new_lp->ptid.lwp (), status_to_str (status).c_str ());
- new_lp->status = status;
- }
- else if (report_thread_events)
- {
- new_lp->waitstatus.set_thread_created ();
- new_lp->status = status;
- }
-
- return 1;
+ ourstatus->set_thread_cloned (ptid_t (lp->ptid.pid (), new_pid));
}
return 0;
@@ -3538,59 +3570,56 @@ kill_wait_callback (struct lwp_info *lp)
return 0;
}
-/* Kill the fork children of any threads of inferior INF that are
- stopped at a fork event. */
+/* Kill the fork/clone child of LP if it has an unfollowed child. */
-static void
-kill_unfollowed_fork_children (struct inferior *inf)
+static int
+kill_unfollowed_child_callback (lwp_info *lp)
{
- for (thread_info *thread : inf->non_exited_threads ())
+ target_waitstatus ws;
+ if (get_pending_child_status (lp, &ws))
{
- struct target_waitstatus *ws = &thread->pending_follow;
-
- if (ws->kind () == TARGET_WAITKIND_FORKED
- || ws->kind () == TARGET_WAITKIND_VFORKED)
- {
- ptid_t child_ptid = ws->child_ptid ();
- int child_pid = child_ptid.pid ();
- int child_lwp = child_ptid.lwp ();
+ ptid_t child_ptid = ws.child_ptid ();
+ int child_pid = child_ptid.pid ();
+ int child_lwp = child_ptid.lwp ();
- kill_one_lwp (child_lwp);
- kill_wait_one_lwp (child_lwp);
+ kill_one_lwp (child_lwp);
+ kill_wait_one_lwp (child_lwp);
- /* Let the arch-specific native code know this process is
- gone. */
- linux_target->low_forget_process (child_pid);
- }
+ /* Let the arch-specific native code know this process is
+ gone. */
+ if (ws.kind () != TARGET_WAITKIND_THREAD_CLONED)
+ linux_target->low_forget_process (child_pid);
}
+
+ return 0;
}
void
linux_nat_target::kill ()
{
- /* If we're stopped while forking and we haven't followed yet,
- kill the other task. We need to do this first because the
+ ptid_t pid_ptid (inferior_ptid.pid ());
+
+ /* If we're stopped while forking/cloning and we haven't followed
+ yet, kill the child task. We need to do this first because the
parent will be sleeping if this is a vfork. */
- kill_unfollowed_fork_children (current_inferior ());
+ iterate_over_lwps (pid_ptid, kill_unfollowed_child_callback);
if (forks_exist_p ())
linux_fork_killall ();
else
{
- ptid_t ptid = ptid_t (inferior_ptid.pid ());
-
/* Stop all threads before killing them, since ptrace requires
that the thread is stopped to successfully PTRACE_KILL. */
- iterate_over_lwps (ptid, stop_callback);
+ iterate_over_lwps (pid_ptid, stop_callback);
/* ... and wait until all of them have reported back that
they're no longer running. */
- iterate_over_lwps (ptid, stop_wait_callback);
+ iterate_over_lwps (pid_ptid, stop_wait_callback);
/* Kill all LWP's ... */
- iterate_over_lwps (ptid, kill_callback);
+ iterate_over_lwps (pid_ptid, kill_callback);
/* ... and wait until we've flushed all events. */
- iterate_over_lwps (ptid, kill_wait_callback);
+ iterate_over_lwps (pid_ptid, kill_wait_callback);
}
target_mourn_inferior (inferior_ptid);
diff --git a/gdb/linux-nat.h b/gdb/linux-nat.h
index 11043c4b9f6..683173dbd38 100644
--- a/gdb/linux-nat.h
+++ b/gdb/linux-nat.h
@@ -129,6 +129,8 @@ class linux_nat_target : public inf_ptrace_target
void follow_fork (inferior *, ptid_t, target_waitkind, bool, bool) override;
+ void follow_clone (ptid_t) override;
+
std::vector<static_tracepoint_marker>
static_tracepoint_markers_by_strid (const char *id) override;
diff --git a/gdb/target-delegates.c b/gdb/target-delegates.c
index 8a9986454dd..f58fbe44094 100644
--- a/gdb/target-delegates.c
+++ b/gdb/target-delegates.c
@@ -76,6 +76,7 @@ struct dummy_target : public target_ops
int insert_vfork_catchpoint (int arg0) override;
int remove_vfork_catchpoint (int arg0) override;
void follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bool arg3, bool arg4) override;
+ void follow_clone (ptid_t arg0) override;
int insert_exec_catchpoint (int arg0) override;
int remove_exec_catchpoint (int arg0) override;
void follow_exec (inferior *arg0, ptid_t arg1, const char *arg2) override;
@@ -250,6 +251,7 @@ struct debug_target : public target_ops
int insert_vfork_catchpoint (int arg0) override;
int remove_vfork_catchpoint (int arg0) override;
void follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bool arg3, bool arg4) override;
+ void follow_clone (ptid_t arg0) override;
int insert_exec_catchpoint (int arg0) override;
int remove_exec_catchpoint (int arg0) override;
void follow_exec (inferior *arg0, ptid_t arg1, const char *arg2) override;
@@ -1545,6 +1547,28 @@ debug_target::follow_fork (inferior *arg0, ptid_t arg1, target_waitkind arg2, bo
gdb_puts (")\n", gdb_stdlog);
}
+void
+target_ops::follow_clone (ptid_t arg0)
+{
+ this->beneath ()->follow_clone (arg0);
+}
+
+void
+dummy_target::follow_clone (ptid_t arg0)
+{
+ default_follow_clone (this, arg0);
+}
+
+void
+debug_target::follow_clone (ptid_t arg0)
+{
+ gdb_printf (gdb_stdlog, "-> %s->follow_clone (...)\n", this->beneath ()->shortname ());
+ this->beneath ()->follow_clone (arg0);
+ gdb_printf (gdb_stdlog, "<- %s->follow_clone (", this->beneath ()->shortname ());
+ target_debug_print_ptid_t (arg0);
+ gdb_puts (")\n", gdb_stdlog);
+}
+
int
target_ops::insert_exec_catchpoint (int arg0)
{
diff --git a/gdb/target.c b/gdb/target.c
index 18e53aa5d27..d1ba229189f 100644
--- a/gdb/target.c
+++ b/gdb/target.c
@@ -2717,6 +2717,14 @@ default_follow_fork (struct target_ops *self, inferior *child_inf,
_("could not find a target to follow fork"));
}
+static void
+default_follow_clone (struct target_ops *self, ptid_t child_ptid)
+{
+ /* Some target returned a clone event, but did not know how to follow it. */
+ internal_error (__FILE__, __LINE__,
+ _("could not find a target to follow clone"));
+}
+
/* See target.h. */
void
diff --git a/gdb/target.h b/gdb/target.h
index 18559feef89..1cab47147e3 100644
--- a/gdb/target.h
+++ b/gdb/target.h
@@ -636,6 +636,8 @@ struct target_ops
TARGET_DEFAULT_RETURN (1);
virtual void follow_fork (inferior *, ptid_t, target_waitkind, bool, bool)
TARGET_DEFAULT_FUNC (default_follow_fork);
+ virtual void follow_clone (ptid_t)
+ TARGET_DEFAULT_FUNC (default_follow_clone);
virtual int insert_exec_catchpoint (int)
TARGET_DEFAULT_RETURN (1);
virtual int remove_exec_catchpoint (int)
diff --git a/gdb/target/waitstatus.c b/gdb/target/waitstatus.c
index ef432bb629d..3e45e4f32fa 100644
--- a/gdb/target/waitstatus.c
+++ b/gdb/target/waitstatus.c
@@ -45,6 +45,7 @@ DIAGNOSTIC_ERROR_SWITCH
case TARGET_WAITKIND_FORKED:
case TARGET_WAITKIND_VFORKED:
+ case TARGET_WAITKIND_THREAD_CLONED:
return string_appendf (str, ", child_ptid = %s",
this->child_ptid ().to_string ().c_str ());
diff --git a/gdb/target/waitstatus.h b/gdb/target/waitstatus.h
index 63bbd737749..5dcdbc8fe09 100644
--- a/gdb/target/waitstatus.h
+++ b/gdb/target/waitstatus.h
@@ -95,6 +95,13 @@ enum target_waitkind
/* There are no resumed children left in the program. */
TARGET_WAITKIND_NO_RESUMED,
+ /* The thread was cloned. The event's ptid corresponds to the
+ cloned parent. The cloned child is held stopped at its entry
+ point, and its ptid is in the event's m_child_ptid. The target
+ must not add the cloned child to GDB's thread list until
+ target_ops::follow_clone() is called. */
+ TARGET_WAITKIND_THREAD_CLONED,
+
/* The thread was created. */
TARGET_WAITKIND_THREAD_CREATED,
@@ -125,6 +132,8 @@ DIAGNOSTIC_ERROR_SWITCH
return "FORKED";
case TARGET_WAITKIND_VFORKED:
return "VFORKED";
+ case TARGET_WAITKIND_THREAD_CLONED:
+ return "THREAD_CLONED";
case TARGET_WAITKIND_EXECD:
return "EXECD";
case TARGET_WAITKIND_VFORK_DONE:
@@ -325,6 +334,14 @@ struct target_waitstatus
return *this;
}
+ target_waitstatus &set_thread_cloned (ptid_t child_ptid)
+ {
+ this->reset ();
+ m_kind = TARGET_WAITKIND_THREAD_CLONED;
+ m_value.child_ptid = child_ptid;
+ return *this;
+ }
+
target_waitstatus &set_thread_created ()
{
this->reset ();
@@ -370,7 +387,8 @@ struct target_waitstatus
ptid_t child_ptid () const
{
gdb_assert (m_kind == TARGET_WAITKIND_FORKED
- || m_kind == TARGET_WAITKIND_VFORKED);
+ || m_kind == TARGET_WAITKIND_VFORKED
+ || m_kind == TARGET_WAITKIND_THREAD_CLONED);
return m_value.child_ptid;
}
--
2.36.0