
RFC: fix race in multiexec case


While testing my MI multiexec support patches, I got GDB to crash. Here is
what happened:

- inferior 1 is run
- MI switches to inferior 2, which has not been run yet; inferior_ptid gets
  set to null_ptid
- MI tries to run inferior 2
- GDB gets an event in inferior 1
- handle_inferior_event calls get_current_regcache()
- get_current_regcache() calls get_thread_regcache (inferior_ptid),
  and inferior_ptid is still null_ptid
- get_thread_regcache indirectly calls linux_nat_thread_address_space,
  which has code like this:

  if (GET_LWP (ptid) == 0)
    {
      ...
      lwp = find_lwp_pid (ptid);
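      /* With ptid == null_ptid, find_lwp_pid finds no matching LWP and
         returns NULL, so the dereference on the next line crashes.  */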
      pid = GET_PID (lwp->ptid);
    }

However, find_lwp_pid returns NULL for null_ptid, and this code segfaults.
I attach a minimal patch that appears to fix this by asking for the regcache
of the thread that reported the event (ecs->ptid) instead of relying on
inferior_ptid, but I feel uneasy about it. Maybe inferior_ptid should be
reset much earlier?
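
In the meantime, it might also be worth turning the crash into a visible
assertion at that spot, so that a stale inferior_ptid fails loudly rather
than segfaulting. Just a sketch of what I mean (not part of the attached
patch; it assumes lwp is declared locally there):

  if (GET_LWP (ptid) == 0)
    {
      struct lwp_info *lwp;

      lwp = find_lwp_pid (ptid);
      /* With null_ptid there is no matching LWP; fail loudly instead
         of dereferencing a NULL pointer.  */
      gdb_assert (lwp != NULL);
      pid = GET_PID (lwp->ptid);
    }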

Thanks,
Volodya


diff --git a/gdb/infrun.c b/gdb/infrun.c
index d8ca40d..300af62 100644
--- a/gdb/infrun.c
+++ b/gdb/infrun.c
@@ -3232,7 +3232,8 @@ targets should add new threads to the thread list themselves in non-stop mode.")
   if (ecs->event_thread->stop_signal == TARGET_SIGNAL_TRAP)
     {
       int thread_hop_needed = 0;
-      struct address_space *aspace = get_regcache_aspace (get_current_regcache ());
+      struct address_space *aspace =
+       get_regcache_aspace (get_thread_regcache (ecs->ptid));

       /* Check if a regular breakpoint has been hit before checking
          for a potential single step breakpoint. Otherwise, GDB will


- Volodya

