Bug 31832 - [gdb] FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 3: attach (timeout)
Status: NEW
Alias: None
Product: gdb
Classification: Unclassified
Component: gdb
Version: HEAD
Importance: P2 normal
Target Milestone: ---
Assignee: Not yet assigned to anyone
 
Reported: 2024-06-01 07:17 UTC by Tom de Vries
Modified: 2024-06-21 10:18 UTC
CC List: 3 users

Attachments
gdb.log (1.85 KB, text/x-log)
2024-06-01 07:17 UTC, Tom de Vries
gdb.log (23.95 KB, text/x-log)
2024-06-03 16:58 UTC, Tom de Vries
gdb.log (with statistics-gathering patch) (2.80 KB, text/x-log)
2024-06-12 09:53 UTC, Tom de Vries
Tom's patch with statistics, plus a few more. (1.97 KB, patch)
2024-06-21 01:21 UTC, Thiago Jung Bauermann

Description Tom de Vries 2024-06-01 07:17:16 UTC
Created attachment 15551 [details]
gdb.log

I built gdb from current trunk, commit 4635b42ac78, on ppc64le-linux (specifically cfarm120).

I ran into:
...
(gdb) file /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads^M
Reading symbols from /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads...^M
(gdb) builtin_spawn /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads^M
attach 359140^M
Attaching to program: /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads, process 359140^M
Cannot attach to lwp 829202: Operation not permitted (1)^M
(gdb) XFAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: attach (EPERM)
attach 359140^M
Attaching to program: /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads, process 359140^M
Cannot attach to lwp 1059999: Operation not permitted (1)^M
(gdb) XFAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 2: attach (EPERM)
attach 359140^M
Attaching to program: /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads, process 359140^M
FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 3: attach (timeout)
...
Comment 1 Tom de Vries 2024-06-01 07:23:16 UTC
At first glance, this is a resurfacing of PR31312.
Comment 2 Tom de Vries 2024-06-01 08:10:51 UTC
I reran the test-case, and got:
...
(gdb) builtin_spawn /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads^M
attach 416753^M
Attaching to program: /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads, process 416753^M
FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: attach (timeout)
info threads^M
Cannot attach to lwp 1620134: Operation not permitted (1)^M
(gdb) info threads^M
...

So, this is reproducible.  In this case the timeout happens at the initial attach, which may be relevant information.

For context, this is on a machine with 192 cores.  When the test-case starts, the load is somewhere below 10; in the course of the run it gradually climbs to 130.  It doesn't look like the timeout is caused by system load, though, given that the load is low to start with.
Comment 3 Bernd Edlinger 2024-06-02 12:36:08 UTC
I'm not sure if this is the same issue that I saw recently on my PC.
Can you please try this patch, and see if it solves the issue?

https://sourceware.org/pipermail/gdb-patches/2024-April/207898.html
Comment 4 Tom de Vries 2024-06-03 16:58:44 UTC
Created attachment 15558 [details]
gdb.log

(In reply to Bernd Edlinger from comment #3)
> I'm not sure if this is the same issue that I saw recently on my PC.
> Can you please try this patch, and see if it solves the issue?
> 
> https://sourceware.org/pipermail/gdb-patches/2024-April/207898.html

This is the result with that patch applied; it doesn't look like an improvement.
Comment 5 Thiago Jung Bauermann 2024-06-07 00:13:20 UTC
In bug #31312, the testcase timed out because, when there were one or more zombie threads in the inferior, GDB would enter an infinite loop while processing new threads: at each iteration it thought the zombie threads were new. Because of the infinite loop, the GDB prompt would never show up in gdb.log.

In this case, the testcase times out during attach but eventually the GDB prompt appears and GDB seems to work. Apparently the attach process just takes a long time.

I didn't see this behaviour when I was investigating bug #31312. I don't understand what could cause attach to take so long but still complete.

The only thing I can think of is that perhaps the find() and insert() operations on the visited_lwps std::unordered_set added by commit c930a077225e ("gdb/nat/linux: Fix attaching to process when it has zombie threads") are taking too long.
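
For reference, a minimal, self-contained sketch of that data structure (the names and the exact integer type are illustrative, not the gdb code; the real implementation keys on a std::pair of LWP id and starttime, as the patch quoted in the next comment shows):
...
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_set>
#include <utility>

/* A thread is identified by its LWP id plus its starttime from /proc,
   so a recycled LWP id shows up as a distinct key.  */
using lwp_key = std::pair<unsigned long, std::uint64_t>;

struct lwp_key_hash
{
  std::size_t operator() (const lwp_key &k) const
  {
    /* Simple hash combination; good enough for a sketch.  */
    return std::hash<unsigned long> () (k.first)
           ^ (std::hash<std::uint64_t> () (k.second) << 1);
  }
};

std::unordered_set<lwp_key, lwp_key_hash> visited_lwps;

/* Return true if LWP/STARTTIME was already seen; record it otherwise.
   Both find() and insert() are O(1) on average, so even hundreds of
   thousands of calls should be cheap compared to the attach itself.  */
static bool
already_visited (unsigned long lwp, std::uint64_t starttime)
{
  lwp_key key (lwp, starttime);
  if (visited_lwps.find (key) != visited_lwps.cend ())
    return true;
  visited_lwps.insert (key);
  return false;
}
...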
Comment 6 Tom de Vries 2024-06-12 09:53:13 UTC
Created attachment 15582 [details]
gdb.log (with statistics-gathering patch)

I applied the following statistics-gathering patch:
...
diff --git a/gdb/nat/linux-procfs.c b/gdb/nat/linux-procfs.c
index 5efc86cca1f..f6847257b85 100644
--- a/gdb/nat/linux-procfs.c
+++ b/gdb/nat/linux-procfs.c
@@ -22,6 +22,7 @@
 #include <sys/stat.h>
 #include <unordered_set>
 #include <utility>
+#include "gdbsupport/scope-exit.h"
 
 /* Return the TGID of LWPID from /proc/pid/status.  Returns -1 if not
    found.  */
@@ -377,17 +378,42 @@ linux_proc_attach_tgid_threads (pid_t pid,
      threads, new threads may be spawned.  Cycle through the list of
      threads until we have done two iterations without finding new
      threads.  */
+
+  unsigned long long total_iterations = 0;
+  unsigned long long dir_entries = 0;
+  unsigned long long no_lwp = 0;
+  unsigned long long lookup = 0;
+  unsigned long long skipped = 0;
+  unsigned long long insert = 0;
+  unsigned long long attach = 0;
+  unsigned long long start_over = 0;
+  SCOPE_EXIT
+    {
+      printf ("total_iterations: %llu\n", total_iterations);
+      printf ("dir_entries: %llu\n", dir_entries);
+      printf ("no_lwp: %llu\n", no_lwp);
+      printf ("lookup: %llu\n", lookup);
+      printf ("skipped: %llu\n", skipped);
+      printf ("insert: %llu\n", insert);
+      printf ("attach: %llu\n", attach);
+      printf ("start_over: %llu\n", start_over);
+    };
+
   for (iterations = 0; iterations < 2; iterations++)
     {
+      total_iterations++;
       struct dirent *dp;
 
       new_threads_found = 0;
       while ((dp = readdir (dir.get ())) != NULL)
        {
+         dir_entries++;
          unsigned long lwp;
 
          /* Fetch one lwp.  */
          lwp = strtoul (dp->d_name, NULL, 10);
+         if (lwp == 0)
+           no_lwp++;
          if (lwp != 0)
            {
              ptid_t ptid = ptid_t (pid, lwp);
@@ -399,12 +425,18 @@ linux_proc_attach_tgid_threads (pid_t pid,
                  std::pair<unsigned long, ULONGEST> key (lwp, *starttime);
 
                  /* If we already visited this LWP, skip it this time.  */
+                 lookup++;
                  if (visited_lwps.find (key) != visited_lwps.cend ())
-                   continue;
+                   {
+                     skipped++;
+                     continue;
+                   }
 
+                 insert++;
                  visited_lwps.insert (key);
                }
 
+             attach++;
              if (attach_lwp (ptid))
                new_threads_found = 1;
            }
@@ -414,6 +446,7 @@ linux_proc_attach_tgid_threads (pid_t pid,
        {
          /* Start over.  */
          iterations = -1;
+         start_over++;
        }
 
       rewinddir (dir.get ());
...
and ran the test-case.

The first iteration gives us:
...
(gdb) builtin_spawn /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads^M
attach 1301317^M
Attaching to program: /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-lived-threads/attach-many-short-lived-threads, process 1301317^M
FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: attach (timeout)
total_iterations: 2092^M
dir_entries: 594518^M
no_lwp: 4412^M
lookup: 119037^M
skipped: 118355^M
insert: 682^M
attach: 471751^M
start_over: 2091^M
Cannot attach to lwp 2340832: Operation not permitted (1)^M
...

I'm not sure what this means, but I do notice the big difference between dir_entries and lookup.  So only 20% of the time we find the starttime and can use the cache.
Comment 7 Thiago Jung Bauermann 2024-06-21 01:21:34 UTC
Created attachment 15586 [details]
Tom's patch with statistics, plus a few more.

(In reply to Tom de Vries from comment #6)
> Created attachment 15582 [details]
> gdb.log (with statistics-gathering patch)
>
> I applied the following statistics-gathering patch:
> ...
> ...

Great idea. I applied your patch, with just some additional statistics:

- no_starttime, when starttime.has_value () is false (can be calculated
  from others, but I wanted to see it easily)
- no_new_thread_found, when attach_lwp (ptid) returns false
- one counter for every reason that starttime can't be obtained

And ran it on an aarch64 machine with 160 cores.

> and ran the test-case.
>
> The first iteration gives us:
> ...
> (gdb) builtin_spawn
> /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-
> lived-threads/attach-many-short-lived-threads^M
> attach 1301317^M
> Attaching to program:
> /home/vries/gdb/build/gdb/testsuite/outputs/gdb.threads/attach-many-short-
> lived-threads/attach-many-short-lived-threads, process 1301317^M
> FAIL: gdb.threads/attach-many-short-lived-threads.exp: iter 1: attach
> (timeout)
> total_iterations: 2092^M

Wow. On my test aarch64 system, the highest I've seen is 177. Here are the
numbers for that run:

total_iterations: 177
dir_entries: 32930
no_lwp: 365
lookup: 21822
skipped: 21259
insert: 563
attach: 11306
start_over: 176
no_starttime: 10743
no_new_thread_found: 1
stat: cant_open_file: 10735
stat: empty_file: 8
stat: no_parens: 0
stat: no_separator: 0
stat: no_field_beginning: 0
stat: invalid_starttime: 0
stat: unexpected_chars: 0

My machine has many cores but it's an older CPU model (Neoverse N1). These
numbers show that the POWER10 system has a much higher capacity to churn
out new threads than my system (no surprise there).  My understanding is
that GDB is overwhelmed by the constant stream of newly spawned threads
and takes a while to attach to all of them.

As Pedro mentioned elsewhere¹, Linux doesn't provide a way for GDB to stop
all of a process' threads, or cause new ones to spawn in a
"ptrace-stopped" state. Without such mechanism, the only way I can see of
addressing this problem is by making GDB parallelize the job of attaching
to all inferior threads using its worker threads — i.e., fight fire with
fire. :)

That wouldn't be a trivial change though. IIUC it would mean that
different inferior threads would have different tracers (the various GDB
worker threads), and GDB would need to take care to use the correct worker
thread to send ptrace commands to each inferior thread.
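
To make the idea concrete, here is a rough sketch (not a proposed patch; the function names and the round-robin partitioning are made up for illustration) of splitting the attach work over a few worker threads, each doing the PTRACE_ATTACH and the matching waitpid for its share.  The hard part described above is deliberately left out: because on Linux the tracer is the attaching thread, every later ptrace request for an LWP would also have to be routed to the worker that attached to it.
...
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

/* Attach to every LWP in LWPS from this worker thread and wait for the
   attach-stop of each one.  */
static void
attach_worker (std::vector<pid_t> lwps)
{
  for (pid_t lwp : lwps)
    {
      if (ptrace (PTRACE_ATTACH, lwp, nullptr, nullptr) != 0)
        {
          /* As in the logs above, this can race with the LWP exiting.  */
          perror ("PTRACE_ATTACH");
          continue;
        }
      int status;
      /* __WALL is needed to also wait for clone children (threads).  */
      waitpid (lwp, &status, __WALL);
    }
}

/* Round-robin ALL_LWPS over NWORKERS threads and attach in parallel.  */
static void
parallel_attach (const std::vector<pid_t> &all_lwps, unsigned nworkers)
{
  std::vector<std::vector<pid_t>> shares (nworkers);
  for (std::size_t i = 0; i < all_lwps.size (); i++)
    shares[i % nworkers].push_back (all_lwps[i]);

  std::vector<std::thread> workers;
  for (auto &share : shares)
    workers.emplace_back (attach_worker, share);
  for (std::thread &w : workers)
    w.join ();
}
...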

Another approach would be to see if there's a way to make
attach_proc_task_lwp_callback () faster, but from reading the code it
doesn't look like there's anything too slow there — except perhaps the
call to linux_proc_pid_is_gone (), which reads /proc/$LWP/status. Though
even that would be just mitigation since the fundamental limitation would
still be there.
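
For scale, such a check boils down to opening and scanning one small procfs file per candidate LWP, which is cheap in isolation but is done here hundreds of thousands of times.  A tiny illustrative sketch (not gdb's linux_proc_pid_is_gone, just an approximation of the idea):
...
#include <fstream>
#include <sstream>
#include <string>

/* Treat an LWP as gone if its status file is unreadable or reports a
   zombie/dead state.  Illustrative only.  */
static bool
lwp_looks_gone (int lwp)
{
  std::ostringstream path;
  path << "/proc/" << lwp << "/status";

  std::ifstream f (path.str ());
  if (!f)
    return true;   /* No status file at all.  */

  std::string line;
  while (std::getline (f, line))
    if (line.compare (0, 6, "State:") == 0)
      /* E.g. "State:\tZ (zombie)" or "State:\tX (dead)".  */
      return line.find ("Z (") != std::string::npos
             || line.find ("X (") != std::string::npos;

  return false;
}
...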

Alternatively, (considering that the testcase is contrived) can the
testcase increase the timeout proportionally to the number of CPUs on the
system?

> dir_entries: 594518^M
> no_lwp: 4412^M
> lookup: 119037^M
> skipped: 118355^M
> insert: 682^M
> attach: 471751^M
> start_over: 2091^M
> Cannot attach to lwp 2340832: Operation not permitted (1)^M
> ...
>
> I'm not sure what this means, but I do notice the big difference between
> dir_entries and lookup.  So only 20% of the time we find the starttime and
> can use the cache.

I thought that not being able to read starttime from /proc meant that the
thread was gone. But from the statistics I pasted above, in about 34% of
the time GDB didn't find the starttime and still was able to attach to all
but one of the new threads. My understanding is that there's a race
condition between GDB and the Linux kernel when reading the stat file for
a newly created thread.

This is harmless though: if starttime can't be obtained, GDB will try to
attach to the thread anyway.

On the bright side, this means that the problem isn't with the
std::unordered_set (as I was fearing could be the case). :)

The statistics on why GDB can't get starttime are also interesting: at
least on my system, it turns out that almost all of the time it's because
GDB can't open /proc/$PID/task/$LWP/stat. The only other reason (on the
order of 1%-2% of the cases) is that the stat file is empty. No other
early return was taken in my experiments.
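
For reference, a minimal sketch of the kind of parsing involved in getting starttime (assuming the standard /proc stat layout, where starttime is field 22; this is not the gdb implementation).  Both failure modes above map onto it naturally: the open fails once the thread is gone, and, per the race hypothesized earlier in this comment, a freshly created thread can yield an empty file:
...
#include <cstddef>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <string>

/* Return true and set *STARTTIME on success.  */
static bool
get_lwp_starttime (int pid, int lwp, unsigned long long *starttime)
{
  std::ostringstream path;
  path << "/proc/" << pid << "/task/" << lwp << "/stat";

  std::ifstream f (path.str ());
  if (!f)
    return false;               /* The "cant_open_file" case.  */

  std::string line;
  if (!std::getline (f, line) || line.empty ())
    return false;               /* The "empty_file" case.  */

  /* The comm field (field 2) is wrapped in parentheses and may itself
     contain spaces and parentheses, so skip to the last ')'.  */
  std::size_t paren = line.rfind (')');
  if (paren == std::string::npos)
    return false;               /* The "no_parens" case.  */

  /* After ')' come fields 3, 4, ...; starttime is field 22, i.e. the
     20th whitespace-separated token from here.  */
  std::istringstream rest (line.substr (paren + 1));
  std::string tok;
  for (int field = 3; field <= 22; field++)
    if (!(rest >> tok))
      return false;

  char *end;
  *starttime = std::strtoull (tok.c_str (), &end, 10);
  return end != tok.c_str () && *end == '\0';
}
...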

--
¹ https://inbox.sourceware.org/gdb-patches/9680e3cf-b8ad-4329-a51c-2aafb98d9476@palves.net/
Comment 8 Tom de Vries 2024-06-21 10:18:40 UTC
(In reply to Thiago Jung Bauermann from comment #7)
> My machine has many cores but it's an older CPU model (Neoverse N1). These
> numbers show that the POWER10 system has a much higher capacity to churn
> out new threads than my system (no surprise there).  My understanding is
> that GDB is overwhelmed by the constant stream of newly spawned threads
> and takes a while to attach to all of them.
> 

Agreed.

[ FYI, one particular thing about my setup is that I build at -O0.  I just tried at -O2, but I still run into this problem. ]

> As Pedro mentioned elsewhere¹, Linux doesn't provide a way for GDB to stop
> all of a process' threads, or cause new ones to spawn in a
> "ptrace-stopped" state. Without such mechanism, the only way I can see of
> addressing this problem is by making GDB parallelize the job of attaching
> to all inferior threads using its worker threads — i.e., fight fire with
> fire. :)
> 
> That wouldn't be a trivial change though. IIUC it would mean that
> different inferior threads would have different tracers (the various GDB
> worker threads), and GDB would need to take care to use the correct worker
> thread to send ptrace commands to each inferior thread.
> 

One approach could be to have the gdb main thread do only the ptrace bit and offload the rest of the loop to another thread.  But I'm not sure if that actually addresses the bottleneck.

> Another approach would be to see if there's a way to make
> attach_proc_task_lwp_callback () faster, but from reading the code it
> doesn't look like there's anything too slow there — except perhaps the
> call to linux_proc_pid_is_gone (), which reads /proc/$LWP/status. Though
> even that would be just mitigation since the fundamental limitation would
> still be there.
> 

I've played around a bit with this for half a day or so, but didn't get anywhere.

> Alternatively, (considering that the testcase is contrived) can the
> testcase increase the timeout proportionally to the number of CPUs on the
> system?
> 

Or conversely, put a proportional limit on the number of threads in the test-case.

> > dir_entries: 594518^M
> > no_lwp: 4412^M
> > lookup: 119037^M
> > skipped: 118355^M
> > insert: 682^M
> > attach: 471751^M
> > start_over: 2091^M
> > Cannot attach to lwp 2340832: Operation not permitted (1)^M
> > ...
> >
> > I'm not sure what this means, but I do notice the big difference between
> > dir_entries and lookup.  So only 20% of the time we find the starttime and
> > can use the cache.
> 
> I thought that not being able to read starttime from /proc meant that the
> thread was gone. But from the statistics I pasted above, in about 34% of
> the time GDB didn't find the starttime and still was able to attach to all
> but one of the new threads. My understanding is that there's a race
> condition between GDB and the Linux kernel when reading the stat file for
> a newly created thread.
> 
> This is harmless though: if starttime can't be obtained, GDB will try to
> attach to the thread anyway.
> 

Agreed, it's harmless (though slow of course).