This is the mail archive of the gdb@sources.redhat.com mailing list for the GDB project.


Re: why is gdb 5.2 so slow



Both.  Things we do wrong:
- GDB can't handle being told that just one thread is stopped.  If we
could, then we wouldn't have to stop all threads for shared library
events; there's a mutex in the system library, so we don't even have to
worry about someone hitting the breakpoint.  We could also use this to
save time on conditional breakpoints; if we aren't stopping, why stop
all other threads?
[my guess] If the condition fails, we need to thread-hop.  If the condition succeeds, we need to stop all threads anyway.

Knowing that shlibs are wrapped in a mutex is definitely something to exploit.
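
Roughly, the decision being described, in code (all names here are made up; nothing in this sketch is actual GDB source):

/* Sketch of handling a conditional-breakpoint hit without stopping
   every thread.  None of these names are real GDB internals.  */

enum bp_action { BP_THREAD_HOP, BP_STOP_ALL };

struct breakpoint
{
  unsigned long address;
  const char *condition;        /* NULL means unconditional */
};

/* Assumed helper: evaluate CONDITION in the context of thread TID.  */
extern int eval_condition_in_thread (const char *condition, int tid);

static enum bp_action
handle_bp_hit (struct breakpoint *bp, int tid)
{
  if (bp->condition != NULL
      && !eval_condition_in_thread (bp->condition, tid))
    /* Condition failed: step just this thread past the trap and resume
       it; no reason to halt the rest of the process.  */
    return BP_THREAD_HOP;

  /* Unconditional hit, or the condition held: report the stop, which
     today also means stopping every other thread.  */
  return BP_STOP_ALL;
}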

- Removing all breakpoints, that's just wrong; there's a test in
signals.exp (xfailed :P) which shows why.  We should _only_ be removing
the breakpoint at the address we're hopping over.
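
A sketch of what "only remove the breakpoint we're hopping over" might look like; the types and helpers below are stand-ins, not the real breakpoint.c interfaces:

typedef unsigned long CORE_ADDR;

struct bp_location
{
  CORE_ADDR address;            /* where the trap instruction sits        */
  int inserted;                 /* trap currently written into the target */
  struct bp_location *next;
};

/* Assumed helper that restores the original instruction at ADDR.  */
extern void target_remove_trap (CORE_ADDR addr);

/* Disarm just the location the stepping thread is sitting on; every
   other breakpoint stays armed, so other threads cannot sail through
   them unnoticed while we single-step.  */
static void
remove_breakpoint_at (struct bp_location *locs, CORE_ADDR pc)
{
  for (; locs != NULL; locs = locs->next)
    if (locs->inserted && locs->address == pc)
      {
        target_remove_trap (locs->address);
        locs->inserted = 0;
      }
}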

- No memory cache by default.  thread_db spends a LOT of time reading
from the inferior.
Based on a verbal description I was given, I believe that the current dcache model is slightly wrong.  It should behave more like the regcache, viz:
- ask for one register, get back the register file
hence:
- ask for one byte, get back one page, OR
- ask for one byte, mmap the entire target process address space
That way the target decides.
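
Roughly what "ask for one byte, get back one page" could look like; this is a sketch of the idea, not the existing dcache code, and the page size and helper name are assumptions:

#define CACHE_PAGE_SIZE 4096UL

struct mem_page
{
  unsigned long base;                   /* page-aligned target address */
  unsigned char data[CACHE_PAGE_SIZE];
  int valid;
};

/* Assumed target hook: read LEN bytes at ADDR from the inferior,
   returning 0 on success.  */
extern int target_read_raw (unsigned long addr, unsigned char *buf,
                            unsigned long len);

static struct mem_page cache;           /* one-entry cache, for brevity */

static int
cached_read_byte (unsigned long addr, unsigned char *out)
{
  unsigned long base = addr & ~(CACHE_PAGE_SIZE - 1);

  if (!cache.valid || cache.base != base)
    {
      /* Miss: fetch the enclosing page in one target access.  */
      if (target_read_raw (base, cache.data, CACHE_PAGE_SIZE) != 0)
        return -1;
      cache.base = base;
      cache.valid = 1;
    }

  *out = cache.data[addr - base];
  return 0;
}

A one-entry cache is obviously too small in practice, but it shows the shape: the miss path decides the granularity, so a target that wanted to mmap the whole address space could satisfy the same interface.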

HP, long ago, was proposing zero-copy target memory accesses.

- No ptrace READDATA request for most Linux targets to read a large
chunk.  I keep submitting patches for some other ptrace cleanups that
will let me add this one to the kernel, and they keep hitting a blank
wall.  I may start maintaining 2.4 patches publicly and see if people
use them!
Uli (glibc), KevinB, MichaelS, and I happened to be in the same room and talked about this.  /procfs was suggested as an alternative path.  For ptrace(), Uli indicated something about running out of register arguments to use across a system call.
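
For comparison, here is roughly what the two paths look like on Linux today: the word-at-a-time PTRACE_PEEKDATA loop, and the /proc/<pid>/mem alternative mentioned above (error handling trimmed; both assume the process is already ptrace-attached and stopped):

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <unistd.h>

/* One ptrace round trip per word: this is what makes bulk reads slow.  */
static int
read_via_peekdata (pid_t pid, unsigned long addr, void *buf, size_t len)
{
  size_t i;

  for (i = 0; i < len; i += sizeof (long))
    {
      long word;

      errno = 0;
      word = ptrace (PTRACE_PEEKDATA, pid, (void *) (addr + i), NULL);
      if (word == -1 && errno != 0)
        return -1;
      memcpy ((char *) buf + i, &word,
              len - i < sizeof (long) ? len - i : sizeof (long));
    }
  return 0;
}

/* The /proc alternative: one pread() for the whole block.  */
static int
read_via_proc_mem (pid_t pid, unsigned long addr, void *buf, size_t len)
{
  char path[64];
  int fd;
  ssize_t n;

  snprintf (path, sizeof path, "/proc/%d/mem", (int) pid);
  fd = open (path, O_RDONLY);
  if (fd < 0)
    return -1;
  n = pread (fd, buf, len, (off_t) addr);
  close (fd);
  return n == (ssize_t) len ? 0 : -1;
}

Each PTRACE_PEEKDATA call is a full kernel round trip for sizeof(long) bytes, which is exactly the cost being complained about.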

- Too many calls to thread_db in the LinuxThreads case.  It's a nice
generic layer but implemented such that the genericity (? :P) comes
with a severe cost in performance.  We need most of the layer; I've
seen the NGPT support patch for GDB, and it's very simple, precisely
because of this layer.  But we could do staggeringly better if we just
had a guarantee that there was a one-to-one, unchanging LWP<->thread
correspondence (no userspace scheduling etc.).  Both LinuxThreads and
the new NPTL library have this property.  Then we don't need to use
thread_db to access the inferior at all, only to collect new thread
information.
Apparently that guarantee is coming.  Solaris, for instance, is moving back to 1:1.  My instinct is that reducing the system calls will make a far greater improvement than trimming back glibc.
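
With a guaranteed one-to-one, unchanging thread<->LWP mapping, register access could skip libthread_db entirely and go straight to the kernel.  A sketch, assuming x86 Linux:

#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/user.h>

/* With 1:1 threads the LWP id *is* the thread: fetch its registers with
   a single ptrace call instead of routing through td_thr_getgregs() and
   the libthread_db proc_service callbacks.  */
static int
fetch_lwp_registers (pid_t lwp, struct user_regs_struct *regs)
{
  return ptrace (PTRACE_GETREGS, lwp, NULL, regs) == 0 ? 0 : -1;
}

The thread-enumeration side (td_ta_thr_iter and friends) would still go through libthread_db; only the per-access traffic goes away.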

Want a sample of how much difference this last one makes?  In
combination with a bit of my first bullet above, it means we don't
have to stop all threads at a new thread event.  Use gdbserver instead
of GDB.  Its completely from-scratch threads support does not work with
NGPT or any other N:M threading library, but for N:N it is drastically
faster.  The spot that's still slowest is shared library events,
because we can't report that just that thread stopped and ask if we
should stop others (or better, be told by GDB that the breakpoint at
that address is a don't-stop-all-threads breakpoint).
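
The "don't-stop-all-threads breakpoint" idea might be as simple as a per-breakpoint flag the stub consults when a thread traps.  A hedged sketch; nothing here corresponds to the actual remote protocol or the gdbserver source:

/* Hypothetical per-breakpoint stop policy for a stub like gdbserver.
   GDB would mark the shared-library-event breakpoint STOP_ONLY_THIS,
   so hitting it never halts the whole process just to update shlib
   state.  */

enum stop_policy { STOP_ALL_THREADS, STOP_ONLY_THIS };

struct stub_breakpoint
{
  unsigned long address;
  enum stop_policy policy;
};

/* Assumed helpers.  */
extern void stop_all_other_threads (int tid);
extern void report_stop_to_gdb (int tid, unsigned long address);

static void
on_trap (struct stub_breakpoint *bp, int tid)
{
  if (bp->policy == STOP_ALL_THREADS)
    stop_all_other_threads (tid);

  /* Report the stop for TID; with STOP_ONLY_THIS the other threads
     keep running while GDB decides what to do.  */
  report_stop_to_gdb (tid, bp->address);
}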

That's just off the top of my head.  I think there are a few more.


On a bright note, I've also been told that a future Linux kernel is going to support a stop-all-threads primitive so that at least some of the above stupidity can be eliminated.

Some "future"...  I've seen the code in question, I think; it's nice
but no one has had the time to push it properly, so it won't be until
2.7 at the earliest, I'd say.
Andrew


