This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

RE: why I dislike qXfer


> From: Pedro Alves [mailto:palves@redhat.com]
> Sent: Monday, June 13, 2016 2:36 PM
> To: taylor, david; gdb@sourceware.org
> Subject: Re: why I dislike qXfer
> 
> On 06/13/2016 07:15 PM, David Taylor wrote:
> 
> > With the qT{f,s}{STM,P,V} q{f,s}ThreadInfo (and possibly others)
> > interfaces, nothing needs to be precomputed, and I either start at the
> > beginning (f -- first) or where the previous request left off (s --
> > subsequent).
> 
> > I have to store, per connection, my location.  But, there is no random
> > reading.  The next request of that flavor will either start at the
> > beginning (f) or where the last one left off (s).  Reads are sequential.
> 
> If you support non-stop mode, the target is running and the list of threads
> changes as gdb is iterating.  The "location" thread can exit and you're left not
> knowing where to continue from, for example.  To get around that, generate
> a stable snapshot when you get the f request, and serve gdb requests from
> that snapshot.

We are non-stop.  The "location" thread exiting would not be a problem.

Each request, whether first or subsequent, would send one or more complete
thread entries.  When building a reply, you know where in the process
table to start; you skip dead threads and fill in entries until, after
the XML escaping and the GDB escaping, one more complete entry will not
fit.  You record where you stopped -- where to resume.
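A minimal sketch of that reply-building loop, in C.  The process-table
layout, entry format, and size limits here are all hypothetical
stand-ins, not the real stub's; the point is that the only state carried
between requests is one resume index:

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical process-table entry; the real stub's layout differs. */
struct thread_entry {
    bool alive;
    int tid;
    char name[16];
};

#define NTHREADS   8
#define MAX_PACKET 128          /* stand-in for the stub's packet-size limit */

static struct thread_entry table[NTHREADS] = {
    { true, 1, "init" }, { false, 2, "" }, { true, 3, "worker" },
    { true, 4, "idle" }, { false, 5, "" }, { true, 6, "net" },
    { true, 7, "disk" }, { true, 8, "log" },
};

/* Build one reply starting at *idx; advance *idx past the entries sent.
   *idx is the per-connection resume point -- the only saved state.
   Returns the number of bytes placed in buf. */
static int build_reply(char *buf, int bufsize, int *idx)
{
    int len = 0;
    while (*idx < NTHREADS) {
        struct thread_entry *t = &table[*idx];
        if (!t->alive) {        /* skip dead threads */
            (*idx)++;
            continue;
        }
        char entry[64];
        /* The real stub would apply XML escaping and GDB escaping here;
           it is the escaped length that must fit. */
        int n = snprintf(entry, sizeof entry,
                         "<thread id=\"%x\" name=\"%s\"/>", t->tid, t->name);
        if (len + n > bufsize)  /* one more complete entry won't fit: stop */
            break;
        memcpy(buf + len, entry, n);
        len += n;
        (*idx)++;               /* record where to resume */
    }
    return len;
}
```

An f request would reset the connection's index to 0 before calling this;
an s request would call it with the index left by the previous reply.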

We allow an arbitrary number of GDBs to connect to the GDB stub running
in the OS kernel -- each connection gets a dedicated thread.

Currently, we support 320 threads.  This might well increase in the
future.  With the thread name and everything else I want to send back at
their maximum sizes (because that reflects how much space I might need
under the offset & length scheme), I calculate 113 bytes per thread
(counting the <thread> and </thread> tags) -- before escaping.

So, if I 'snapshot' everything every time I get a packet with an offset of 0,
the buffer would need to be over 32K bytes in size (320 threads x 113
bytes = 36,160 bytes).  I don't want to increase the GDB stub stack size
by this much.  So, that means either limiting the number of connections
(fixed, pre-allocated buffers), or using kernel equivalents of malloc
and free (which is discouraged), or coming up with a different approach
-- e.g., avoiding the need for the buffer...

So, in terms of saved state: with the snapshot it is 35-36K bytes; with
the process-table index it is 2-8 bytes.

It's too late now, but I would much prefer interfaces something like:

either
    qfXfer:object:read:annex:length
    qsXfer:object:read:annex:length
or
    qfXfer:object:read:annex
    qsXfer:object:read:annex

[If the :length wasn't part of the spec, then send as much
as you want so long as you stay within the maximum packet size.  My
preference would be to leave off the length, but I'd be happy either way.]
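To make the proposal concrete, here is a hypothetical dispatch sketch in
C.  These qfXfer/qsXfer packets are only a proposal in this message, not
part of the actual remote protocol, and the "threads" object/annex
string is illustrative:

```c
#include <string.h>

/* Per-connection iteration state: where the next 's' reply resumes. */
static int resume_index;

/* Hypothetical dispatch for the proposed packets.  'f' restarts the
   iteration from the beginning; 's' continues where the previous
   reply left off.  No offset arrives from GDB, so no snapshot is
   needed to keep random offsets meaningful. */
static const char *handle_packet(const char *pkt)
{
    if (strncmp(pkt, "qfXfer:threads:read:", 20) == 0) {
        resume_index = 0;           /* 'f': start at the beginning */
        return "serve-from-start";
    }
    if (strncmp(pkt, "qsXfer:threads:read:", 20) == 0)
        return "serve-from-resume"; /* 's': pick up at resume_index */
    return "unhandled";
}
```

With the :length variant, the trailing field would cap the reply size;
without it, the stub simply fills up to the maximum packet size.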

David

