This is the mail archive of the gdb@sourceware.org mailing list for the GDB project.


RE: why I dislike qXfer


> From: Pedro Alves [mailto:palves@redhat.com]
> Sent: Thursday, June 16, 2016 2:25 PM
> To: taylor, david; gdb@sourceware.org
> Subject: Re: why I dislike qXfer
> 
> On 06/16/2016 06:42 PM, taylor, david wrote:
> >
> >> From: Pedro Alves [mailto:palves@redhat.com]
> 
> >
> > We allow an arbitrary number of GDBs to connect to the GDB stub
> > running in the OS kernel -- each connection gets a dedicated thread.
> >
> > Currently, we support 320 threads.  This might well increase in the
> > future.  With thread name and everything else I want to send back at
> > the maximum (because that reflects how much space I might need under
> > the offset & length scheme), I calculate 113 bytes per thread (this
> > counts <thread> and </thread>) to send back -- before escaping.
> >
> > So, if I 'snapshot' everything every time I get a packet with an
> > offset of 0, the buffer would need to be over 32K bytes in size.  I
> > don't want to increase the GDB stub stack size by this much.  So, that
> means either limiting the number of connections (fixed, pre-allocated
> > buffers) or using kernel equivalents of malloc and free (which is
> > discouraged) or coming up with a different approach -- e.g., avoiding
> > the need for the buffer...
> 
> So a workaround that probably will never break is to adjust your stub to
> remember the xml fragment for only one (or a few) threads at a time, and
> serve off of that.  That would only be a problem if gdb "goes backwards",
> i.e., if gdb requests a lower offset (other than 0) than the previously
> requested offset.

What I was thinking of doing was having no saved entries or, depending on
GDB details yet to be discovered, one saved entry.

Talk to the core OS people about prohibiting characters that require quoting
from occurring in the thread name.

Compute the maximum potential size of an entry with no padding.

Do arithmetic on the offset to figure out which process table entry to start with.

Do arithmetic on the length to figure out how many entries to process.

Pad each entry at the end with spaces to bring it up to the maximum size.

For dead threads, fill the entry with spaces.

Report done ('l') when there are no more live threads between the current
position and the end of the process table.
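
Roughly, in C (just a sketch -- proc_table, format_thread, and any_live_after
are made-up names, the enclosing <threads>/</threads> wrapper is ignored, and
replies are assumed to always contain whole entries so that offsets land on
entry boundaries):

    #define ENTRY_MAX 113   /* worst-case bytes for one space-padded entry */
    #define NPROC     320   /* current size of the process table */

    /* Fill BUF with at most LEN bytes of the threads object starting at
       OFFSET.  Returns the byte count; sets *DONE when no live threads
       remain between the current position and the end of the table.  */
    static unsigned int
    threads_read (char *buf, unsigned int offset, unsigned int len, int *done)
    {
      unsigned int first = offset / ENTRY_MAX;   /* which entry to start with */
      unsigned int count = len / ENTRY_MAX;      /* how many whole entries fit */
      unsigned int i, n = 0;

      for (i = first; i < first + count && i < NPROC; i++)
        {
          if (proc_table[i].live)
            format_thread (buf + n, &proc_table[i]); /* pads to ENTRY_MAX */
          else
            memset (buf + n, ' ', ENTRY_MAX);        /* dead thread: all spaces */
          n += ENTRY_MAX;
        }

      *done = !any_live_after (i);   /* answer 'l' instead of 'm' */
      return n;
    }

The only saved state is whatever the process table already holds; the offset
arithmetic replaces the 35-36K snapshot.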

> The issue is that qXfer was originally invented for (binary) target objects for
> which gdb wants random access.  However, "threads" and a few other target
> objects are xml based.  And for those, it must always be that gdb reads the
> whole object, or at least reads it sequentially starting from the beginning.  I
> can well imagine optimizations where gdb processes the xml as it is reading it
> and stops reading before reaching EOF.  But that wouldn't break the
> workaround.

The qXfer objects for which I am thinking of implementing stub support fall into
two categories:

. small enough that I would expect GDB to read it in toto in one chunk.
  For example, auxv.  Initially, I will likely have two entries (AT_ENTRY, AT_NULL);
  6 or 7 others might get added later.  Worst case, it all easily fits in one packet
  (see the sketch after this list).

. larger, with structure and possibly variable-length elements -- where I would
  expect multiple sequential reads starting at the beginning and continuing
  until everything is read.  For example, threads with no padding and skipping
  dead threads.
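
For the first case, the whole auxv object on a 64-bit target is just a couple
of 16-byte records -- something like this sketch (the 9 and 0 are the standard
ELF AT_ENTRY and AT_NULL values; everything else is made up):

    #include <stdint.h>
    #include <string.h>

    struct auxv_entry { uint64_t a_type; uint64_t a_val; };

    /* Build the entire auxv object: AT_ENTRY plus the AT_NULL terminator.
       32 bytes total, so it trivially fits in one packet.  */
    static unsigned int
    build_auxv (unsigned char *buf, uint64_t entry_addr)
    {
      struct auxv_entry v[2] = { { 9 /* AT_ENTRY */, entry_addr },
                                 { 0 /* AT_NULL  */, 0 } };
      memcpy (buf, v, sizeof v);
      return sizeof v;
    }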

> Starting a read somewhere in the middle of the file could be possible too, but
> it'd require understanding how to skip until some xml element starts and
> ignore the fact that the file wouldn't validate.  Plus gdb doesn't know the size
> of the file until it reads it fully, so we'd either need some other way to determine
> that, or make gdb take guesses.
> So I'm not seeing this happening anytime soon.

But, alas, the community won't commit to it.

> > So, in terms of saved state, with the snapshot it is 35-36K bytes,
> > with the process table index it is 2-8 bytes.
> >
> > It's too late now, but I would much prefer interfaces something like:
> >
> > either
> >     qfXfer:object:read:annex:length
> >     qsXfer:object:read:annex:length
> > or
> >     qfXfer:object:read:annex
> >     qsXfer:object:read:annex
> >
> > [If the :length wasn't part of the spec, then send as much as you want
> > so long as you stay within the maximum packet size.  My preference
> > would be to leave off the length, but I'd be happy either way.]
> 
> What would you do if the object to retrieve is larger than the maximum
> packet size?

Huh?  qfXfer would read the first part; each subsequent qsXfer would read
the next chunk.  If you wanted to think of it in offset/length terms, the offset
for qfXfer would be zero; for qsXfer it would be the sum of the sizes (ignoring
GDB escaping modifications) of the replies to the qfXfer and to any qsXfers that
occurred after the qfXfer and before this qsXfer.
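
For example, a made-up exchange under that proposal (thread ids and names
invented; I'm assuming the length stays in hex and the replies keep the
existing 'm'/'l' prefixes):

    -> qfXfer:threads:read::1000
    <- m<threads><thread id="1" name="idle"/><thread id="2"
    -> qsXfer:threads:read::1000
    <- m name="timer"/> [...more entries...]
    -> qsXfer:threads:read::1000
    <- l<thread id="140" name="shell"/></threads>

The stub just remembers where the previous reply left off; note that the second
<thread> element is split across the first two replies.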

As now, sub-elements (e.g. <thread> within <threads>) could be contained within
one packet or split across multiple packets.  GDB would put the replies together
in the order received, with no white space or anything else between them, and
pass the result off to its XML processing.

Or do I not understand your question?

> Thanks,
> Pedro Alves

