This is the mail archive of the mailing list for the GDB project.


GDB/remote: RSP `g' packet size, advice sought


 I've been struggling with a problem with the RSP `g' packet size: once 
initialised, it may only be further shrunk and never expanded again.  
The issue is easily reproduced with the MIPS target under QEMU, e.g.:

(gdb) target remote | /path/to/mips-linux-gnu-qemu-system -s -S -p stdio -M mipssim -cpu 24Kc --semihosting --monitor null --serial null -kernel /dev/null
Remote debugging using | /path/to/mips-linux-gnu-qemu-system -s -S -p stdio -M mipssim -cpu 24Kc --semihosting --monitor null --serial null -kernel /dev/null
0x00100000 in ?? ()
(gdb) target remote | /path/to/mips-linux-gnu-qemu-system -s -S -p stdio -M mipssim -cpu 24Kf --semihosting --monitor null --serial null -kernel /dev/null
A program is being debugged already.  Kill it? (y or n) y

QEMU: Terminated via GDBstub

Remote debugging using | /path/to/mips-linux-gnu-qemu-system -s -S -p stdio -M mipssim -cpu 24Kf --semihosting --monitor null --serial null -kernel /dev/null
Remote 'g' packet reply is too long: 

This is because the 24Kc has no FPU while the 24Kf does, so the latter 
produces a longer `g' reply packet that includes the extra FPU state.  
However, the remote backend has already shrunk its `g' packet buffer 
size while talking to the 24Kc and cannot expand it back.  The only way 
to recover is to restart GDB from scratch.

 I have tracked down the cause to be the way the remote backend 
initialises the `g' packet size.  It's only done in init_remote_state, 
which is called once, when the gdbarch data is initialized.  The initial 
size is calculated based on the maximum number of registers supported by 
the target architecture (gdbarch):

  rsa->regs = GDBARCH_OBSTACK_CALLOC (gdbarch,
				      gdbarch_num_regs (gdbarch),
				      struct packet_reg);
  rsa->sizeof_g_packet = map_regcache_remote_table (gdbarch, rsa->regs);

Then this is further adjusted whenever a `g' packet reply is received:

  if (buf_len > 2 * rsa->sizeof_g_packet)
    error (_("Remote 'g' packet reply is too long: %s"), rs->buf);
  if (buf_len < 2 * rsa->sizeof_g_packet)
    rsa->sizeof_g_packet = buf_len / 2;

-- which, as quoted, is where the error message comes from.  (buf_len 
counts hex digits and the reply encodes two per register byte, hence the 
factor of two.)

 This certainly has to be fixed, but the question is: why do we only ever 
initialise sizeof_g_packet once?  I agree we should shrink it according 
to the particular remote stub's needs, as we need to get the size of the 
corresponding `G' packet right.  However, it looks to me like we should 
really reset sizeof_g_packet back to its initial value whenever a remote 
connection is terminated.  Or it should really be enough to do this in 
remote_open_1, when a new connection is made.

 Is there any particular reason why we're not doing this?  It seems 
obvious, and the assumption that any subsequent remote connection will 
only ever produce `g' packets of the same size or smaller seems a little 
odd to me.

 It looks to me like remote_open_1 should simply call init_remote_state 
directly, and remote_close should call a complement that we don't have 
yet to free the structures allocated in init_remote_state.  Does my 
understanding seem reasonable?

 Thanks for any feedback.

