This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH 2/2] S390: Fix gdbserver support for TDB


On 12/03/2014 06:18 PM, Andreas Arnez wrote:
> On Tue, Dec 02 2014, Andreas Arnez wrote:
>> On Tue, Dec 02 2014, Pedro Alves wrote:
>>> It probably doesn't hurt to be explicit, but I should note that
>>> registers are unavailable by default on gdbserver, so an
>>> 'if (buf == NULL) return;' probably would do as well:
>>>
>>> struct regcache *
>>> init_register_cache (struct regcache *regcache,
>>> 		     const struct target_desc *tdesc,
>>> 		     unsigned char *regbuf)
>>> ...
>>>       regcache->register_status = xcalloc (1, tdesc->num_registers);
>>>       gdb_assert (REG_UNAVAILABLE == 0);
>>
>> In general, if a prior call to fetch_inferior_registers has filled the
>> regset already, I'd expect the store function to reset the registers to
>> "unavailable" again.  Otherwise we may have stale leftovers from
>> before.
> 
> Hm, I noticed that this probably deserves some more explanation.
> 
> While it is true that the registers are marked unavailable when
> initializing a new regcache, the regcache seems to survive without
> another initialization between calls to fetch_inferior_registers.  I've
> verified this in my tests, and I've also not seen any code that would
> perform such a re-initialization.

Hmm, good find.

> 
> I wonder why that is the case, and whether we would like to change that.

The "why" is just that nothing ever stumbled on this.  The case that
required teaching gdbserver about unavailable registers is tracepoint
traceframes, where the regcache is always a new one:

server.c:

    case 'g':
      require_running (own_buf);
      if (current_traceframe >= 0)
	{
	  struct regcache *regcache
	    = new_register_cache (current_target_desc ());


> If so, the patch could avoid touching ARM code, wouldn't need special
> treatment of NULL in the TDB store function, and would treat ENODATA
> like any other error from ptrace, except that the warning would be
> suppressed.  I think this would also improve the behavior of other
> errors from ptrace, but maybe there's a good reason for falling back to
> the "last known" register values in this case.  Or maybe there's a
> performance reason for avoiding the re-initialization?
> 
> For illustration, why don't we do something like the (untested) patch
> below?
> 
> --
> diff --git a/gdb/gdbserver/regcache.c b/gdb/gdbserver/regcache.c
> index 718ae8c..b0f6a22 100644
> --- a/gdb/gdbserver/regcache.c
> +++ b/gdb/gdbserver/regcache.c
> @@ -52,6 +52,8 @@ get_thread_regcache (struct thread_info *thread, int fetch)
>        struct thread_info *saved_thread = current_thread;
>  
>        current_thread = thread;
> +      memset (regcache->register_status, REG_UNAVAILABLE,
> +	      regcache->tdesc->num_registers);

This makes sense to me; it's similar to GDB's own handling.
See gdb/regcache.c:regcache_raw_read (and regcache_invalidate).

Can you check the patch on x86 too, please?  You'll need the
same #ifdef guard as init_register_cache uses; s390
doesn't build the IPA.

>        fetch_inferior_registers (regcache, -1);
>        current_thread = saved_thread;
>        regcache->registers_valid = 1;

Thanks,
Pedro Alves

