This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.



Re: [PATCH v2] Optimize memory_xfer_partial for remote


On 06/27/2016 09:22 PM, Don Breazeal wrote:

>>> +/* The default implementation for the to_get_memory_xfer_limit method.
>>> +   The hard-coded limit here was determined to be a reasonable default
>>> +   that eliminated exponential slowdown on very large transfers without
>>> +   unduly compromising performance on smaller transfers.  */
>>
>> Where's this coming from?  Is this new experimentation you did,
>> or are you talking about Anton's patch?
> 
> Both.  I did some experimentation to verify that things were significantly
> slower without any memory transfer limit, which they were, although I never
> reproduced the extreme scenario Anton had reported.  Presumably the
> performance differences were due to hardware and environment differences.
> Regarding the comment, I thought some explanation of the hard-coded number
> was appropriate.  Is there a better way to do this, e.g. refer to the
> commit hash, or does it just seem superfluous?

OK, you didn't mention this experimentation, which left me wondering.
Particularly, the mention of "exponential" is what most made me pause,
as it's a qualifier not mentioned elsewhere.

I guess my main problem with the comment is that reading it in
isolation, one has no clue what would cause the slowdown (normally
transferring more at a time is faster!), and thus no way to reevaluate
the default in the future.  How about extending it to something like:

/* The default implementation for the to_get_memory_xfer_limit method.
   The hard-coded limit here was determined to be a reasonable default
   that eliminated exponential slowdown on very large transfers without
   unduly compromising performance on smaller transfers.  
   This slowdown is mostly caused by memory writing routines doing
   unnecessary work upfront when large requests usually end up only
   partially satisfied.  See memory_xfer_partial's handling of
   breakpoint shadows.  */
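
For concreteness, the default implementation could then be little more
than returning that constant.  A minimal sketch -- the function name and
the 16 KiB value below are just placeholders, not the figure from your
patch:

static ULONGEST
default_get_memory_xfer_limit (struct target_ops *self)
{
  /* Placeholder cap, purely for illustration; the real default should
     use whatever value the experimentation settled on.  */
  return 16 * 1024;
}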

Actually, I was going to approve this with that change, but another
thought crossed my mind, sorry...

I assume you did this experimentation with remote targets?  But this default
will never be used by those, so that experimentation says nothing about
native targets, where the default actually applies?  Actually, the whole
capping is probably pointless for native targets, since there's really no
marshalling and thus no limit.  That'd suggest making the target method
return "-1" or some such to indicate there's no limit.  WDYT?
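
To be concrete, I'm thinking of something like the below at the top of
memory_xfer_partial.  Just a sketch -- "(ULONGEST) -1" stands for
whatever "no limit" sentinel we end up picking:

  ULONGEST limit = ops->to_get_memory_xfer_limit (ops);

  /* A sentinel of (ULONGEST) -1 would mean "don't cap the request".  */
  if (limit != (ULONGEST) -1 && len > limit)
    len = limit;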

Thanks,
Pedro Alves

