This is the mail archive of the
gdb-patches@sourceware.org
mailing list for the GDB project.
Re: [PATCH v2] Optimize memory_xfer_partial for remote
- From: Pedro Alves <palves at redhat dot com>
- To: Don Breazeal <donb at codesourcery dot com>, gdb-patches at sourceware dot org, qiyaoltc at gmail dot com
- Date: Thu, 30 Jun 2016 18:06:34 +0100
- Subject: Re: [PATCH v2] Optimize memory_xfer_partial for remote
- Authentication-results: sourceware.org; auth=none
- References: <1467058970-62136-1-git-send-email-donb at codesourcery dot com>
On 06/27/2016 09:22 PM, Don Breazeal wrote:
>>> +/* The default implementation for the to_get_memory_xfer_limit method.
>>> + The hard-coded limit here was determined to be a reasonable default
>>> + that eliminated exponential slowdown on very large transfers without
>>> + unduly compromising performance on smaller transfers. */
>>
>> Where's this coming from? Is this new experimentation you did,
>> or are you talking about Anton's patch?
>
> Both. I did some experimentation to verify that things were significantly
> slower without any memory transfer limit, which they were, although I never
> reproduced the extreme scenario Anton had reported. Presumably the
> performance differences were due to hardware and environment differences.
> Regarding the comment, I thought some explanation of the hard-coded number
> was appropriate. Is there a better way to do this, e.g.
> refer to the commit hash, or does it just seem superfluous?
OK, you didn't mention this experimentation, which left me wondering.
Particularly, the mention of "exponential" is what most made me pause,
as it's a qualifier not mentioned elsewhere.
I guess my main problem with the comment is that by reading it in
isolation, one has no clue what would cause the slowdown (normally
transferring more at a time is faster!), and thus no way to reevaluate
the default in the future. How about extending it to something like:
/* The default implementation for the to_get_memory_xfer_limit method.
The hard-coded limit here was determined to be a reasonable default
that eliminated exponential slowdown on very large transfers without
unduly compromising performance on smaller transfers.
This slowdown is mostly caused by memory writing routines doing
unnecessary work upfront when large requests end up usually
only partially satisfied. See memory_xfer_partial's handling of
breakpoint shadows. */
Actually, I was going to approve this with that change, but another
thought crossed my mind, sorry...
I assume you did this experimentation with remote targets? But this default
will never be used with those, so that experimentation tells us nothing
about native targets. Actually, the whole capping is probably pointless for
native targets, since there's really no marshalling and thus no limit.
That'd suggest making the target method return "-1" or some such
to indicate there's no limit. WDYT?
Thanks,
Pedro Alves