This is the mail archive of the gdb-patches@sourceware.org mailing list for the GDB project.
[PATCH 3/3] Optimize byte-aligned copies in copy_bitwise()
- From: Andreas Arnez <arnez at linux dot vnet dot ibm dot com>
- To: gdb-patches at sourceware dot org
- Date: Mon, 14 Nov 2016 16:02:26 +0100
- Subject: [PATCH 3/3] Optimize byte-aligned copies in copy_bitwise()
- Authentication-results: sourceware.org; auth=none
- References: <1479135786-31150-1-git-send-email-arnez@linux.vnet.ibm.com>
The function copy_bitwise, used for copying DWARF pieces, can potentially be invoked for large chunks of data. For instance, consider a large struct, one of whose members currently resides in a register. In that case copy_bitwise still copies the data bit by bit in a loop, which is much slower than necessary.
This change uses memcpy for the bulk of the copy instead, when possible.
gdb/ChangeLog:
* dwarf2loc.c (copy_bitwise): Use memcpy for the middle part, if
it is byte-aligned.
---
gdb/dwarf2loc.c | 27 +++++++++++++++++++++++----
1 file changed, 23 insertions(+), 4 deletions(-)
diff --git a/gdb/dwarf2loc.c b/gdb/dwarf2loc.c
index 3a241a8..26f6bd8 100644
--- a/gdb/dwarf2loc.c
+++ b/gdb/dwarf2loc.c
@@ -1547,11 +1547,30 @@ copy_bitwise (gdb_byte *dest, ULONGEST dest_offset,
     {
       size_t len = nbits / 8;
 
-      while (len--)
+      /* Use a faster method for byte-aligned copies.  */
+      if (avail == 0)
 	{
-	  buf |= *(bits_big_endian ? source-- : source++) << avail;
-	  *(bits_big_endian ? dest-- : dest++) = buf;
-	  buf >>= 8;
+	  if (bits_big_endian)
+	    {
+	      dest -= len;
+	      source -= len;
+	      memcpy (dest + 1, source + 1, len);
+	    }
+	  else
+	    {
+	      memcpy (dest, source, len);
+	      dest += len;
+	      source += len;
+	    }
+	}
+      else
+	{
+	  while (len--)
+	    {
+	      buf |= *(bits_big_endian ? source-- : source++) << avail;
+	      *(bits_big_endian ? dest-- : dest++) = buf;
+	      buf >>= 8;
+	    }
 	}
       nbits %= 8;
     }
--
2.3.0