This is the mail archive of the mailing list for the GDB project.


Re: [PATCH 2/2] arm-tdep: sort mapping symbols after parsing all minimal symbols

> On 25 Jun 2019, at 02:37, Simon Marchi <> wrote:
> Somebody on IRC reported a while ago that loading a big ARM program in
> GDB was very slow.  Their profiling pointed out that a big amount of
> time was spent in
>   VEC_safe_insert (arm_mapping_symbol_s, *map_p, idx, &new_map_sym);
> I was able to verify this as well.
> ARM mapping symbols are special ELF symbols named $a, $d and $t
> indicating that symbols starting at this address up to the next mapping
> symbol (in terms of address) are of type "ARM code", "data" and "Thumb
> code", respectively.  GDB records these symbols in vectors (one for each
> section) in arm-tdep.c.  These vectors are sorted by symbol address, to
> allow for quick lookup.  The current approach is to insert new symbols
> at the right position to keep the vectors sorted at all time.  This is
> done based on the assumption that mapping symbols come already almost
> sorted from the binary, as explains this comment in
> arm_record_special_symbol:
> /* Assume that most mapping symbols appear in order of increasing
>    value.  If they were randomly distributed, it would be faster to
>    always push here and then sort at first use.  */

I’ve been wondering where this assumption came from, and whether there is any
evidence to back it up, one way or the other.

Sadly, the original patch doesn’t give any more clues (it was pushed without
any obvious review):

I had a look at some binaries:
* gdb itself and binaries built by the test suite - random order.
* system binaries on AArch32 Linux - no mapping symbols.

I then looked at a binary built with the Arm Compiler, and found that the
symbols are in increasing numerical order.  That might explain where the
assumption came from.

> Well, it turns out this is not the case.  The original reporter
> mentioned that mapping symbols in their binaries are not nearly sorted,
> and this is not my experience either (at least in the binary used in the
> benchmark below).  So if the values don't come nearly sorted, doing
> insertions to keep the vectors sorted ends up being of the order of
> number_of_mapping_symbols ^ 2.
> This patch changes it just like the comment above says, to just append
> to the vector in arm_record_special_symbol and sort the vector on first
> use.
> Benchmark
> =========
> I have done some benchmarks using an --enable-targets=all GDB, compiled
> with -O2, running on x86-64 and parsing file
> dce18d22e5c2ecb6a3a57372f4e6ef614130bc.debug from this package:
> This file is the separate debug info for (part of firefox) for
> ARM.
> I have added some traces to measure the execution time of just
> elf_symtab_read and ran GDB like this:
> ./gdb --data-directory=data-directory -nx -batch .../path/to/usr/lib/debug/.build-id/65/dce18d22e5c2ecb6a3a57372f4e6ef614130bc.debug
> Since the new code sorts the vectors on first use, it would be difficult
> to benchmark it as-is and be fair, since the "before" version does more
> work in elf_symtab_read.  So I have actually benchmarked a version of
> the patch that did sort all the vectors at the end of elf_symtab_read,
> so the sorting would be considered in the measured execution time.
> Here's the measured execution time of elf_symtab_read, averaged on 3
> runs:
> insert sorted (before): 28.678s
> sort after (after):      1.760s
> And here's the total execution time of the command above (just one run).
> The time is now mostly spent in reading DWARF.
> insert sorted: 71.12s user 2.71s system 99% cpu 1:14.03 total
> sort after:    46.42s user 2.60s system 99% cpu  49.147 total
> I tried for fun on my Raspberry Pi 3, the run time of
> elf_symtab_read goes from ~259s to ~9s, reading the same file.

That’s quite a significant change!

What would the effect be with your code if the symbol addresses were already
sorted - would we expect a slowdown compared with the existing version?

I built this patch series and ran the test suite on an AArch32 box, and
saw no regressions.

As far as the code changes go, the patch below LGTM.

I also looked at the previous patch; I’m not an expert on std::vector, but those
changes LGTM too.
