Range lists, zero-length functions, linker gc

David Blaikie dblaikie@gmail.com
Tue Jun 2 18:06:10 GMT 2020


On Tue, Jun 2, 2020 at 9:50 AM Mark Wielaard <mark@klomp.org> wrote:
>
> Hi,
>
> On Mon, 2020-06-01 at 13:18 -0700, David Blaikie wrote:
> > On Mon, Jun 1, 2020 at 2:31 AM Mark Wielaard <mark@klomp.org> wrote:
> > > Each skeleton compilation unit has a DW_AT_dwo_name attribute which
> > > indicates the .dwo file where the split unit sections can be found. It
> > > actually seems easier to generate a different one for each
> > > skeleton compilation unit than trying to combine them for all the
> > > different skeleton compilation units you produce.
> > >
> > > > Certainly Bazel (& the internal Google version used to build most
> > > > Google software) can't handle an unbounded/unknown number of output
> > > > files from a build action.
> > >
> > > Yes, in principle .dwo files seem troublesome for build systems in
> > > general.
> >
> > They're pretty practical when they're generated right next to the .o
> > file & that's guaranteed by the compiler. "if you generate x.o, there
> > will be x.dwo next to it" - that's certainly how Bazel deals with
> > this. It doesn't parse the DWARF at all - knowing where the .dwo files
> > are along with the .o files.
>
> The DWARF spec makes it clear that a DWO is per CU, not per object
> file. So when an object file contains multiple CUs, it might also be
> associated with multiple .dwo files (as is also the case with a linked
> executable or shared library). The spec says that DW_AT_dwo_name
> can contain either a (relative) file name or a path to the associated
> DWO file. Which means that relying on a one-to-one mapping from .o to
> .dwo is fragile and is likely to break when tools start using multiple
> CUs or different naming heuristics.

Yep, agreed - in the most general form there's no guarantee that one
compilation would produce one .dwo and you'd have to parse the .o to
find all the associated .dwos. Practically speaking that's not the
reality right now (build systems rely on stronger/narrower guarantees
by the compiler about how many/where the .dwo files are).
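To illustrate the narrower guarantee build systems lean on, here is a minimal sketch (assuming a gcc or clang that supports -gsplit-dwarf; the file names are made up):

```shell
# Sketch: with -gsplit-dwarf the compiler writes x.dwo next to x.o.
# That side-by-side placement is what Bazel relies on - no DWARF
# parsing is needed to locate the .dwo files.
cat > x.c <<'EOF'
int main(void) { return 0; }
EOF
gcc -g -gsplit-dwarf -c x.c -o x.o
ls x.o x.dwo   # both files sit in the same directory
```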

> > > Because of that I am
> > > actually a fan of the SHF_EXCLUDED hack that simply places the split
> > > .dwo sections in the same object file. For the above that would mean,
> > > just place them in the same section group.
> >
> > This was a newer feature added during standardization of Split DWARF,
> > which is handy for some users
>
> Although it is used in practice by some producers, it is not
> standardized (yet). Also because SHF_EXCLUDED isn't standardized
> (although it is used consistently for those arches that support it).

Ah, sorry, I didn't mean the specific implementation strategy of using
SHF_EXCLUDED - I meant that the general concept of having a .o file be
its own .dwo file is standardized: "The sections that do not require
relocation, however, can be written to the relocatable object (.o)
file but ignored by the linker, or they can be written to a separate
DWARF object (.dwo) file that need not be accessed by the linker."
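That ".o as its own .dwo" shape is what the SHF_EXCLUDED trick implements. A rough sketch, assuming clang's single-file split-DWARF mode (gcc doesn't spell it this way):

```shell
# Sketch (assumes clang with -gsplit-dwarf=single): the .dwo sections
# stay inside the .o, carrying the exclude flag ("E" in readelf's
# Flags column) so the linker skips them.
cat > y.c <<'EOF'
int f(void) { return 1; }
EOF
clang -g -gsplit-dwarf=single -c y.c -o y.o
readelf -WS y.o | grep '\.dwo'   # .debug_info.dwo, .debug_str.dwo, etc.
```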

> >  - but doesn't address the needs of the
> > original design of Split DWARF (for Google) - a distributed build
> > system that is trying to avoid moving more bytes than it must to one
> > machine to run the link step. So not having to ship all the DWARF
> > bytes to one machine for interactive debugging (pulling down from a
> > distributed file system only the needed .dwo files during debugging -
> > not all of them) - or at least being able to ship all the .dwo files
> > to one machine to make a .dwp, and ship all the .o files to another
> > machine for the link.
>
> I think that is not what most people would use split-dwarf for.

Probably not - but it's the use case I care about/need to support.

>  The
> Google setup seems somewhat unique. Most people probably do compiling,
> linking and debugging on the same machine. The main use case (for me)
> is to speed up the edit-compile-debug cycle. Making sure that the
> linker doesn't have to deal with (most of) the .debug sections and can
> just leave them behind (either in the .o file, or a separate .dwo file)
> is the main attraction of split-dwarf IMHO. When actually producing
> production builds with debug you still pay the price anyway, because
> instead of the linker, you now need to build your dwp packages which
> does most of the same work the linker would have done anyway (combining
> the data, merging the string indexes, deduplicating debug types, etc.)

It's still a price you can parallelize, rather than having to
serialize (somewhat - lld is multithreaded for instance). And the dwp
support for linking other dwp files together means you can do it
iteratively (rather than taking all the .dwo files and doing one big
link step - you can take a few dwos, link them into an intermediate
dwp (removing duplicate type information and strings) then link again
with other intermediate dwps, etc - with some distribution/parallelism
benefits).
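The iterative shape might look roughly like this (tool name assumed: llvm-dwp; binutils dwp takes similar flags; the file names are hypothetical):

```shell
# Sketch of tree-style dwp reduction: intermediate .dwp files are
# themselves valid inputs to a later dwp step, so shards can be
# built on different machines and merged at the end.
llvm-dwp a.dwo b.dwo -o ab.dwp      # shard 1
llvm-dwp c.dwo d.dwo -o cd.dwp      # shard 2
llvm-dwp ab.dwp cd.dwp -o prog.dwp  # final merge of intermediates
```

Each merge step deduplicates type units and strings, which is where the parallelism pays off.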

> > > > Multiple CUs in a single .dwo file is not really supported, which
> > > > would be another challenge (we had to compromise debug info quality a
> > > > little because of this limitation when doing ThinLTO - unable to emit
> > > > multiple CUs into each thin-linked .o file) - at which point maybe the
> > > > compiler'd need to produce an intermediate .dwp file of sorts...
> > >
> > > Are you sure?
> >
> > Fairly sure - I worked in depth on the implementation of ThinLTO &
> > considered a variety of options trying to support Split DWARF in that
> > situation.
> >
> > >  Each CU would have a separate dwo_id field to
> > > distinguish them. At least that is how elfutils figures out which CU
> > > in a dwo file matches a given skeleton DIE. This should work the same
> > > as for type units: you can have multiple type units in the same file
> > > and distinguish which one you need by matching the signature.
> >
> > One of the complications is that it increased the complexity of making
> > a .dwp file - Split DWARF is spec'd to ensure that the linking process
> > is as lightweight as possible. Not having the size overhead of
> > relocations (though trading off more indirection through the cu_index,
> > debug_str_offsets, etc). Oh right... that was the critical issue:
> > There was no way I could think of to do cross-CU references in Split
> > DWARF (cross-CU references being critical to LTO - inlining from one
> > CU into another, etc). Because there was no relocation processing in
> > dwp generation. Arguably maybe one could use a sec_offset that's
> > resolved relative to a local range within the contributions described
> > by the cu_index - but the cu_index must have one entry per unit (the
> > entries are keyed on unit) - I guess you could have a single entry per
> > CU, but have those entries overlap (so all the CUs from one dwo file
> > get separate index entries that contain the same contribution ranges).
> > Then consumers would have to search through the debug_info
> > contribution to find the right unit.... defeating some of the value of
> > the index.
>
> I think we are drifting somewhat away from the original topic and/or
> are talking past each other. We somehow combined the topics of doing
> LTO with using Split DWARF, while we started with whether a DWARF
> producer like a compiler that generated separate functions in separate
> ELF sections could also generate the associated DWARF in separate
> sections. I believe it can, and it can even do so when generating Split
> DWARF. You see some practical issues, especially when combining an LTO
> build together with generating Split DWARF. But before we try to
> resolve those issues, maybe we should take a step back and see which
> issue we are really trying to solve.
>
> I do think combining Split DWARF and LTO might not be the best
> solution. When doing LTO you probably want something like GCC Early
> Debug, which is like Split DWARF, but different, because the Early
> Debug simply doesn't contain any address (ranges) yet (not even through
> indirection like .debug_addr).

I don't think Early Debug fits here - it seems like it was
specifically for DWARF that doesn't refer to any code (eg: function
declarations and type definitions). I don't see how it could be used
for the actual address-referencing DWARF needed to describe function
definitions.

> > > > & again the overhead of all those separate contributions, headers,
> > > > etc, turns out to be not very desirable in any case.
> > >
> > > Yes, I agree with that. But as said earlier, maybe the compiler
> > > shouldn't have generated the code/data in the first place?
> >
> > In the (especially) C++ compilation model, I don't believe that's
> > possible - inline functions, templates, etc, require duplication -
> > unless you have a more complicated build process that can gather the
> > potential duplication, then fan back out again to compile, etc.
> > ThinLTO does some of this - at a cost of a more complicated build
> > system, etc.
>
> It might be useful for the original discussion to have a few more
> concrete examples to show when you might have unused code that the
> linker might want to discard, but where the compiler could only produce
> DWARF in one big blob. Apart from the -ffunction-sections case,

Function sections, inline functions, and function templates are the core examples.

> where I
> would argue the compiler simply needs to make sure that if it generates
> code in separate sections it also should create the DWARF separate
> section (groups).

I don't think that's practical - the overhead, I believe, is too high.
Headers for each section contribution (ELF headers, but DWARF headers
even more so - having a separate .debug_addr, .debug_line, etc.
section for each function would be very expensive) would make for
very large object files.
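The asymmetry is easy to see today - a sketch assuming gcc and binutils readelf: -ffunction-sections splits the code per function, while the DWARF for the whole CU stays in a single contribution:

```shell
# Sketch: code is split into one section per function, but the debug
# info remains one blob for the whole compilation unit.
cat > z.c <<'EOF'
int f(void) { return 1; }
int g(void) { return 2; }
EOF
gcc -g -ffunction-sections -c z.c -o z.o
readelf -WS z.o | grep ' \.text\.'     # .text.f and .text.g
readelf -WS z.o | grep ' \.debug_info' # a single .debug_info section
```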

- Dave


More information about the Elfutils-devel mailing list