[Proposed patch] Huge performance regression in ld -r since binutils >= 2.21
Mon Nov 9 09:05:00 GMT 2015
On Mon, 9 Nov 2015, Alan Modra wrote:
> On Fri, Nov 06, 2015 at 05:22:56PM +0100, Romain Geissler wrote:
> > When we create the first-level module1.o and module2.o, everything works
> > fine: ld -r takes less than 10 seconds. However, that creates huge .o
> > files, with huge debug sections containing millions of relocs. Later,
> > when we build module3.o from module1.o and module2.o, the memmove happens
> > on a huge relocs array, and ld ends up consuming 100% CPU for minutes.
> > With binutils 2.25.1 a single link operation takes more than 5 minutes,
> > compared to 10 seconds with ld 2.20.
> I've been telling people for at least 10 years not to use ld -r as a
> packaging tool, so I'm tempted to say this is a good result...
Well, indeed there was recently a regression in ld -r affecting kernel
people (the stable-sort patch), and you replied exactly that. I know you
advocate against using it, but may I ask why? What is the problem with
partial linking? Is it officially deprecated?
I have no problem with changing this for all my company's projects; using
static libraries linked with --whole-archive should be equivalent. I am
just curious about the motivation.
> > I have a patch proposal to fix this; see the patch attached. Note that
> > I know it only works for x86/x64 and that it will break other targets; I
> > just want to make sure you agree with the idea of the fix. To fix this
> > performance issue, I chose to use two iterators:
> The idea is good, but the implementation is horrible due to trying to
> keep RELOC_AGAINST_DISCARDED_SECTION. I think you should just throw
> that away and start over.
Indeed, a much better idea. I will use your patch and apply it to x86/64 as