Dynamic linking optimizations

Ian Lance Taylor iant@google.com
Tue Mar 3 15:57:00 GMT 2009


John Richard Moser <nigelenki@comcast.net> writes:

> Let's start playing with theory (nobody cares, show us the code!  I
> don't have any).  I say something blindingly obvious, and you tell me
> why this is an obviously bad approach.  If I can A) prove you wrong;
> or B) change the approach to erase the bad stuff, then I have a better
> solution.  If not, then the obvious has happened:  A bunch of people
> who are a hell of a lot smarter than me came up with something much
> better than I could this morning over coffee, and I'll be unplugging
> my speakers before they start emitting penis-shaped sound waves.

Sorry, you have to show us the code.  I skimmed your algorithm, and I
agree that it should work.  However, you are ignoring constant factors.
Character comparisons are fast.  It doesn't help to minimize them if it
makes each operation cost more on average.  Also, the number of hash
table collisions is highly relevant.  The linker tries to minimize them
via the simple expedient of using more hash table buckets.  In
particular, the linker will normally use more buckets than there are
symbols, so the average number of hash table collisions per lookup is
close to zero.
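
For concreteness, here is roughly what each per-library lookup looks
like.  The hash function is the one the SysV ABI specifies for the
ELF .hash section; the surrounding loop is a simplified sketch, and
the symnames parameter is just a stand-in for reading names out of
.dynstr via .dynsym:

  #include <stdint.h>
  #include <string.h>

  /* Hash function specified by the System V ABI for .hash.  */
  static uint32_t
  elf_hash (const char *name)
  {
    uint32_t h = 0, g;
    while (*name)
      {
        h = (h << 4) + (unsigned char) *name++;
        g = h & 0xf0000000;
        if (g != 0)
          h ^= g >> 24;
        h &= ~g;
      }
    return h;
  }

  /* Simplified per-library lookup: hash once, pick a bucket, then
     compare names only along that bucket's chain.  With more buckets
     than symbols, most chains have length zero or one, so very few
     string comparisons happen per probe.  Chain index 0 (STN_UNDEF)
     terminates the walk.  */
  static int
  lookup (const char *name, uint32_t nbucket, const uint32_t *bucket,
          const uint32_t *chain, const char **symnames)
  {
    uint32_t h = elf_hash (name);
    for (uint32_t i = bucket[h % nbucket]; i != 0; i = chain[i])
      if (strcmp (symnames[i], name) == 0)
        return (int) i;   /* Symbol table index of the definition.  */
    return -1;            /* Not defined in this library.  */
  }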

So, you might be right that your algorithm is faster, but you aren't
obviously right.  You have to actually implement it and measure it.

A more relevant problem is that when you have lots of shared libraries,
you have to look in lots of hash tables.  A couple of different
solutions have been proposed for that, but the glibc maintainers have
declined to accept them.
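
To make that cost concrete: the lookup above runs once per loaded
object, in search order, until a definition turns up.  This sketch
(reusing lookup() from above; struct loaded_object is a made-up
stand-in for the loader's real list of loaded objects) shows why a
symbol defined only in the last of N libraries, or not at all, pays
the hash-and-chain cost N times:

  /* Hypothetical, simplified view of the search: each loaded object
     carries its own hash table, and resolution probes them in order.  */
  struct loaded_object
  {
    const char *soname;
    uint32_t nbucket;
    const uint32_t *bucket;
    const uint32_t *chain;
    const char **symnames;
    struct loaded_object *next;
  };

  static const char *
  resolve (const char *name, struct loaded_object *objs)
  {
    for (struct loaded_object *o = objs; o != NULL; o = o->next)
      if (lookup (name, o->nbucket, o->bucket, o->chain, o->symnames) >= 0)
        return o->soname;   /* First definition in search order wins.  */
    return NULL;            /* Undefined: every hash table was probed.  */
  }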

Ian


