[PATCH] lib + libdw: Add and use a concurrent version of the dynamic-size hash table.

Jonathon Anderson jma14@rice.edu
Mon Nov 11 23:38:00 GMT 2019

On Mon, Nov 11, 2019 at 00:23, Mark Wielaard <mark@klomp.org> wrote:
> Hi Jonathon,
> 
> On Fri, 2019-11-08 at 09:28 -0600, Jonathon Anderson wrote:
>>  > So to fix this we do need some mutex to protect the binary search
>>  > tree when calling tsearch/tfind? Or do you see other issues too?
>> 
>>  The search tree can be handled with a mutex; the issue is with
>>  next_{tu,cu}_offset and the general logic of the function. As an
>>  example: suppose two threads look up A in the Sig8 table and see
>>  that it's not currently present. They'll both use
>>  __libdw_intern_next_unit to load CUs (or units, I suppose) until
>>  they either find it or run out of units.
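
(To restate the failure mode as code: a minimal sketch of the racy
pattern, with hypothetical names standing in for the Sig8 table
lookup and __libdw_intern_next_unit.)

  #include <stdint.h>

  struct unit { uint64_t sig8; /* ... */ };

  /* Hypothetical stand-ins, not libdw's real API.  */
  struct unit *sig8_find (uint64_t sig);
  struct unit *intern_next_unit (void); /* NULL when no units remain */

  struct unit *
  lookup_type_unit (uint64_t sig)
  {
    struct unit *u = sig8_find (sig);
    if (u != NULL)
      return u;

    /* Two threads can both reach this loop for the same SIG.  One
       inserts the entry and returns; the other has already walked
       past it and keeps parsing until no units remain.  */
    while ((u = intern_next_unit ()) != NULL)
      if (u->sig8 == sig)
        return u;
    return NULL;
  }
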
>> 
>>  If the entirety of intern_next_unit is wrapped in a mutex, one of
>>  the two will load in the missing entry and finish, while the other
>>  has "missed" it and will keep going until no units remain. The
>>  easy solution is to have the threads re-check the table on
>>  next_unit failure for whether the entry has been added, but that
>>  incurs a large-ish overhead from the constant reprobing. The
>>  easiest way around that is to add an interface on the Sig8 table
>>  that returns a "probe" on lookup failure, which can be continued
>>  with only a handful of atomics (essentially saving state from the
>>  first find). The downside of this approach is that unit parsing is
>>  fully serialized.
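
(In code, the easy "check again" fix is roughly the following; a
minimal sketch with hypothetical names, where everything from the
re-check onward is serialized under one mutex.)

  #include <pthread.h>
  #include <stdint.h>

  struct unit { uint64_t sig8; /* ... */ };

  /* Hypothetical stand-ins, not libdw's real API.  */
  struct unit *sig8_find (uint64_t sig);
  struct unit *intern_next_unit (void); /* NULL when no units remain */

  static pthread_mutex_t intern_lock = PTHREAD_MUTEX_INITIALIZER;

  struct unit *
  lookup_type_unit (uint64_t sig)
  {
    struct unit *u = sig8_find (sig);
    if (u != NULL)
      return u;

    pthread_mutex_lock (&intern_lock);
    /* Re-check under the lock: a thread that held it before us may
       have interned the unit we want, and we must not walk past it. */
    u = sig8_find (sig);
    while (u == NULL)
      {
        struct unit *next = intern_next_unit ();
        if (next == NULL)
          break;              /* exhausted: the unit does not exist */
        if (next->sig8 == sig)
          u = next;
      }
    pthread_mutex_unlock (&intern_lock);
    return u;
  }
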
>> 
>>  If the next_*_offset is handled with a separate mutex or as an
>>  atomic (say, using fetch_add), the same issue occurs, but without
>>  the mutex there's no guarantee that another thread isn't currently
>>  parsing and will write the entry soon, so the easy solution
>>  doesn't work. Since the Sig8 key is only known after the parsing
>>  is complete, we can't even insert an "in progress" entry. One
>>  solution is to allow for duplicate parsing (but then next_*_offset
>>  would have to be updated *after* Sig8_Hash_insert); another is to
>>  use a condition variable on whether all the units have been parsed
>>  (so threads that don't find what they're looking for would block
>>  until it's certain that it doesn't exist).
>> 
>>  Both are viable directions, but neither is trivial.
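
(A much-simplified sketch of the condition-variable direction,
assuming intern_next_unit claims its offset with an atomic fetch_add,
inserts the parsed unit into the Sig8 table itself, and returns NULL
once the offsets are exhausted; the names are again hypothetical.)

  #include <pthread.h>
  #include <stdbool.h>
  #include <stdint.h>

  struct unit { uint64_t sig8; /* ... */ };

  /* Hypothetical stand-ins, not libdw's real API.  */
  struct unit *sig8_find (uint64_t sig);
  struct unit *intern_next_unit (void);

  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
  static pthread_cond_t quiesced = PTHREAD_COND_INITIALIZER;
  static int parsers_in_flight;
  static bool units_exhausted;

  struct unit *
  lookup_type_unit (uint64_t sig)
  {
    for (;;)
      {
        struct unit *u = sig8_find (sig);
        if (u != NULL)
          return u;

        pthread_mutex_lock (&lock);
        if (units_exhausted)
          {
            /* Another thread may still be parsing the unit we want;
               block until every in-flight parse has finished.  */
            while (parsers_in_flight > 0)
              pthread_cond_wait (&quiesced, &lock);
            u = sig8_find (sig);
            pthread_mutex_unlock (&lock);
            return u;      /* NULL here means definitively absent.  */
          }
        parsers_in_flight++;
        pthread_mutex_unlock (&lock);

        u = intern_next_unit ();       /* parse outside the lock */

        pthread_mutex_lock (&lock);
        if (u == NULL)
          units_exhausted = true;
        if (--parsers_in_flight == 0)
          pthread_cond_broadcast (&quiesced);
        pthread_mutex_unlock (&lock);
      }
  }

The parsers_in_flight count is what lets a waiting thread distinguish
"not inserted yet" from "definitively absent".
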
> 
> Thanks, I missed that completely. We do need to fix this, but I might
> take a look at it after the next release (which I really would like
> to do in about a week). It is indeed not completely trivial. Luckily
> debug_types aren't widely used. But if they are used, it would be bad
> if it broke a concurrent DIE reader.
> 
>>  > Unfortunately we don't always control the data, so bad abbrev
>>  > entries could happen. The extra alloc wouldn't really "leak"
>>  > because it would be freed with the Dwarf. So I am not too
>>  > concerned about that. Is that the worst that can happen in
>>  > __libdw_getabbrev? When we goto invalid the Dwarf_Abbrev would
>>  > indeed "leak", but it isn't really lost, it will get cleaned up
>>  > when the Dwarf is destroyed.
>> 
>>  It wouldn't "leak," but it would be taking up space until the
>>  dwarf_end. Not that I mind (they're really small).
>> 
>>  I'm thinking more of the case where the Abbrev_insert returns -1
>>  (entry collision); in that case the new Abbrev would stick around
>>  until the dwarf_end.
> 
> Since the memblocks are per thread, it seems we could easily back
> out an allocation we don't need as long as the thread hasn't done
> any other allocation from the memblock. What do you think of the
> attached patch?

Looks good to me.
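
(For the archives: the idea as I read it, as a simplified sketch
rather than the actual patch; the memblock layout and the names here
are invented.)

  #include <stddef.h>

  struct memblock
  {
    size_t size;       /* bytes handed out so far */
    size_t remaining;  /* bytes still free in this block */
    char mem[];        /* the allocations themselves */
  };

  /* Undo an allocation of NBYTES at PTR, but only if it really was
     the most recent allocation from this block.  Since each block
     belongs to one thread, no other thread can have allocated behind
     our back in the meantime.  */
  static void
  memblock_unalloc (struct memblock *mb, void *ptr, size_t nbytes)
  {
    if ((char *) ptr + nbytes == mb->mem + mb->size)
      {
        mb->size -= nbytes;
        mb->remaining += nbytes;
      }
    /* Otherwise leave it be; it is freed with the Dwarf anyway.  */
  }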

> 
> It is a bit hard to test without a massively parallel testcase where
> things collide a lot. Since you have one, could you try this out?

Everything seems to work fine on this end.

(Plus, if you replace the `Abbrev_find == NULL` check with `1`, you
can make every insertion collide. I tried that as well, and it still
works fine.)

> 
> Thanks,
> 
> Mark


