This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: [PATCH] Fix tcache count maximum


Hi Carlos,

>> glibc/manual/tunables.texi:
>>
>> 195 The approximate maximum overhead of the per-thread cache is thus equal
>> 196 to the number of bins times the chunk count in each bin times the size
>> 197 of each chunk.  With defaults, the approximate maximum overhead of the
>> 198 per-thread cache is approximately 236 KB on 64-bit systems and 118 KB
>> 199 on 32-bit systems.
>> 200 @end deftp
> 
> That is the maximum size of the blocks contained in the tcache, not the size
> overhead of the tcache datastructure itself. My original change would add just 64
> bytes, but even if we made the count array a size_t, it would add 448 bytes on a
> 64-bit target, ie. a tiny fraction of the maximum tcache size of 236KB.

> Thanks for reviewing that. I wonder if we shouldn't just say 256KiB here and
> 128KiB respectively, so give round easy to understand values which are *higher*
> than expected to allow for this kind of change?

Well, the text is quite misleading already. Firstly, blocks contained in the tcache are
not "overhead": it's the maximum amount of free memory that the tcache can hold
per thread. However, few applications use blocks of every size from 16, 32, 48, 64
all the way up to 1 KB, so the typical amount is a tiny fraction of the maximum.
This memory is not leaked, since it is still available to that thread; it's just that there
isn't a mechanism to reclaim it if a thread does no further allocations but doesn't
exit either.

Secondly, a single free block in the tcache can prevent a whole multi-gigabyte arena
from being freed and returned to the system. That's a much more significant
bug than this maximum "overhead".

Wilco


