This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [PATCH][BUG 18093] Fix ldconfig segmentation fault with corrupted cache


On 03/09/2015 04:11 PM, Aurelien Jarno wrote:
> On 2015-03-09 09:06, Carlos O'Donell wrote:
>> On 03/08/2015 04:46 PM, Aurelien Jarno wrote:
>>> ldconfig uses an aux cache to speed up the ld.so.cache update. It
>>> is read by mmaping the file to a structure that contains data offsets
>>> used as pointers. Since they are not checked, it is not hard to make
>>> ldconfig segfault with a corrupted file. This happens, for instance,
>>> when the file is truncated, which is common after a filesystem check
>>> following a system crash.
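
For context, the mmaped layout looks roughly like the C sketch below. It
is only implied by the description and the patch further down, not the
verbatim glibc definitions: the header field names (magic, nlibs,
len_strings) match the patch, the magic length is a placeholder, and the
per-entry contents are reduced to the one string-table offset that
matters here.

    #include <stdint.h>

    /* Simplified sketch, not the actual glibc definitions.  */
    struct aux_cache_file_entry
    {
      uint32_t soname;       /* Offset into the trailing string table.  */
      /* ... further per-library data elided ...  */
    };

    struct aux_cache_file
    {
      char magic[24];        /* Placeholder length; checked against AUX_CACHEMAGIC.  */
      uint32_t nlibs;        /* Number of aux_cache_file_entry records that follow.  */
      uint32_t len_strings;  /* Size in bytes of the string table after the entries.  */
      struct aux_cache_file_entry libs[];  /* Then len_strings bytes of strings.  */
    };
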
>>
>> That makes sense.
>>  
>>> This can be reproduced, for example, by truncating the file to
>>> roughly half of its size.
>>>
>>> There is already some code in elf/cache.c (load_aux_cache) to check
>>> for a corrupted aux cache, but it happens to be both broken and
>>> insufficient. The test (aux_cache->nlibs >= aux_cache_size) compares
>>> the number of library entries with the cache size. That makes no
>>> sense, as it basically assumes that each library entry is 1 byte...
>>> Instead, the patch below computes the theoretical cache size from the
>>> headers and compares it to the real size.
>>
>> This is OK.
>>
>>> Another corruption can happen if a pointer into the string table is
>>> corrupted, though in that case it means the file has been more than
>>> just truncated. The patch below ignores entries pointing outside the
>>> string table; they will be re-added by the current ldconfig run.
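
The hunk for this second check is not quoted below, but the idea reduces
to a bounds test on each entry's string-table offset before it is
dereferenced. A minimal sketch, reusing the simplified layout above and
an illustrative helper name (the strnlen guard is an extra precaution,
not something the patch description mentions):

    #include <stdint.h>
    #include <string.h>

    /* Count the entries whose soname offset stays inside the string table;
       entries pointing outside it are simply skipped, and the current
       ldconfig run will re-add those libraries.  Illustrative only.  */
    static uint32_t
    count_valid_entries (const struct aux_cache_file *aux_cache)
    {
      /* The string table starts right after the header and the entry
         array, which is exactly what the size check relies on.  */
      const char *strings = (const char *) aux_cache
                            + sizeof (struct aux_cache_file)
                            + aux_cache->nlibs * sizeof (struct aux_cache_file_entry);
      uint32_t valid = 0;

      for (uint32_t i = 0; i < aux_cache->nlibs; ++i)
        {
          uint32_t soname = aux_cache->libs[i].soname;
          if (soname >= aux_cache->len_strings)
            continue;           /* Points outside the string table: ignore.  */

          /* Bounded read: even if the terminating NUL is missing, do not
             run past the end of the string table.  */
          size_t max = aux_cache->len_strings - soname;
          if (strnlen (strings + soname, max) < max)
            ++valid;
        }
      return valid;
    }
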
>>
>> This is not OK: it guards against an incorrectly modified file, which
>> we assume cannot happen, and it slows down cache access.
> 
> While I don't have a scenario leading to this, it seems it could happen
> in practice. I am not really sure we want to see segmentation faults
> happening in such cases.

How could it happen? Filesystem corruption? Then you have a lot of
potentially unknown corruption to contend with, so why should glibc
trade performance for robustness when the filesystem has already failed?

> As for speed, I don't think it's a good argument here. We are talking
> about the ldconfig aux cache, not ld.so.cache, where speed is more
> critical. The patch only adds one integer comparison per loop
> iteration, while the loop body contains calls to malloc or memcpy,
> which take an order of magnitude more time to execute. I don't think
> the effect is measurable.

You are arguing for increasing the complexity of the code and reducing
its speed (even if only slightly) for a potentially low-probability
event that already has disastrous consequences for the entire system.

If you could come up with even one non-filesystem-corruption event
that caused this problem I might consider it a good change.
 
>>> (This patch is based on an initial patch from Lennart Sorensen).
>>>
>>>
>>> 2015-03-08  Aurelien Jarno  <aurelien@aurel32.net>
>>>
>>> 	[BZ #18093]
>>> 	* elf/cache.c (load_aux_cache): Regenerate the cache if it has the
>>> 	wrong size. Ignore entries pointing outside of the mmaped memory.
>>>
>>> diff --git a/elf/cache.c b/elf/cache.c
>>> index 1732268..9131e08 100644
>>> --- a/elf/cache.c
>>> +++ b/elf/cache.c
>>> @@ -698,7 +698,9 @@ load_aux_cache (const char *aux_cache_name)
>>>    if (aux_cache == MAP_FAILED
>>>        || aux_cache_size < sizeof (struct aux_cache_file)
>>>        || memcmp (aux_cache->magic, AUX_CACHEMAGIC, sizeof AUX_CACHEMAGIC - 1)
>>> -      || aux_cache->nlibs >= aux_cache_size)
>>> +      || aux_cache_size != (sizeof(struct aux_cache_file) +
>>> +			    aux_cache->nlibs * sizeof(struct aux_cache_file_entry) +
>>> +			    aux_cache->len_strings))
>>
>> I'm not happy with "!=" in the event that the file is larger but has
>> zero padding, so I would rather see `aux_cache_size >=`.
> 
> Why would the file suddenly get zero padded? If anything changes the
> aux cache without updating the headers accordingly, it's probably
> safer to just regenerate it (even if it takes time) than to risk
> introducing corrupted entries into ld.so.cache.

I withdraw my complaint. The two other good reasons I could come up
with are that it makes hacking on the file easier, and that it makes
the format future proof by allowing us to tack a new header onto the
end. However, neither of those is good enough: if the additional data
can be written to, it becomes a security problem, whereas a mapping of
exactly the required size makes such tampering harder.

Cheers,
Carlos.

