A per-user or per-application ld.so.cache?
Florian Weimer
fweimer@redhat.com
Thu Apr 6 13:01:00 GMT 2017
On 03/08/2016 11:37 AM, Florian Weimer wrote:
> On 02/15/2016 07:30 PM, Ben Woodard wrote:
>> I've been talking to the HPC tools and system guys and, to my surprise, they favor Florian's approach, which is to change glibc's ld.so to cache the full contents of the directories visited in the process of finding a library. Subsequent lookups would first consult this cache before looking in subsequent directories on the library search path.
>
> Thanks.
>
> Before we start working on this, I would like to double-check that their
> storage copes reasonably well with parallel readdir load.
>
> Could you ask them to run the attached benchmark program on their
> cluster, in a massively parallel fashion? All the directories on a
> typical library search path have to be listed as command line arguments
> (separately, i.e. not joined as one argument and separated with colons).
>
> The results will show if the directory listing overhead is acceptable.
> It is unlikely that an ld.so implementation would be faster than this
> benchmark. Median and maximum job
> execution time should be sufficient, but the benchmark program produces
> additional diagnostic output to identify specific bottlenecks. For
> example, if the file system reports a large block size, opendir may
> allocate an equally large amount of memory.
Hi Ben,
have you been able to run the benchmark? Did the storage hold up well
under the severe readdir load?
Thanks,
Florian