
Re: A per-user or per-application ld.so.cache?


On 02/08/2016 11:29 PM, Ben Woodard wrote:

> I just talked to one of the developers to get a good sense of the current problem. 
> The sum of the on-disk ELF files including debuginfo for one app that we looked at is around 3GB, but when we just look at the text in all the ELF files it is 100-200MB depending on architecture, spread across about 1400 DSOs.

This means that copying the text segments together into a single file
would be feasible.
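
For what it's worth, a quick way to sanity-check the 100-200MB figure
on a given machine is to add up the text segment sizes of all the DSOs
the application maps.  A minimal sketch (my own illustration, assuming
binutils' size(1) is installed and the DSO paths are passed on the
command line):

#!/usr/bin/env python3
# Rough illustration, not a patch: sum the text segment sizes that
# size(1) reports for a list of DSOs.
import subprocess
import sys

def total_text(paths):
    total = 0
    for path in paths:
        out = subprocess.run(["size", path], capture_output=True,
                             text=True, check=True)
        # Berkeley format: a header line, then "text data bss dec hex filename".
        total += int(out.stdout.splitlines()[1].split()[0])
    return total

if __name__ == "__main__":
    print(f"total text: {total_text(sys.argv[1:]) / (1 << 20):.1f} MiB")

Feeding it the ~1400 DSO paths should confirm (or correct) that estimate.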

> Except for the fact that the process is starting on literally thousands of nodes simultaneously and its libraries are scattered around about 15 non-system project directories. This leads to a phenomenal number of NFS operations as the compute nodes search through 20 or so directories for all their components. That brings even very powerful NFS servers to their knees. 

Okay, this is the critical bit that was missing so far.  I think Linux
has pretty good caching for lookup failures (negative dentry caching),
so the whole performance issue was a bit puzzling.  If the whole thing
runs on many nodes against storage that lacks such caching, then I can
see how this could turn into a problem.
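
To put rough numbers on it, here is a back-of-the-envelope sketch (in
Python, just for the arithmetic), using the figures from this thread:
~1400 DSOs and ~20 search directories; the node count is only an
assumed order of magnitude for "literally thousands of nodes":

dsos = 1400
search_dirs = 20
nodes = 4000          # assumption: "literally thousands of nodes"

# Worst case today: every node probes every search directory for every
# DSO, and most probes fail with ENOENT.
probes = dsos * search_dirs * nodes

# Best case with exact paths (no search): one successful open per DSO
# per node.
direct_opens = dsos * nodes

print(f"worst-case lookups: {probes:,}")        # ~112,000,000
print(f"direct opens only:  {direct_opens:,}")  # ~5,600,000

Even the best case is millions of opens across the cluster, which is
what the next question is really about.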

The main question is: will the storage be able to cope with millions of
file opens if they magically pick the right file name (avoiding ENOENT)?
If not, the only viable optimization seems to be the single-file approach.

How will the storage react to parallel read operations on those 15
directories from many nodes?

I'm a bit worried that this turns into a request to tune ld.so to very
peculiar storage stack behavior.


Depending on what they do with Python, the Python module importer will
still cause a phenomenal amount of ENOENT traffic, and there is nothing
we can do about that because it's not related to dlopen.
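
To illustrate the order of magnitude on the Python side: for every
module that has to be searched for, the path-based finder consults each
sys.path entry and considers every recognised suffix before giving up.
A rough upper-bound sketch (my illustration; newer CPython caches
directory listings, so the actual syscall count may be lower):

import importlib.machinery
import sys

# Upper bound on the candidate file names the finder can consider per
# module: every sys.path entry, times one package-directory check plus
# one check per recognised source/bytecode/extension suffix.
suffixes = importlib.machinery.all_suffixes()   # e.g. ['.py', '.pyc', '.so', ...]
candidates = len(sys.path) * (len(suffixes) + 1)
print(f"{len(sys.path)} sys.path entries x {len(suffixes) + 1} checks "
      f"~= {candidates} candidate lookups per module")

Multiply that by the number of imports and the number of nodes and the
traffic adds up quickly, but as said, that is outside glibc's control.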

Florian
