This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: A per-user or per-application ld.so.cache?
- From: "Carlos O'Donell" <carlos at redhat dot com>
- To: Siddhesh Poyarekar <sid at reserved-bit dot com>
- Cc: GNU C Library <libc-alpha at sourceware dot org>
- Date: Mon, 8 Feb 2016 22:35:17 -0500
- Subject: Re: A per-user or per-application ld.so.cache?
- References: <56B8E105 dot 8030906 at redhat dot com> <20160208191155 dot GB1904 at devel dot intra dot reserved-bit dot com> <56B8F710 dot 8050108 at redhat dot com> <20160209032921 dot GC1904 at devel dot intra dot reserved-bit dot com>
On 02/08/2016 10:29 PM, Siddhesh Poyarekar wrote:
> On Mon, Feb 08, 2016 at 03:14:08PM -0500, Carlos O'Donell wrote:
>> The downside is that the user has no control of this cache and
>> would need administrative intervention for help accelerating their
>> application. Consider that you bought time on a cluster of machines,
>> and now to run your app you're making the user interact with the sysadmin
>> to install new filters and run ldconfig on every node? It won't scale
>> (from a human perspective).
>> With a per-user/per-process cache, say in ~/.ld.so.cache, the user could
>> prime the cache themselves after setting up their application with
>> bundled libraries and have it work as expected, with accelerated lookups
>> and without lots of stat/getdents in $HOME.
>> Does that counter-argument make sense for why the cache could be
>> under user control? It means the data needs to be inspected carefully
> Sure, but a similar effect could also be achieved using
> LD_LIBRARY_PATH in ~/.bashrc.
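The ~/.bashrc approach would look roughly like this (the application layout and paths here are hypothetical, just to illustrate the idea):

```shell
# Hypothetical layout: the user bundles libraries under ~/myapp/lib.
# Prepend it to the dynamic loader's search path for every shell;
# the loader then probes this directory before the system defaults.
# Preserve any existing LD_LIBRARY_PATH rather than clobbering it.
export LD_LIBRARY_PATH="$HOME/myapp/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```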
Not similar enough from a performance perspective.
If you have 15 paths in LD_LIBRARY_PATH, they each need to be searched
in order to find the DSOs required in the last path entry. If you had
a per-user cache it's a single cache lookup and an mmap. There is no
traversal required of any filesystem if you get a hit in the cache.
Isn't that much better?
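The per-directory probing is easy to see with glibc's LD_DEBUG facility: each LD_LIBRARY_PATH entry shows up as a separate "trying file=" probe for every needed DSO, while a cache hit is a single "search cache=" lookup. A rough sketch (the directory names and the choice of /bin/true are just examples, and this assumes a glibc system):

```shell
# With several LD_LIBRARY_PATH entries, the loader probes each
# directory in order for every needed DSO before falling back to
# /etc/ld.so.cache.  LD_DEBUG=libs makes those probes visible.
LD_LIBRARY_PATH=/opt/a/lib:/opt/b/lib:/opt/c/lib \
    LD_DEBUG=libs /bin/true 2>&1 | grep -E 'trying file|search cache'
```

Every line of "trying file=" output corresponds to a filesystem access that a cache hit would avoid entirely.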
All of the cache machinery is there, we just don't have a per-user