[PATCH] Reduce number of mmap calls from __libc_memalign in ld.so
- From: "H.J. Lu" <hongjiu dot lu at intel dot com>
- To: GNU C Library <libc-alpha at sourceware dot org>
- Date: Sat, 2 Apr 2016 08:34:21 -0700
- Subject: [PATCH] Reduce number of mmap calls from __libc_memalign in ld.so
- Reply-to: "H.J. Lu" <hjl dot tools at gmail dot com>
__libc_memalign in ld.so allocates one page at a time and tries to
optimize consecutive __libc_memalign calls by hoping that the next
mmap lands immediately after the current allocation, so that the
leftover space in the old mapping can still be used.

However, the kernel hands out mmap addresses in top-down order, so in
practice this optimization never kicks in.  The result is more mmap
calls, and the unused tail of each mapping is simply wasted.
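For readers who don't have dl-minimal.c open, here is a rough,
self-contained sketch of that allocation strategy.  The names
alloc_ptr and alloc_end mirror the real code; memalign_sketch, the
hard-coded 4096-byte page size and the small main are made up for
illustration, and the real function's overflow checks are omitted.

/* Simplified model of the ld.so bump allocator -- illustrative only.  */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <sys/mman.h>

static void *alloc_ptr;               /* Next free byte.  */
static void *alloc_end;               /* End of the current mapping.  */
static const size_t pagesize = 4096;  /* Stands in for GLRO(dl_pagesize).  */

static void *
memalign_sketch (size_t align, size_t n)
{
  /* Round the bump pointer up to the requested alignment.  */
  alloc_ptr = (void *) (((uintptr_t) alloc_ptr + align - 1) & ~(align - 1));

  if ((uintptr_t) alloc_ptr + n >= (uintptr_t) alloc_end)
    {
      /* Out of room: map just enough whole pages for this request.  */
      size_t nup = (n + pagesize - 1) & ~(pagesize - 1);
      void *page = mmap (NULL, nup, PROT_READ | PROT_WRITE,
                         MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
      if (page == MAP_FAILED)
        return NULL;
      /* The "optimization": if the new mapping happens to start right
         where the old one ended, keep using the leftover space before
         it.  With top-down mmap this essentially never happens, so the
         tail of the old mapping is lost.  */
      if (page != alloc_end)
        alloc_ptr = page;
      alloc_end = (char *) page + nup;
    }

  void *result = alloc_ptr;
  alloc_ptr = (char *) alloc_ptr + n;
  return result;
}

int
main (void)
{
  /* The second request fits in the leftover space of the first mapping.  */
  void *a = memalign_sketch (16, 100);
  void *b = memalign_sketch (16, 200);
  printf ("a=%p b=%p\n", a, b);
  return 0;
}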
This change makes __libc_memalign mmap one extra page.  In the worst
case, the kernel never puts a backing page behind the extra page; in
the best case, it lets __libc_memalign serve several requests from a
single mapping.  For elf/tst-align --direct, it reduces the number of
mmap calls from 12 to 9.
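To make the effect concrete, here is a toy, stand-alone simulation.
Nothing in it is taken from ld.so or the test: the request sizes and
the PAGESIZE constant are invented, and it pessimistically assumes
that the leftover space of the previous mapping is always lost, which
is what top-down mmap gives us in practice.  It counts how many mmap
calls a stream of small requests needs when each mapping is sized
exactly to the (rounded-up) request versus one page larger.

/* Toy simulation of mmap counts -- numbers are illustrative only.  */
#include <stdio.h>
#include <stddef.h>

#define PAGESIZE 4096

/* Return how many mappings a stream of requests needs when every new
   mapping gets EXTRA additional bytes beyond the rounded-up size.  */
static int
count_mmaps (const size_t *reqs, size_t nreqs, size_t extra)
{
  size_t room = 0;              /* Bytes left in the current mapping.  */
  int mmaps = 0;
  for (size_t i = 0; i < nreqs; i++)
    {
      if (reqs[i] > room)
        {
          size_t nup = (reqs[i] + PAGESIZE - 1) & ~(size_t) (PAGESIZE - 1);
          room = nup + extra;   /* Old leftover space is discarded.  */
          mmaps++;
        }
      room -= reqs[i];
    }
  return mmaps;
}

int
main (void)
{
  /* A made-up stream of small allocations of the kind ld.so performs.  */
  const size_t reqs[] = { 640, 320, 2048, 160, 3072, 96, 512, 1024 };
  size_t n = sizeof reqs / sizeof reqs[0];
  printf ("mmap calls, exact size:     %d\n", count_mmaps (reqs, n, 0));
  printf ("mmap calls, one extra page: %d\n", count_mmaps (reqs, n, PAGESIZE));
  return 0;
}

With this particular made-up stream it prints 3 mappings without the
extra page and 1 with it; the 12-to-9 figure quoted above is the real
number for elf/tst-align --direct.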
Tested on x86-64. OK for master?
H.J.
---
* elf/dl-minimal.c (__libc_memalign): Mmap one extra page.
---
elf/dl-minimal.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/elf/dl-minimal.c b/elf/dl-minimal.c
index 762e65b..d6f87f1 100644
--- a/elf/dl-minimal.c
+++ b/elf/dl-minimal.c
@@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
             return NULL;
           nup = GLRO(dl_pagesize);
         }
+      nup += GLRO(dl_pagesize);
       page = __mmap (0, nup, PROT_READ|PROT_WRITE,
                      MAP_ANON|MAP_PRIVATE, -1, 0);
       if (page == MAP_FAILED)
--
2.5.5