This is the mail archive of the mailing list for the glibc project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [PATCH] Reduce number of mmap calls from __libc_memalign in

On 02 Apr 2016 08:34, H.J. Lu wrote:
> __libc_memalign allocates one page at a time and tries to
> optimize consecutive __libc_memalign calls by hoping that the next
> mmap is after the current memory allocation.
> However, the kernel hands out mmap addresses in top-down order, so
> this optimization in practice never happens, with the result that we
> have more mmap calls and waste a bunch of space for each __libc_memalign.
> This change makes __libc_memalign mmap one extra page.  Worst case,
> the kernel never puts a backing page behind it, but best case it allows
> __libc_memalign to operate much much better.  For elf/tst-align --direct,
> it reduces number of mmap calls from 12 to 9.
> --- a/elf/dl-minimal.c
> +++ b/elf/dl-minimal.c
> @@ -75,6 +75,7 @@ __libc_memalign (size_t align, size_t n)
>  	    return NULL;
>  	  nup = GLRO(dl_pagesize);
>  	}
> +      nup += GLRO(dl_pagesize);

should this be in the else case?

also the comment above this code needs updating

