
Re: [PATCH] getsysstats: use sysinfo() instead of parsing /proc/meminfo


On Thu, Aug 13, 2015 at 03:16:36PM +0200, Rasmus Villemoes wrote:
> Profiling git's test suite, Linus noted [1] that a disproportionately
> large amount of time was spent reading /proc/meminfo. This is done by
> the glibc functions get_phys_pages and get_avphys_pages, but they only
> need the MemTotal and MemFree fields, respectively. That same
> information can be obtained with a single syscall, sysinfo, instead of
> six: open, fstat, mmap, read, close, munmap. While sysinfo also
> provides more than is needed here, servicing it is far cheaper for
> the kernel than generating the entire /proc/meminfo. Both strace -T
> and in-app microbenchmarks show that the sysinfo() approach is
> roughly an order of magnitude faster.
> 
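For reference, a single sysinfo() call does yield both numbers that the
parser extracts; a minimal standalone sketch, using the field names from
the Linux struct sysinfo:

#include <stdio.h>
#include <sys/sysinfo.h>

/* Sketch: the two /proc/meminfo fields these functions need, fetched
   with one syscall.  totalram corresponds to MemTotal and freeram to
   MemFree, both counted in units of mem_unit bytes.  */
int
main (void)
{
  struct sysinfo info;
  if (sysinfo (&info) != 0)
    return 1;

  printf ("MemTotal: %llu kB\n",
          (unsigned long long) info.totalram * info.mem_unit / 1024);
  printf ("MemFree:  %llu kB\n",
          (unsigned long long) info.freeram * info.mem_unit / 1024);
  return 0;
}
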
> sysinfo() is much older than the minimum kernel version glibc
> currently requires, so I don't think there's any reason to keep the
> old parsing code. Moreover, this makes get_[av]phys_pages work even
> in the absence of /proc.
> 
> Linus noted that something as simple as 'bash -c "echo"' would trigger
> the reading of /proc/meminfo, but gdb says that many more applications
> than just bash are affected:
> 
> Starting program: /bin/bash "-c" "echo"
> 
> Breakpoint 1, __get_phys_pages () at ../sysdeps/unix/sysv/linux/getsysstats.c:283
> 283     ../sysdeps/unix/sysv/linux/getsysstats.c: No such file or directory.
> (gdb) bt
> #0  __get_phys_pages () at ../sysdeps/unix/sysv/linux/getsysstats.c:283
> #1  0x00007ffff76d21c5 in posix_sysconf (name=<optimized out>) at ../sysdeps/posix/sysconf.c:634
> #2  linux_sysconf (name=<optimized out>) at ../sysdeps/unix/sysv/linux/x86_64/../sysconf.c:136
> #3  *__GI___sysconf (name=85) at ../sysdeps/unix/sysv/linux/x86_64/sysconf.c:37
> #4  0x00007ffff765b02a in *(int0_t, long double) (b=<optimized out>, n=76, s=18446744073708915495, cmp=0x472e30, arg=0x0) at msort.c:188
> #5  0x000000000042210e in ?? ()
> #6  0x000000000042065d in main ()
> 
> So it seems that any application that uses qsort on a moderately sized
> array will incur this cost (once), which is obviously proportionately
> more expensive for lots of short-lived processes (such as the git test
> suite).
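A minimal reproducer (not part of the patch): glibc's msort falls back
from alloca to a heap buffer, and hence looks up the physical memory
size, once the array exceeds about 1 KiB, so:

#include <stdlib.h>

/* Reproducer sketch: an array above msort's ~1 KiB alloca threshold
   makes glibc's qsort consult _SC_PHYS_PAGES once per process.
   Observe with: strace -e trace=open,openat ./a.out  */
static int
cmp (const void *a, const void *b)
{
  return *(const int *) a - *(const int *) b;
}

int
main (void)
{
  int v[512];                          /* 2 KiB of data */
  for (int i = 0; i < 512; i++)
    v[i] = 512 - i;
  qsort (v, 512, sizeof v[0], cmp);
  return 0;
}
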
> 
> [1] http://thread.gmane.org/gmane.linux.kernel/2019285
>

Looks almost ok.

There are two details. First, mem_unit needs to be a power of two.
Second, couldn't you simply use a uint64_t for the computation? As the
product is the amount of physical memory in bytes, it should fit into
64 bits.
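
Something along these lines, say (just a sketch, not the proposed
patch; the helper name is made up):

#include <stdint.h>
#include <unistd.h>
#include <sys/sysinfo.h>

/* Hypothetical helper: number of physical pages via sysinfo ().
   The kernel reports memory as a count of mem_unit-sized blocks.  */
static long int
phys_pages_from_sysinfo (void)
{
  struct sysinfo info;
  if (sysinfo (&info) < 0)
    return -1;

  /* The product is the physical memory size in bytes, which fits in
     64 bits, so no shifting by log2 (mem_unit) is needed.  */
  uint64_t bytes = (uint64_t) info.totalram * info.mem_unit;
  return bytes / sysconf (_SC_PAGESIZE);
}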

