This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- From: Mel Gorman <mgorman at suse dot de>
- To: Rich Felker <dalias at libc dot org>
- Cc: Carlos O'Donell <carlos at redhat dot com>, libc-alpha at sourceware dot org
- Date: Wed, 11 Feb 2015 14:19:26 +0000
- Subject: Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- References: <20150209140608 dot GD2395 at suse dot de> <54D91E06 dot 7060603 at redhat dot com> <20150211132631 dot GU23507 at brightrain dot aerifal dot cx> <20150211133453 dot GA31102 at suse dot de> <20150211140737 dot GX23507 at brightrain dot aerifal dot cx>
On Wed, Feb 11, 2015 at 09:07:37AM -0500, Rich Felker wrote:
> On Wed, Feb 11, 2015 at 01:34:53PM +0000, Mel Gorman wrote:
> > > This indicates to me that the problem might actually be significant
> > > over-allocation beyond the size that's actually going to be used. Do
> > > we have some real-world specific examples of where this is happening?
> >
> > In the case of ebizzy, it is the case that do_stuff is so small that the
> > allocation/free cost dominates.
>
> ebizzy is supposed to be a benchmark simulating typical workload,
> right?
Right - web application server workload specifically. In reality, I think
it would depend on what language said workload was implemented in. Java
workloads would not hit the glibc allocator at all, for example.
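The ebizzy-style inner loop quoted above can be sketched roughly as below.
This is not the actual benchmark code, just an illustration of the pattern
where the per-iteration work ("do_stuff") is so small that the malloc/free
pair dominates the runtime; the chunk size and iteration count are made up:

```c
#include <stdlib.h>

/* Illustrative only: each iteration allocates a chunk, does trivial
   work on it, and frees it, so allocator cost dominates.  Returns a
   checksum so the compiler cannot elide the loop. */
unsigned long alloc_dominated_loop(size_t chunk, unsigned long iters)
{
    unsigned long sum = 0;
    for (unsigned long i = 0; i < iters; i++) {
        unsigned char *buf = malloc(chunk);
        if (!buf)
            abort();
        buf[0] = (unsigned char)i;   /* the "do_stuff" is tiny */
        sum += buf[0];
        free(buf);
    }
    return sum;
}
```

With a pattern like this, any per-free work (trimming, madvise) shows up
directly in the profile, which is why ebizzy is so sensitive to it.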
> If so, I think this specific test operation is a mistake, and I
> think glibc should be cautious about optimizing for benchmarks that
> don't reflect meaningful real-world usage.
>
That rules out the complex approach in V1 at least.
> > In the cases where I've seen this happen
> > on other workloads (firefox, evolution, mariadb during database init from
> > system) the cost of the operations on the buffer dominated. The malloc/free
> > cost was there but the performance difference is in the noise.
>
> If it's not distinguishable from noise in actual usage cases, then I'm
> skeptical that there's a need to fix this issue.
>
I take it that is a NAK for v1 of the patch. How about V2? Heap trims are
expected to be controllable with tuning parameters, but right now it is
not possible to tune the trim threshold for per-thread heaps. V2 of the
patch fixes that and at least gives consistent behaviour.
--
Mel Gorman
SUSE Labs