This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- From: Rich Felker <dalias@libc.org>
- To: Carlos O'Donell <carlos@redhat.com>
- Cc: Mel Gorman <mgorman@suse.de>, libc-alpha@sourceware.org
- Date: Wed, 11 Feb 2015 08:26:31 -0500
- Subject: Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- References: <20150209140608.GD2395@suse.de> <54D91E06.7060603@redhat.com>
On Mon, Feb 09, 2015 at 03:52:22PM -0500, Carlos O'Donell wrote:
> On 02/09/2015 09:06 AM, Mel Gorman wrote:
> > while (data_to_process) {
> >     buf = malloc(large_size);
> >     do_stuff();
> >     free(buf);
> > }
>
> Why isn't the fix to change the application to hoist the
> malloc out of the loop?
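For concreteness, the hoisting Carlos suggests might look like the sketch
below. The names data_to_process, large_size, and do_stuff are the
placeholders from the quoted example; the wrapper function and the error
handling are my own assumptions, not anything from the thread.

    #include <stdlib.h>

    /* Hypothetical stand-ins for the names in the quoted example. */
    extern int data_to_process;
    extern size_t large_size;
    extern void do_stuff(void);

    void process_all(void)
    {
        /* Allocate once, outside the loop, so the free() -- and any
           munmap or MADV_DONTNEED it may trigger -- happens once
           rather than on every iteration. */
        char *buf = malloc(large_size);
        if (buf == NULL)
            return;
        while (data_to_process)
            do_stuff();
        free(buf);
    }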
I understand this is impossible for some language idioms (typically
OOP, and despite my personal belief that this indicates they're bad
language idioms, I don't want to descend into that type of argument),
but to me the big question is:
Why, when you have a large buffer -- so large that freeing it triggers
MADV_DONTNEED or munmap -- are you doing so little with it in
do_stuff() that the work performed on the buffer doesn't dominate the
time spent?
This indicates to me that the problem might actually be significant
over-allocation beyond the size that's actually going to be used. Do
we have specific real-world examples of where this is happening?
If it's poor design in application code and the applications could be
corrected, I think we should consider whether the right fix is on the
application side.
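For reference, glibc already exposes an application-side knob for exactly
this pattern: mallopt(3). A minimal sketch, assuming glibc's malloc and an
illustrative, untuned 64 MiB threshold:

    #include <malloc.h>

    /* Raise the mmap and trim thresholds so a large buffer is carved
       from the heap and kept cached by malloc across free()/malloc()
       pairs, instead of being handed back to the kernel (via munmap
       or heap trimming) on every iteration.  Note that setting
       M_MMAP_THRESHOLD explicitly also disables glibc's dynamic
       adjustment of that threshold. */
    void keep_large_buffers_cached(void)
    {
        mallopt(M_MMAP_THRESHOLD, 64 * 1024 * 1024);
        mallopt(M_TRIM_THRESHOLD, 64 * 1024 * 1024);
    }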
Rich