This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.
Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- From: "Carlos O'Donell" <carlos at redhat dot com>
- To: Mel Gorman <mgorman at suse dot de>, libc-alpha at sourceware dot org
- Date: Mon, 09 Feb 2015 15:52:22 -0500
- Subject: Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead
- Authentication-results: sourceware.org; auth=none
- References: <20150209140608 dot GD2395 at suse dot de>
On 02/09/2015 09:06 AM, Mel Gorman wrote:
> while (data_to_process) {
>     buf = malloc(large_size);
>     do_stuff();
>     free(buf);
> }
Why isn't the fix to change the application to hoist the
malloc out of the loop?
buf = malloc(large_size);
while (data_to_process)
{
    do_stuff();
}
free(buf);
Is it simply that the software frameworks themselves are
unable to do this directly?
I can understand your position. Ebizzy models the workload and
you use the workload model to improve performance by changing
the runtime to match the workload.
The problem I face as a maintainer is that you've added
complexity to malloc in the form of a decaying counter, and
I need a strong justification for that kind of added complexity.
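For readers following along: a decaying counter in this context is a count of recent events that is periodically aged so that old activity stops influencing the decision. The following is only a generic sketch of that pattern with made-up names and thresholds; it is not taken from the patch under review.

```c
/* Hypothetical decaying counter: hits accumulate, and a periodic
   "tick" halves the count so old events decay away exponentially.
   All names and thresholds here are illustrative only. */
struct decay_counter {
    unsigned long value;
};

/* Record one event (e.g. a trim followed by a quick refault). */
static void decay_hit(struct decay_counter *c)
{
    c->value++;
}

/* Age the counter on some periodic event: exponential decay by
   halving, so the counter roughly tracks *recent* activity. */
static void decay_tick(struct decay_counter *c)
{
    c->value >>= 1;
}

/* Only trim (madvise) when recent refault activity is low. */
static int decay_should_trim(const struct decay_counter *c,
                             unsigned long threshold)
{
    return c->value < threshold;
}
```

The maintenance cost Carlos is pointing at is exactly this kind of extra state: another counter, another threshold, and another tuning knob that has to be justified against real workloads.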
For example, I see you're from SUSE; have you put this change
through testing in your distribution builds or releases?
What were the results? Under what *real* workloads did this
make a difference?
Cheers,
Carlos.