This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.


Re: [PATCH] [RFC] malloc: Reduce worst-case behaviour with madvise and refault overhead


On Wed, Feb 11, 2015 at 08:26:31AM -0500, Rich Felker wrote:
> On Mon, Feb 09, 2015 at 03:52:22PM -0500, Carlos O'Donell wrote:
> > On 02/09/2015 09:06 AM, Mel Gorman wrote:
> > > while (data_to_process) {
> > > 	buf = malloc(large_size);
> > > 	do_stuff();
> > > 	free(buf);
> > > }
> > 
> > Why isn't the fix to change the application to hoist the
> > malloc out of the loop?
> 
> I understand this is impossible for some language idioms (typically
> OOP, and despite my personal belief that this indicates they're bad
> language idioms, I don't want to descend into that type of argument),
> but to me the big question is:
> 
> Why, when you have a large buffer -- so large that it can effect
> MADV_DONTNEED or munmap when freed -- are you doing so little with it
> in do_stuff() that the work performed on the buffer doesn't dominate
> the time spent?
> 

It's less than ideal application behaviour.

> This indicates to me that the problem might actually be significant
> over-allocation beyond the size that's actually going to be used. Do
> we have some real-world specific examples of where this is happening?

In the case of ebizzy, it is the case that do_stuff is so small that the
allocation/free cost dominates. In the cases where I've seen this happen
on other workloads (firefox, evolution, mariadb during database init from
system) the cost of the operations on the buffer dominated. The malloc/free
cost was there but the performance difference was in the noise.

> If it's poor design in application code and the applications could be
> corrected, I think we should consider whether the right fix is on the
> application side.
> 

Could you look at v2 of the patch please? After discussions, I accept
that fixing this with a tricky heuristic is overkill. The second patch
just obeys the trim threshold for per-thread heaps, which is much simpler.
If an application is later identified that both requires the trim
threshold to perform well and is correctly implemented, then more
complex options can be considered.

Thanks

-- 
Mel Gorman
SUSE Labs
