Bug 11044 - malloc dynamic mmap threshold causes 50%-100% increase in memory usage
Status: RESOLVED WONTFIX
Alias: None
Product: glibc
Classification: Unclassified
Component: libc
Version: unspecified
Importance: P2 normal
Target Milestone: ---
Assignee: Ulrich Drepper
 
Reported: 2009-12-02 18:31 UTC by Tom Geocaris
Modified: 2014-06-30 20:35 UTC

fweimer: security-



Description Tom Geocaris 2009-12-02 18:31:38 UTC
We recently compiled and ran our application on CentOS 5, which has glibc 2.5. We
found that many of our benchmarks and regression runs showed a 50-100% increase in
memory usage. We eventually pinpointed the problem to malloc, and after
examining the 2.5 malloc source code we found that malloc now adjusts the mmap
threshold dynamically. For our application, the dynamic adjustment does not
work. The end result is more fragmentation, and many large malloc requests
cause more and more memory to be allocated.

Our application is Place-And-Route for backend chip design. It is very common
for an invocation of our application to run for several days using up to 32GB of
memory (depending on the size of the chip, i.e., multi-million gates). During an
invocation of our application, many different algorithms are executed with
different memory usage patterns (many very large memory allocations and frees).

We relied on the old malloc behavior to mmap these requests, because this tended
to reduce the fragmentation.

My guess is that this malloc change is probably only now hitting people in the
field, because the Linux distributions use older versions of glibc. Again,
CentOS 5 uses 2.5.

I found some discussion of this change via Google, and there was some
questioning as to whether this change was valid. Again, in our application the
memory usage pattern varies greatly, and unless you have a good statistical model
of the memory usage pattern, it is highly unlikely that dynamically adjusting the
mmap threshold will get it right.

Because we have the ability to turn off the dynamic threshold (mallopt), we have
a workaround. But I thought it was best to give feedback on how this change
was impacting our application.

Tom Geocaris
Atoptech
Comment 1 Ulrich Drepper 2010-04-05 05:08:00 UTC
No malloc can work perfectly in all situations.  This is why mallopt exists.
So, use it.

Aside, you are not reporting anything about current releases.
Comment 2 Tom Geocaris 2010-04-05 16:10:01 UTC
Ulrich,

> No malloc can work perfectly in all situations....

That is the point. This change had no statistical basis proving that it improved
the malloc behavior on average (at least I'm not aware of any papers on the
subject).

> Aside, you are not reporting anything about current releases.

Correct me if I'm wrong, but this change is in the current release.

Regards,

Tom Geocaris
Comment 4 Ulrich Drepper 2010-04-05 18:34:28 UTC
(In reply to comment #2)
> That is the point. This change had no statistical basis proving that it improved
> the malloc behavior on average (at least I'm not aware of any papers on the
> subject).

Appropriate measurements were made.  We don't require "papers" for every change
made.  Go to academia if you want that.

The changes showed improvements and there are ways to turn the changes off.  You
cannot possibly ask for more.
Comment 5 Tom Geocaris 2010-04-07 17:09:19 UTC
Ulrich,

I filed this bug to show that, at least for our application, this change did not
result in improved performance. To the contrary, this change increased memory
fragmentation, and in some cases memory usage went from 32GB to 64GB when running
our application. Our application ran fine on Red Hat releases 3 and 4. When we
ported to release 5, memory usage increased without reason. From our standpoint
we would like the compute platform to be stable from one release to the next.

All I wanted to point out was that after reading the patch comments for this
change, such as,

+  The threshold goes up in value when the application frees memory that was
+  allocated with the mmap allocator. The idea is that once the application
+  starts freeing memory of a certain size, it's highly probable that this is
+  a size the application uses for transient allocations. This estimator
+  is there to satisfy the new third requirement. 

seems to me weakly justifiable. Some applications may exhibit this behavior;
however, our application does not. And if one is going to try to dynamically
adjust the mmap threshold, one should keep adjusting it over the
lifetime of the process and not clamp it so early, i.e., based upon the first
free...

Regards,

Tom Geocaris
