This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/11261] malloc uses excessive memory for multi-threaded applications


------- Additional Comments From rich at testardi dot com  2010-02-10 15:52 -------
Last mail...

It turns out the arena_max and arena_test limits are "fuzzy" (by design, I am 
sure), since no lock is held here:

static mstate
internal_function
arena_get2(mstate a_tsd, size_t size)
{
  mstate a;
#ifdef PER_THREAD
  if (__builtin_expect (use_per_thread, 0)) {
    if ((a = get_free_list ()) == NULL
        && (a = reused_arena ()) == NULL)
      /* Nothing immediately available, so generate a new arena.  */
      a = _int_new_arena(size);
    return a;
  }
#endif

Therefore, if narenas is less than the limit tested for in reused_arena(), and 
N threads get into this code at once, narenas can then end up N-1 *above* the 
limit.  The likelihood of this happening is proportional to the malloc arrival 
rate and the time spent in _int_new_arena().

This is exactly what I am seeing.
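
To make the window concrete, here is a standalone toy sketch (my own 
illustration, not the glibc code) of the same unsynchronized check-then-create 
pattern: the limit test runs with no lock held, so every thread that passes 
the test before the first one finishes creating its arena creates one too, and 
the count overshoots the limit by up to N-1:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define LIMIT    2
#define NTHREADS 8

static int narenas = 1;                 /* the main arena already exists */
static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;

static void *
thread_fn (void *arg)
{
  (void) arg;
  /* The limit check runs with no lock held, as in arena_get2().  */
  if (narenas < LIMIT)
    {
      usleep (1000);                    /* stand-in for time spent in _int_new_arena() */
      pthread_mutex_lock (&list_lock);  /* the lock is only taken *after* the test */
      narenas++;
      pthread_mutex_unlock (&list_lock);
    }
  return NULL;
}

int
main (void)
{
  pthread_t t[NTHREADS];
  int i;

  for (i = 0; i < NTHREADS; i++)
    pthread_create (&t[i], NULL, thread_fn, NULL);
  for (i = 0; i < NTHREADS; i++)
    pthread_join (t[i], NULL);
  /* With all NTHREADS racing, this often prints LIMIT + NTHREADS - 1.  */
  printf ("narenas = %d (limit was %d)\n", narenas, LIMIT);
  return 0;
}

Compiled with -pthread, this usually prints a count well above LIMIT.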

So if you can live with 2 arenas, the critical thing to do is to make sure 
narenas is exactly 2 before going heavily multi-threaded, and then it won't be 
able to go above 2; otherwise, it can sneak up to 2+N-1, where N is the number 
of threads contending for allocations.
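
Here is a rough sketch of the kind of warm-up I mean (the names and numbers 
are just for illustration; it assumes a glibc with the per-thread arena code 
and something like MALLOC_ARENA_TEST=1 and MALLOC_ARENA_MAX=2 in the 
environment, so that 2 is both the test threshold and the hard limit):

#include <pthread.h>
#include <stdlib.h>

#define NWORKERS 16

/* Helper thread: its first malloc() creates the second arena while nothing
   else is racing; when the thread exits, that arena goes back on the free
   list for the workers to pick up.  */
static void *
create_second_arena (void *arg)
{
  (void) arg;
  free (malloc (64));          /* one trip through arena_get2() */
  return NULL;
}

/* Stand-in for the real multi-threaded workload.  */
static void *
worker (void *arg)
{
  int i;
  (void) arg;
  for (i = 0; i < 100000; i++)
    free (malloc (128));
  return NULL;
}

int
main (void)
{
  pthread_t warm, w[NWORKERS];
  int i;

  /* Warm-up: bring narenas to exactly 2 while effectively single-threaded.  */
  pthread_create (&warm, NULL, create_second_arena, NULL);
  pthread_join (warm, NULL);

  /* Only now go heavily multi-threaded.  */
  for (i = 0; i < NWORKERS; i++)
    pthread_create (&w[i], NULL, worker, NULL);
  for (i = 0; i < NWORKERS; i++)
    pthread_join (w[i], NULL);
  return 0;
}

The point is simply that the second arena gets created while only one thread 
is allocating, so the workers later find narenas already at the limit and go 
through get_free_list()/reused_arena() instead of racing into 
_int_new_arena().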

If the ">=" in reused_arena() was changed to ">", then we could use this 
mechanism to limit narenas to exactly 1 right from the get-go.  That would be 
ideal for our kind of applications (that can't live with 2 arenas).
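
Either way, a quick way to check what narenas actually ended up at is 
malloc_stats(), which prints one "Arena N:" block per arena to stderr, so 
counting those blocks at the end of a run shows whether the limit held:

#include <malloc.h>

int
main (void)
{
  /* ... run the multi-threaded workload here ... */

  /* One "Arena N:" block per arena; two blocks means narenas stayed at 2.  */
  malloc_stats ();
  return 0;
}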

-- 


http://sourceware.org/bugzilla/show_bug.cgi?id=11261


