This is the mail archive of the glibc-bugs@sourceware.org mailing list for the glibc project.



[Bug libc/11261] malloc uses excessive memory for multi-threaded applications


https://sourceware.org/bugzilla/show_bug.cgi?id=11261

Ondrej Bilka <neleai at seznam dot cz> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|RESOLVED                    |REOPENED
   Last reconfirmed|                            |2013-12-12
                 CC|                            |neleai at seznam dot cz
         Resolution|FIXED                       |---
     Ever confirmed|0                           |1

--- Comment #16 from Ondrej Bilka <neleai at seznam dot cz> ---
> Therefore the solution to a program with lots of threads is to limit the
> arenas as a trade-off for memory.

That is a band-aid, not a solution. Memory is still not returned to the system
when one first does many allocations and then frees most of them, keeping only
an auxiliary result, as in:
void *calculate (void)
{
  void **ary = malloc (1000000 * sizeof (void *));
  for (size_t i = 0; i < 1000000; i++)
    ary[i] = malloc (100);
  for (size_t i = 0; i < 999999; i++)
    free (ary[i]);
  void *last = ary[999999];
  free (ary);
  return last;
}

Once one acknowledges the bug, a solution is relatively simple. Add a flag
UNMAPPED for chunks, meaning that all pages completely contained in the chunk
were returned to the kernel by madvise (s, n, MADV_DONTNEED).

Keep track of the memory in use and the memory charged to the system; when
their ratio exceeds two, mark chunks UNMAPPED, starting from the largest ones,
to decrease the system charge.

This deals with the RSS problem. Virtual address space usage could still be
excessive, but that is a smaller problem.

-- 
You are receiving this mail because:
You are on the CC list for the bug.

