A little help with memory leak/management in glibc 2.3.2?
Bruce Korb
bruce.korb@gmail.com
Wed Apr 9 03:35:00 GMT 2008
On Tue, Apr 8, 2008 at 12:22 PM, Steve Munroe <sjmunroe@us.ibm.com> wrote:
> jairo19@interhosting.us wrote on 04/08/2008 01:41:07 PM:
>
> > Hello:
> >
> > This demo program I wrote shows how much memory the process is using (as
> > given by the kernel /proc/self/status interface) before and after I
> > request and free memory.
> >
> > So the issue is that as I request and free memory of different sizes,
> > the process does not seem to relinquish the full amount it was given.
> > My real-world program loses roughly 100 KB on almost every memory
> > request/release cycle, and it needs to do this 500000 times, so you
> > can imagine that I am running out of memory, even though the program
> > frees all the memory it asks for. In both cases valgrind does not
> > report any leaks.
> >
> >
>
> Working as designed. You can read about malloc's internal design by reading
> the comments in the source:
>
> http://sources.redhat.com/cgi-bin/cvsweb.cgi/libc/malloc/malloc.c?cvsroot=glibc
>
> You may be able to adjust things more to your liking using the mallopt
> interface:
>
> http://www.gnu.org/software/libc/manual/html_mono/libc.html#Malloc-Tunable-Parameters
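(For the original question, a minimal sketch of what measuring and tuning
might look like -- the threshold values are only illustrative, not
recommendations, and the VmSize parsing assumes Linux's /proc/self/status
format:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <malloc.h>

/* print the VmSize line from /proc/self/status (Linux-specific) */
static void show_vmsize(const char *tag)
{
    FILE *f = fopen("/proc/self/status", "r");
    char line[128];

    if (f == NULL)
        return;
    while (fgets(line, sizeof line, f) != NULL)
        if (strncmp(line, "VmSize", 6) == 0)
            printf("%s: %s", tag, line);
    fclose(f);
}

int main(void)
{
    void *p;

    /* illustrative values only: trim freed space back to the kernel
       sooner, and hand large blocks to mmap so free() really unmaps them */
    mallopt(M_TRIM_THRESHOLD, 128 * 1024);
    mallopt(M_MMAP_THRESHOLD, 64 * 1024);

    show_vmsize("before");
    p = malloc(512 * 1024);
    show_vmsize("after malloc");
    free(p);
    malloc_trim(0);        /* ask glibc to give back free heap space now */
    show_vmsize("after free");
    return 0;
}
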
Despite this being a development list, I also am having a little problem.
Every so often another 128K gets anonymously mapped into my address space,
along with an extra 896K mapped as neither readable nor writable. It would
not be so bothersome, except that eventually allocations fail. And, no, it
is not because I am failing to free() the space. The memory appears to be
filled with some sort of malloc pointer spaghetti:
(gdb) x/64x 0x15e00000
0x15e00000: 0x15e00010 0x00000000 0x00021000 0x00000000
0x15e00010: 0x00000000 0x00000000 0x00000000 0x00000000
0x15e00020: 0x00000000 0x00000000 0x00000000 0x00000000
0x15e00030: 0x00000000 0x00000000 0x0000004a 0x00000000
0x15e00040: 0x00000000 0x00000000 0x00000000 0x00000000
0x15e00050: 0x00000000 0x15e004c8 0x00000000 0x00000000
0x15e00060: 0x00000000 0x15e00588 0x15e00590 0x00000000
0x15e00070: 0x00000000 0x15e0006c 0x15e0006c 0x15e00074
0x15e00080: 0x15e00074 0x15e0007c 0x15e0007c 0x15e00084
0x15e00090: 0x15e00084 0x15e0008c 0x15e0008c 0x15e00094
0x15e000a0: 0x15e00094 0x15e0009c 0x15e0009c 0x15e000a4
0x15e000b0: 0x15e000a4 0x15e000ac 0x15e000ac 0x15e000b4
0x15e000c0: 0x15e000b4 0x15e000bc 0x15e000bc 0x15e000c4
0x15e000d0: 0x15e000c4 0x15e000cc 0x15e000cc 0x15e000d4
0x15e000e0: 0x15e000d4 0x15e000dc 0x15e000dc 0x15e000e4
0x15e000f0: 0x15e000e4 0x15e000ec 0x15e000ec 0x15e000f4
Each of the 20-odd 128K segments looks the same; only the "0x15e" prefix
changes.
1. Is this a known bug in 2.3.2?
2. Where does this come from? (My guess is sketched below.)
3. How can I work around it? The eventual allocation failure causes my
   program to roll over. (Very inconvenient!)
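My guess for question 2 (only a guess, from a quick read of the arena
code; the constants below are assumptions on my part, not verified against
2.3.2) is that each non-main heap is reserved whole and only the used
prefix is made accessible, roughly like this:

#include <sys/mman.h>

#define HEAP_MAX_SIZE  (1024 * 1024)  /* whole reservation: 1 MB           */
#define INITIAL_SIZE   (128 * 1024)   /* only this much is made accessible */

/* sketch of the mapping pattern, not glibc's actual code */
static void *reserve_heap(void)
{
    /* reserve the full 1 MB with no access rights ... */
    char *p = mmap(0, HEAP_MAX_SIZE, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (p == MAP_FAILED)
        return 0;
    /* ... then enable only the first 128K, leaving the remaining 896K
       mapped but unreadable and unwritable -- the pattern in my core */
    mprotect(p, INITIAL_SIZE, PROT_READ | PROT_WRITE);
    return p;
}

That would explain the 896K of dead space per segment, though not why so
many segments get created in the first place.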
Any hints? Thank you in advance.
Regards, Bruce
Oh, in this particular core, "0x15e" is the first such prefix and "0x174"
is the last:
(gdb) x/64x 0x17400000
0x17400000: 0x17400010 0x00000000 0x00021000 0x00000000
0x17400010: 0x00000000 0x00000000 0x00000000 0x00000000
0x17400020: 0x00000000 0x00000000 0x00000000 0x00000000
0x17400030: 0x00000000 0x00000000 0x0000004a 0x00000000
0x17400040: 0x00000000 0x17400698 0x174005f0 0x00000000
0x17400050: 0x00000000 0x17400570 0x00000000 0x00000000
0x17400060: 0x00000000 0x17400a10 0x17400618 0x00000000
0x17400070: 0x00000000 0x17400758 0x17400618 0x17400074
0x17400080: 0x17400074 0x1740007c 0x1740007c 0x17400084
0x17400090: 0x17400084 0x1740008c 0x1740008c 0x17400094
Then the program faulted because the virtual address space was used up
(including all the unusable 896K regions, about 20 MB wasted):
msg = malloc(buf_size);
memcpy(msg, fixed_buffer, msg_size);
(gdb) p msg
$1 = (eaipc_msg_t *) 0x0
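For completeness, the crash itself could at least be made less mysterious
with a check on the malloc() return. A sketch only (the types of buf_size
and friends are mine, so the cast is illustrative), and it does nothing
about the underlying exhaustion:

msg = malloc(buf_size);
if (msg == NULL) {
    /* out of address space: fail loudly here instead of letting
       memcpy() write through a null pointer */
    fprintf(stderr, "malloc(%lu) failed\n", (unsigned long) buf_size);
    abort();
}
memcpy(msg, fixed_buffer, msg_size);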